Consistent initial conditions for the Saint-Venant equations in river network modeling
Yu, Cheng-Wei; Liu, Frank; Hodges, Ben R.
2017-09-01
Initial conditions for flows and depths (cross-sectional areas) throughout a river network are required for any time-marching (unsteady) solution of the one-dimensional (1-D) hydrodynamic Saint-Venant equations. For a river network modeled with several Strahler orders of tributaries, comprehensive and consistent synoptic data are typically lacking and synthetic starting conditions are needed. Because of underlying nonlinearity, poorly defined or inconsistent initial conditions can lead to convergence problems and long spin-up times in an unsteady solver. Two new approaches are defined and demonstrated herein for computing flows and cross-sectional areas (or depths). These methods can produce an initial condition data set that is consistent with modeled landscape runoff and river geometry boundary conditions at the initial time. These new methods are (1) the pseudo time-marching method (PTM) that iterates toward a steady-state initial condition using an unsteady Saint-Venant solver and (2) the steady-solution method (SSM) that makes use of graph theory for initial flow rates and solution of a steady-state 1-D momentum equation for the channel cross-sectional areas. The PTM is shown to be adequate for short river reaches but is significantly slower and has occasional non-convergent behavior for large river networks. The SSM approach is shown to provide a rapid solution of consistent initial conditions for both small and large networks, albeit with the requirement that additional code must be written rather than applying an existing unsteady Saint-Venant solver.
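The graph-theory step of the SSM — assigning steady initial flow rates before solving the steady momentum equation for cross-sectional areas — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: at steady state, continuity requires each reach's discharge to equal its local runoff plus the discharges of all upstream reaches, which a single topological (upstream-to-downstream) traversal accumulates. The network layout and variable names are hypothetical.

```python
from collections import defaultdict, deque

def initial_flows(downstream, runoff):
    """Steady-state initial discharge for each reach of a river network.

    downstream: dict mapping reach id -> downstream reach id (None at outlet)
    runoff:     dict mapping reach id -> lateral runoff inflow [m^3/s]
    """
    indegree = defaultdict(int)
    for reach, d in downstream.items():
        if d is not None:
            indegree[d] += 1
    flow = dict(runoff)
    # Headwater reaches have no upstream contributors
    queue = deque(r for r in downstream if indegree[r] == 0)
    while queue:
        r = queue.popleft()
        d = downstream[r]
        if d is not None:
            flow[d] += flow[r]          # pass accumulated flow downstream
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    return flow

# Two headwater reaches A, B joining into outlet reach C
net = {"A": "C", "B": "C", "C": None}
q = initial_flows(net, {"A": 1.0, "B": 2.0, "C": 0.5})
```

With these flows fixed, a steady 1-D momentum solve would then supply consistent cross-sectional areas reach by reach.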
Self-consistent modeling of entangled network strands and dangling ends
DEFF Research Database (Denmark)
Jensen, Mette Krog; Schieber, Jay D.; Khaliullin, Renat N.
2009-01-01
We seek knowledge about the effect of dangling ends and soluble structures in stoichiometrically imbalanced networks. To interpret our recent experimental results we seek a molecular model that can predict LVE data. The discrete slip-link model (DSM) has proven to be a robust...... tool for LVE and non-linear rheology predictions for linear chains, and it is thus used to analyze the experimental results. We divide the LVE predictions into three domains: 1) the low frequency region, where G' is a plateau, G0; 2) the intermediate frequency region, where G' and G'' are parallel...... and 3) the high frequency region, where G' levels off to an entanglement plateau, GN0, close to that of the linear polymer. The latter region is seldom obtained in experiments, while it is obtained in simulations since these start at zero time. Initially we consider a stoichiometrically balanced network...
Consistent dust and gas models for protoplanetary disks. II. Chemical networks and rates
Kamp, I.; Thi, W.-F.; Woitke, P.; Rab, C.; Bouma, S.; Ménard, F.
2017-11-01
Aims: We aim to define a small and a large chemical network which can be used for the quantitative simultaneous analysis of molecular emission from the near-IR to the submm. We also aim to revise reactions of excited molecular hydrogen, which are not included in UMIST, to provide a homogeneous database for future applications. Methods: We have used the thermo-chemical disk modeling code ProDiMo and a standard T Tauri disk model to evaluate the impact of various chemical networks, reaction rate databases and sets of adsorption energies on a large sample of chemical species and emerging line fluxes from the near-IR to the submm wavelength range. Results: We find large differences in the masses and radial distribution of ice reservoirs when considering freeze-out on bare or polar ice-coated grains. The ammonia ice mass and the location of the (water) snow line change most strongly. As a consequence, molecules associated with the ice lines such as N2H+ change their emitting region; none of the line fluxes in the sample considered here changes by more than 25% except CO isotopologues, CN and N2H+ lines. The three-body reaction N+H2+M plays a key role in the formation of water in the outer disk. Beyond that, differences between the UMIST 2006 and 2012 databases change line fluxes in the sample considered here by less than a factor of two (a subset of low excitation CO and fine structure lines even stays within 25%); exceptions are OH, CN, HCN, HCO+ and N2H+ lines. However, different networks such as OSU and KIDA 2011 lead to pronounced differences in the chemistry inside 100 au and thus affect high excitation CO, OH and CN emission lines. H2 is easily excited at the disk surface, and state-to-state reactions enhance the abundance of CH+ and, to a lesser extent, HCO+. For sub-mm lines of HCN, N2H+ and HCO+, a more complex larger network is recommended. Conclusions: More work is required to consolidate data on key reactions leading to the formation of water, molecular
Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel
2017-06-01
Terrestrial laser scanners (TLS) are increasingly used in large-scale manufacturing and assembly where required measurement uncertainties are on the order of a few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in situ through self-calibration. In the network method of self-calibration, numerous targets distributed in the work volume are measured from multiple locations with the TLS to determine the parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the two-face method and the length-consistency method. The length-consistency method is proposed as a more efficient way of realizing the network method, where the lengths between pairs of targets measured from multiple TLS positions are compared to determine TLS model parameters. The two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the length-consistency method. We compare the two-face method, the length-consistency method, and the network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face back-face target measurements. The clear advantage of these self-calibration methods is that a reference instrument or calibrated artifacts are not required, thus significantly lowering the cost of the calibration process.
Fission gas bubble percolation on crystallographically consistent grain boundary networks
Energy Technology Data Exchange (ETDEWEB)
Sabogal-Suárez, Daniel; Alzate-Cardona, Juan David, E-mail: jdalzatec@unal.edu.co; Restrepo-Parra, Elisabeth
2016-07-15
Fission gas release in nuclear fuels can be modeled in the framework of percolation theory, where each grain boundary is classified as open or closed to the release of the fission gas. In the present work, two-dimensional grain boundary networks were assembled both at random and in a crystallographically consistent manner resembling a general textured microstructure. In the crystallographically consistent networks, grain boundaries were classified according to their misorientation. The percolation behavior of the grain boundary networks was evaluated as a function of radial cracks and radial thermal gradients in the fuel pellet. Percolation thresholds tend to shift to the left with increasing length and number of cracks, especially in the presence of thermal gradients. In general, the topology and percolation behavior of the crystallographically consistent networks differ from those of the random network. - Highlights: • Fission gas release in nuclear fuels was studied in the framework of percolation theory. • The nuclear fuel cross-section microstructure was modeled through grain boundary networks. • The grain boundaries were classified randomly or according to their crystallography. • Differences in topology and percolation behavior for both kinds of networks were determined.
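The open/closed classification described above can be illustrated with a minimal bond-percolation sketch (not the authors' model): each boundary is open with some probability, and a union-find structure checks whether the open boundaries form a spanning path. The square-grid layout, probabilities, and sample counts below are illustrative assumptions.

```python
import random

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def percolates(n, p_open, rng):
    """Does a cluster of open bonds span an n x n site grid top to bottom?

    Each bond between neighbouring sites is independently open with
    probability p_open (an open grain boundary in the percolation analogy).
    Virtual top/bottom nodes make the spanning check a single find().
    """
    top, bottom = n * n, n * n + 1
    parent = list(range(n * n + 2))
    for i in range(n):
        for j in range(n):
            site = i * n + j
            if i == 0:
                union(parent, top, site)
            if i == n - 1:
                union(parent, bottom, site)
            if j + 1 < n and rng.random() < p_open:
                union(parent, site, site + 1)   # horizontal bond
            if i + 1 < n and rng.random() < p_open:
                union(parent, site, site + n)   # vertical bond
    return find(parent, top) == find(parent, bottom)

rng = random.Random(0)
# Spanning fraction well below vs. well above the 2D bond threshold (~0.5)
low = sum(percolates(20, 0.2, rng) for _ in range(50)) / 50
high = sum(percolates(20, 0.8, rng) for _ in range(50)) / 50
```

Sweeping `p_open` and locating where the spanning fraction jumps gives a percolation threshold; biasing which bonds are open (e.g. by misorientation class or along cracks) shifts that threshold, as the abstract reports.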
Consistent ranking of volatility models
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger
2006-01-01
result in an inferior model being chosen as "best" with a probability that converges to one as the sample size increases. We document the practical relevance of this problem in an empirical application and by simulation experiments. Our results provide an additional argument for using the realized...
Modeling and Testing Legacy Data Consistency Requirements
DEFF Research Database (Denmark)
Nytun, J. P.; Jensen, Christian Søndergaard
2003-01-01
This paper addresses the need for new techniques that enable the modeling and consistency checking for legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its...... accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...
Path lengths in tree-child time consistent hybridization networks
Cardona, Gabriel; Rossello, Francesc; Valiente, Gabriel
2008-01-01
Hybridization networks are representations of evolutionary histories that allow for the inclusion of reticulate events like recombinations, hybridizations, or lateral gene transfers. The recent growth in the number of hybridization network reconstruction algorithms has led to an increasing interest in the definition of metrics for their comparison that can be used to assess the accuracy or robustness of these methods. In this paper we establish some basic results that make it possible to generalize to tree-child time consistent (TCTC) hybridization networks some of the oldest known metrics for phylogenetic trees: those based on the comparison of the vectors of path lengths between leaves. More specifically, we associate to each hybridization network a suitably defined vector of 'splitted' path lengths between its leaves, and we prove that if two TCTC hybridization networks have the same such vectors, then they must be isomorphic. Thus, comparing these vectors by means of a metric for real-valued vecto...
Developing consistent pronunciation models for phonemic variants
CSIR Research Space (South Africa)
Davel, M
2006-09-01
Full Text Available from a lexicon containing variants. In this paper we address both these issues by creating 'pseudo-phonemes' associated with sets of 'generation restriction rules' to model those pronunciations that are consistently realised as two or more...
Consistent Stochastic Modelling of Meteocean Design Parameters
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Sterndorff, M. J.
2000-01-01
...... velocity, and water level is presented. The stochastic model includes statistical uncertainty and dependency between the four stochastic variables. Further, a new stochastic model for annual maximum directional significant wave heights is presented. The model includes dependency between the maximum wave height from neighboring directional sectors. Numerical examples are presented where the models are calibrated using the Maximum Likelihood method to data from the central part of the North Sea. The calibration of the directional distributions is made such that the stochastic model for the omnidirectional maximum wave height is statistically consistent with the directional distribution functions. Finally, it is shown how the stochastic models can be used to estimate characteristic values and in reliability assessment of offshore structures.
Thermodynamically consistent model calibration in chemical kinetics
Directory of Open Access Journals (Sweden)
Goutsias John
2011-05-01
Full Text Available Abstract Background The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results We introduce a thermodynamically consistent model calibration (TCMC method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new
Decentralized Consistent Network Updates in SDN with ez-Segway
Nguyen, Thanh Dang
2017-03-06
We present ez-Segway, a decentralized mechanism to consistently and quickly update the network state while preventing forwarding anomalies (loops and black-holes) and avoiding link congestion. In our design, the centralized SDN controller only pre-computes information needed by the switches during the update execution. This information is distributed to the switches, which use partial knowledge and direct message passing to efficiently realize the update. This separation of concerns has the key benefit of improving update performance as the communication and computation bottlenecks at the controller are removed. Our evaluations via network emulations and large-scale simulations demonstrate the efficiency of ez-Segway, which compared to a centralized approach, improves network update times by up to 45% and 57% at the median and the 99th percentile, respectively. A deployment of a system prototype in a real OpenFlow switch and an implementation in P4 demonstrate the feasibility and low overhead of implementing simple network update functionality within switches.
Stable functional networks exhibit consistent timing in the human brain.
Chapeton, Julio I; Inati, Sara K; Zaghloul, Kareem A
2017-03-01
Despite many advances in the study of large-scale human functional networks, the question of timing, stability, and direction of communication between cortical regions has not been fully addressed. At the cellular level, neuronal communication occurs through axons and dendrites, and the time required for such communication is well defined and preserved. At larger spatial scales, however, the relationship between timing, direction, and communication between brain regions is less clear. Here, we use a measure of effective connectivity to identify connections between brain regions that exhibit communication with consistent timing. We hypothesized that if two brain regions are communicating, then knowledge of the activity in one region should allow an external observer to better predict activity in the other region, and that such communication involves a consistent time delay. We examine this question using intracranial electroencephalography captured from nine human participants with medically refractory epilepsy. We use a coupling measure based on time-lagged mutual information to identify effective connections between brain regions that exhibit a statistically significant increase in average mutual information at a consistent time delay. These identified connections result in sparse, directed functional networks that are stable over minutes, hours, and days. Notably, the time delays associated with these connections are also highly preserved over multiple time scales. We characterize the anatomic locations of these connections, and find that the propagation of activity exhibits a preferred posterior to anterior temporal lobe direction, consistent across participants. Moreover, networks constructed from connections that reliably exhibit consistent timing between anatomic regions demonstrate features of a small-world architecture, with many reliable connections between anatomically neighbouring regions and few long range connections. Together, our results demonstrate
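The coupling measure described, time-lagged mutual information with a consistent delay, can be sketched as follows. This is a simplified illustration, not the authors' pipeline: a histogram MI estimator is scanned over candidate lags and the lag maximizing I(x_t; y_{t+lag}) is reported. The bin count, signal length, and synthetic delayed signal are assumptions.

```python
import math
import random

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X;Y) in bits for two equal-length sequences."""
    n = len(x)
    def binned(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / bins or 1.0
        return [min(int((s - lo) / w), bins - 1) for s in v]
    bx, by = binned(x), binned(y)
    pxy, px, py = {}, {}, {}
    for a, b in zip(bx, by):
        pxy[(a, b)] = pxy.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    mi = 0.0
    for (a, b), c in pxy.items():
        p = c / n
        # p * log2( p_xy / (p_x * p_y) ), with counts converted to probabilities
        mi += p * math.log2(p * n * n / (px[a] * py[b]))
    return mi

def best_lag(x, y, max_lag):
    """Lag at which knowledge of x best predicts y: argmax of I(x_t; y_{t+lag})."""
    scores = {lag: mutual_information(x[:len(x) - lag], y[lag:])
              for lag in range(1, max_lag + 1)}
    return max(scores, key=scores.get)

rng = random.Random(1)
x = [rng.random() for _ in range(2000)]
y = [0.0, 0.0, 0.0] + x[:-3]   # y is x delayed by 3 samples
```

A connection would be declared effective only if the MI peak at the preferred lag is statistically significant against shuffled surrogates, mirroring the significance test described in the abstract.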
Consistent quadrupole-octupole collective model
Dobrowolski, A.; Mazurek, K.; Góźdź, A.
2016-11-01
Within this work we present a consistent approach to quadrupole-octupole collective vibrations coupled with the rotational motion. A realistic collective Hamiltonian with variable mass-parameter tensor and potential, obtained through the macroscopic-microscopic Strutinsky-like method with a particle-number-projected BCS (Bardeen-Cooper-Schrieffer) approach in the full vibrational and rotational, nine-dimensional collective space, is diagonalized in the basis of projected harmonic oscillator eigensolutions. This orthogonal basis of zero-, one-, two-, and three-phonon oscillator-like functions in the vibrational part, coupled with the corresponding Wigner function, is, in addition, symmetrized with respect to the so-called symmetrization group, appropriate to the collective space of the model. In the present model it is the D4 group acting in the body-fixed frame. This symmetrization procedure is applied in order to ensure the uniqueness of the Hamiltonian eigensolutions with respect to the laboratory coordinate system. The symmetrization is obtained using the projection onto the irreducible representation technique. The model generates the quadrupole ground-state spectrum as well as the lowest negative-parity spectrum in the 156Gd nucleus. The interband and intraband B(E1) and B(E2) reduced transition probabilities are also calculated within those bands and compared with recent experimental results for this nucleus. Such a collective approach is helpful in searching for fingerprints of possible high-rank symmetries (e.g., octahedral and tetrahedral) in nuclear collective bands.
Using Bayesian Networks for Candidate Generation in Consistency-based Diagnosis
Narasimhan, Sriram; Mengshoel, Ole
2008-01-01
Consistency-based diagnosis relies heavily on the assumption that discrepancies between model predictions and sensor observations can be detected accurately. When sources of uncertainty like sensor noise and model abstraction exist, robust schemes have to be designed to make a binary decision on whether predictions are consistent with observations. This risks the occurrence of false alarms and missed alarms when an erroneous decision is made. Moreover, when multiple sensors (with differing sensing properties) are available, the degree of match between predictions and observations can be used to guide the search for fault candidates. In this paper we propose a novel approach to handling this problem using Bayesian networks. In the consistency-based diagnosis formulation, automatically generated Bayesian networks are used to encode a probabilistic measure of fit between predictions and observations. A Bayesian network inference algorithm is then used to compute the most probable fault candidates.
Collaborative networks: Reference modeling
Camarinha-Matos, L.M.; Afsarmanesh, H.
2008-01-01
Collaborative Networks: Reference Modeling works to establish a theoretical foundation for Collaborative Networks. Particular emphasis is put on modeling multiple facets of collaborative networks and establishing a comprehensive modeling framework that captures and structures diverse perspectives of
Rubber elasticity for percolation network consisting of Gaussian chains
Energy Technology Data Exchange (ETDEWEB)
Nishi, Kengo, E-mail: kengo.nishi@phys.uni-goettingen.de; Noguchi, Hiroshi; Shibayama, Mitsuhiro, E-mail: sibayama@issp.u-tokyo.ac.jp [Institute for Solid State Physics, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8581 (Japan)]; Sakai, Takamasa, E-mail: sakai@tetrapod.t.u-tokyo.ac.jp [Department of Bioengineering, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)]
2015-11-14
A theory describing the elastic modulus of percolation networks of Gaussian chains on general lattices such as square and cubic lattices is proposed, and its validity is examined with simulations and mechanical experiments on well-defined polymer networks. The theory was developed by generalizing the effective medium approximation (EMA) for Hookean spring networks to Gaussian chain networks. From EMA theory, we found that the ratio of the elastic modulus at p, G, to that at p = 1, G0, must be equal to G/G0 = (p − 2/f)/(1 − 2/f) if the positions of sites can be determined so as to meet the force balance, where p is the degree of the cross-linking reaction. However, the EMA prediction is not applicable near the percolation threshold because EMA is a mean field theory. Thus, we combine real-space renormalization and EMA and propose a theory called real-space renormalized EMA, i.e., REMA. The elastic modulus predicted by REMA is in excellent agreement with the results of simulations and experiments on near-ideal diamond lattice gels.
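The quoted EMA prediction is a one-line formula; a minimal sketch (function and variable names assumed) evaluates G/G0 for a network of cross-link functionality f at cross-linking degree p, and shows that the modulus vanishes at the EMA threshold p = 2/f.

```python
def ema_modulus_ratio(p, f):
    """EMA prediction G/G0 = (p - 2/f) / (1 - 2/f) for a Gaussian-chain
    network whose cross-links have functionality f, at reaction conversion p.

    At p = 1 the ratio is 1 (fully reacted network); it falls linearly to 0
    at p = 2/f, the EMA estimate of the percolation threshold.
    """
    return (p - 2.0 / f) / (1.0 - 2.0 / f)

# Tetrafunctional network (f = 4, e.g. a diamond-lattice gel)
full = ema_modulus_ratio(1.0, 4)   # fully cross-linked
gone = ema_modulus_ratio(0.5, 4)   # at the EMA threshold p = 2/f = 0.5
```

As the abstract notes, this mean-field ratio is reliable away from the threshold; near p = 2/f the REMA correction is needed.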
Mechanical behavior in living cells consistent with the tensegrity model
Wang, N.; Naruse, K.; Stamenovic, D.; Fredberg, J. J.; Mijailovich, S. M.; Tolic-Norrelykke, I. M.; Polte, T.; Mannix, R.; Ingber, D. E.
2001-01-01
Alternative models of cell mechanics depict the living cell as a simple mechanical continuum, porous filament gel, tensed cortical membrane, or tensegrity network that maintains a stabilizing prestress through incorporation of discrete structural elements that bear compression. Real-time microscopic analysis of cells containing GFP-labeled microtubules and associated mitochondria revealed that living cells behave like discrete structures composed of an interconnected network of actin microfilaments and microtubules when mechanical stresses are applied to cell surface integrin receptors. Quantitation of cell tractional forces and cellular prestress by using traction force microscopy confirmed that microtubules bear compression and are responsible for a significant portion of the cytoskeletal prestress that determines cell shape stability under conditions in which myosin light chain phosphorylation and intracellular calcium remained unchanged. Quantitative measurements of both static and dynamic mechanical behaviors in cells also were consistent with specific a priori predictions of the tensegrity model. These findings suggest that tensegrity represents a unified model of cell mechanics that may help to explain how mechanical behaviors emerge through collective interactions among different cytoskeletal filaments and extracellular adhesions in living cells.
A Multilayer Model of Computer Networks
Shchurov, Andrey A.
2015-01-01
The fundamental concept of applying the system methodology to network analysis declares that network architecture should take into account services and applications which this network provides and supports. This work introduces a formal model of computer networks on the basis of the hierarchical multilayer networks. In turn, individual layers are represented as multiplex networks. The concept of layered networks provides conditions of top-down consistency of the model. Next, we determined the...
Modeling the citation network by network cosmology.
Xie, Zheng; Ouyang, Zhenzheng; Zhang, Pengyuan; Yi, Dongyun; Kong, Dexing
2015-01-01
Citation between papers can be treated as a causal relationship. In addition, some citation networks have a number of similarities to the causal networks in network cosmology, e.g., similar in- and out-degree distributions. Hence, it is possible to model the citation network using network cosmology. The causal network models built on homogeneous spacetimes have some restrictions when describing some phenomena in citation networks, e.g., hot papers receive more citations than other simultaneously published papers. We propose an inhomogeneous causal network model for the citation network, whose connection mechanism expresses some features of citation well. The node growth trend and degree distributions of the generated networks also fit those of some citation networks well.
DEFF Research Database (Denmark)
Andersen, Kasper Winther
Three main topics are presented in this thesis. The first and largest topic concerns network modelling of functional Magnetic Resonance Imaging (fMRI) and Diffusion Weighted Imaging (DWI). In particular nonparametric Bayesian methods are used to model brain networks derived from resting state f...... for their ability to reproduce node clustering and predict unseen data. Comparing the models on whole brain networks, BCD and IRM showed better reproducibility and predictability than IDM, suggesting that resting state networks exhibit community structure. This also points to the importance of using models, which...... allow for complex interactions between all pairs of clusters. In addition, it is demonstrated how the IRM can be used for segmenting brain structures into functionally coherent clusters. A new nonparametric Bayesian network model is presented. The model builds upon the IRM and can be used to infer...
Consistency in use through model based user interface development
Trapp, M.; Schmettow, M.
2006-01-01
In dynamic environments envisioned under the concept of Ambient Intelligence, the consistency of user interfaces is of particular importance. To achieve this, the variability of the environment has to be transformed into a coherent user experience. In this paper we explain several dimensions of consistency and present our ideas and recent results on achieving adaptive and consistent user interfaces by exploiting the technology of model-driven user interface development.
Artificial neural network modelling
Samarasinghe, Sandhya
2016-01-01
This book covers theoretical aspects as well as recent innovative applications of Artificial Neural Networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity, 2) Advances in Modelling Biological and Environmental Systems and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.
Modeling network technology deployment rates with different network models
Chung, Yoo
2011-01-01
To understand the factors that encourage the deployment of a new networking technology, we must be able to model how such technology gets deployed. We investigate how network structure influences deployment with a simple deployment model and different network models through computer simulations. The results indicate that a realistic model of networking technology deployment should take network structure into account.
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Kokholm, Thomas
We propose and study a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index, allowing options on forward variance swaps and options on the underlying index to be priced consistently. Our model reproduces various empirically ...
Standard Model Vacuum Stability and Weyl Consistency Conditions
DEFF Research Database (Denmark)
Antipin, Oleg; Gillioz, Marc; Krog, Jens
2013-01-01
At high energy the standard model possesses conformal symmetry at the classical level. This is reflected at the quantum level by relations between the different beta functions of the model. These relations are known as the Weyl consistency conditions. We show that it is possible to satisfy them...... order by order in perturbation theory, provided that a suitable coupling constant counting scheme is used. As a direct phenomenological application, we study the stability of the standard model vacuum at high energies and compare with previous computations violating the Weyl consistency conditions....
Thermodynamic consistency and fast dynamics in phase field crystal modeling
Cheng, Mowei; Cottenier, Stefaan; Emmerich, Heike
2008-01-01
A general formulation is presented to derive the equation of motion and to demonstrate thermodynamic consistency for several classes of phase field models at once. It applies to models with a conserved phase field, describing either uniform or periodic stable states, and containing slow as well as fast thermodynamic variables. The approach is based on an entropy functional formalism previously developed in the context of phase field models for uniform states [P. Galenko and D. Jou, Phys. Rev....
Modeling a Consistent Behavior of PLC-Sensors
Directory of Open Access Journals (Sweden)
E. V. Kuzmin
2014-01-01
Full Text Available The article extends a cycle of papers dedicated to programming and verification of PLC programs by LTL specification. This approach provides for correctness analysis of PLC programs by the model checking method. The model checking method requires the construction of a finite model of a PLC program. For successful verification of the required properties, it is important to take into consideration that not all combinations of input signals from the sensors can occur while the PLC works with a control object. This fact requires closer attention to the construction of the PLC program model. In this paper we propose to describe a consistent behavior of sensors by three groups of LTL formulas. They will affect the program model, approximating it to the actual behavior of the PLC program. The idea of the LTL requirements is shown by an example. A PLC program is a description of reactions to input signals from sensors, switches and buttons. In constructing a PLC program model, the approach to modeling a consistent behavior of PLC sensors allows one to focus on modeling precisely these reactions without extending the program model by additional structures for the realization of realistic sensor behavior. The consistent behavior of sensors is taken into account only at the stage of checking conformity of the program model to the required properties, i.e., a property satisfaction proof for the constructed model occurs under the condition that the model contains only such executions of the program as comply with the consistent behavior of sensors.
Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics
Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro; Tartakovsky, Alexandre M.; Parks, Michael L.
2017-04-01
We present a consistent implicit incompressible smoothed particle hydrodynamics (I2SPH) discretization of the Navier-Stokes, Poisson-Boltzmann, and advection-diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two- and three-dimensional electrokinetic flows in simple or complex geometries. The accuracy and convergence of the consistent I2SPH are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. The new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrary complex geometries.
Global Consistency Management Methods Based on Escrow Approaches in Mobile ad Hoc Networks
Directory of Open Access Journals (Sweden)
Takahiro Hara
2010-01-01
Full Text Available In a mobile ad hoc network, consistency management of data operations on replicas is a crucial issue for system performance. In our previous work, we classified several primitive consistency levels according to the requirements from applications and provided protocols to realize them. In this paper, we assume special types of applications in which the instances of each data item can be partitioned and propose two consistency management protocols which are combinations of an escrow method and our previously proposed protocols. We also report simulation results to investigate the characteristics of these protocols in a mobile ad hoc network. From the simulation results, we confirm that the protocols proposed in this paper drastically improve data availability and reduce the traffic for data operations while maintaining the global consistency in the entire network.
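The escrow idea above, partitioning the instances of a divisible data item among hosts so that each can operate on its local share without global coordination, can be sketched as follows. This is an illustrative sketch only: the class and method names are invented here, and the paper's actual protocols combine escrow with its earlier replica-consistency protocols.

```python
class EscrowItem:
    """Escrow-style partitioning of a divisible data item.

    The item's instances (e.g. a ticket stock) are split among mobile
    hosts; each host may consume from its local quota without contacting
    the others, so global consistency holds by construction. The
    redistribution step of a real protocol is omitted.
    """

    def __init__(self, total, hosts):
        # split `total` instances as evenly as possible among hosts
        share, extra = divmod(total, len(hosts))
        self.quota = {h: share + (1 if i < extra else 0)
                      for i, h in enumerate(hosts)}

    def consume(self, host, n):
        """Try to take n instances from the host's local escrow quota."""
        if self.quota[host] >= n:
            self.quota[host] -= n
            return True
        return False  # would require redistribution from peer hosts

    def remaining(self):
        return sum(self.quota.values())


item = EscrowItem(total=10, hosts=["A", "B"])
print(item.consume("A", 3))   # True: A's local quota of 5 covers it
print(item.remaining())       # 7
```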
Modeling Epidemic Network Failures
DEFF Research Database (Denmark)
Ruepp, Sarah Renée; Fagertun, Anna Manolova
2013-01-01
This paper presents the implementation of a failure propagation model for transport networks when multiple failures occur, resulting in an epidemic. We implement the Susceptible-Infected-Disabled (SID) epidemic model and validate it by comparing it to analytical solutions. Furthermore, we evaluate...
Consistency of network modules in resting-state FMRI connectome data.
Directory of Open Access Journals (Sweden)
Malaak N Moussa
Full Text Available At rest, spontaneous brain activity measured by fMRI is summarized by a number of distinct resting state networks (RSNs) following similar time courses. Such networks have been consistently identified across subjects using spatial independent component analysis (ICA). Moreover, graph theory-based network analyses have also been applied to resting-state fMRI data, identifying similar RSNs, although typically at a coarser spatial resolution. In this work, we examined resting-state fMRI networks from 194 subjects at voxel-level resolution, and examined the consistency of RSNs across subjects using a metric called scaled inclusivity (SI), which summarizes the consistency of modular partitions across networks. Our SI analyses indicated that some RSNs are robust across subjects, comparable to the corresponding RSNs identified by ICA. We also found that some commonly reported RSNs are less consistent across subjects. This is the first direct comparison of RSNs between ICA- and graph-based network analyses at a comparable resolution.
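The scaled inclusivity idea, scoring how consistently a network module recurs across subjects' partitions, can be sketched roughly as a size-normalized overlap between two modules. This is an illustrative pairwise score, not necessarily the exact SI formula of the paper, and the node labels are invented.

```python
def scaled_inclusivity(module_a, module_b):
    """Overlap between two modules, normalized by both module sizes.

    Returns |A ∩ B|^2 / (|A| * |B|): 1.0 for identical modules,
    0.0 for disjoint ones, penalizing both missing and extra nodes.
    """
    a, b = set(module_a), set(module_b)
    if not a or not b:
        return 0.0
    overlap = len(a & b)
    return overlap * overlap / (len(a) * len(b))


# Two subjects whose default-mode modules share 3 of 4 regions
subject1 = ["PCC", "mPFC", "lAG", "rAG"]
subject2 = ["PCC", "mPFC", "lAG", "precuneus"]
print(scaled_inclusivity(subject1, subject2))  # 0.5625
```

Averaging such pairwise scores over all subjects gives a per-module consistency summary of the kind the abstract describes.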
Consistent estimation of linear panel data models with measurement error
Meijer, Erik; Spierdijk, Laura; Wansbeek, Thomas
2017-01-01
Measurement error causes a bias towards zero when estimating a panel data linear regression model. The panel data context offers various opportunities to derive instrumental variables allowing for consistent estimation. We consider three sources of moment conditions: (i) restrictions on the
Final Report Fermionic Symmetries and Self consistent Shell Model
Energy Technology Data Exchange (ETDEWEB)
Larry Zamick
2008-11-07
In this final report in the field of theoretical nuclear physics we note important accomplishments. We were confronted with "anomalous" magnetic moments by the experimentalists and were able to explain them. We found unexpected partial dynamical symmetries, completely unknown before, and were able to explain them to a large extent. The importance of a self-consistent shell model was emphasized.
Generalized Self-Consistency: Multinomial logit model and Poisson likelihood.
Tsodikov, Alex; Chefo, Solomon
2008-01-01
A generalized self-consistency approach to maximum likelihood estimation (MLE) and model building was developed in (Tsodikov, 2003) and applied to a survival analysis problem. We extend the framework to obtain second-order results such as information matrix and properties of the variance. Multinomial model motivates the paper and is used throughout as an example. Computational challenges with the multinomial likelihood motivated Baker (1994) to develop the Multinomial-Poisson (MP) transformation for a large variety of regression models with multinomial likelihood kernel. Multinomial regression is transformed into a Poisson regression at the cost of augmenting model parameters and restricting the problem to discrete covariates. Imposing normalization restrictions by means of Lagrange multipliers (Lang, 1996) justifies the approach. Using the self-consistency framework we develop an alternative solution to multinomial model fitting that does not require augmenting parameters while allowing for a Poisson likelihood and arbitrary covariate structures. Normalization restrictions are imposed by averaging over artificial "missing data" (fake mixture). Lack of probabilistic interpretation at the "complete-data" level makes the use of the generalized self-consistency machinery essential.
Simplified models for dark matter face their consistent completions
Energy Technology Data Exchange (ETDEWEB)
Gonçalves, Dorival; Machado, Pedro A. N.; No, Jose Miguel
2017-03-01
Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent ${SU(2)_{\\mathrm{L}} \\times U(1)_{\\mathrm{Y}}}$ gauge invariant completions. We discuss the key physics simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at $13$ TeV LHC.
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Cont, Rama; Kokholm, Thomas
We propose and study a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index, allowing options on forward variance swaps and options on the underlying index to be priced consistently. Our model reproduces various empirically ...... on S&P 500 across strikes and maturities as well as options on the VIX volatility index. The calibration of the model is done in two steps, first by matching VIX option prices and then by matching prices of options on the underlying....
Detection and quantification of flow consistency in business process models
DEFF Research Database (Denmark)
Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel
2017-01-01
, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics...... addressing these challenges, each following a different view of flow consistency. We then report the results of an empirical evaluation, which indicates which metric is more effective in predicting the human perception of this feature. Moreover, two other automatic evaluations describing the performance...
Towards consistent nuclear models and comprehensive nuclear data evaluations
Energy Technology Data Exchange (ETDEWEB)
Bouland, O [Los Alamos National Laboratory; Hale, G M [Los Alamos National Laboratory; Lynn, J E [Los Alamos National Laboratory; Talou, P [Los Alamos National Laboratory; Bernard, D [FRANCE; Litaize, O [FRANCE; Noguere, G [FRANCE; De Saint Jean, C [FRANCE; Serot, O [FRANCE
2010-01-01
The essence of this paper is to highlight the consistency achieved nowadays in nuclear data and uncertainty assessments in terms of compound nucleus reaction theory, from the neutron separation energy to the continuum. By making the theories used in the resolved resonance (R-matrix theory), unresolved resonance (average R-matrix theory) and continuum (optical model) ranges continuous through a generalization of the so-called SPRT method, consistent average parameters are extracted from observed measurements and the associated covariances are calculated over the whole energy range. This paper recalls, in particular, recent advances in fission cross section calculations and suggests some hints for future developments.
Spiking modular neural networks: A neural network modeling approach for hydrological processes
National Research Council Canada - National Science Library
Kamban Parasuraman; Amin Elshorbagy; Sean K. Carey
2006-01-01
.... In this study, a novel neural network model called the spiking modular neural networks (SMNNs) is proposed. An SMNN consists of an input layer, a spiking layer, and an associator neural network layer...
Structure and internal consistency of a shoulder model.
Högfors, C; Karlsson, D; Peterson, B
1995-07-01
A three-dimensional biomechanical model of the shoulder is developed for force predictions in 46 shoulder structures. The model is directed towards the analysis of static working situations where the load is low or moderate. Arbitrary static arm postures in the natural shoulder range may be considered, as well as different kinds of external loads, including different force and moment directions. The model can predict internal forces for the shoulder muscles, for the glenohumeral, acromioclavicular and sternoclavicular joints, and for the coracohumeral ligament. A solution to the statically indeterminate force system is obtained by minimising an objective function. The default function chosen for this is the sum of the squared muscle stresses, but other objective functions may be used as well. The structure of the model is described and its ingredients discussed. The internal consistency of the model, its structural stability and the compatibility of the elements that go into it, is investigated.
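The load-sharing scheme described above, minimizing the sum of squared muscle stresses subject to equilibrium, has a closed form when a single joint moment constrains the muscle forces. The following sketch uses that special case with invented two-muscle numbers; the paper's model has 46 structures and full three-dimensional equilibrium, which requires a numerical optimizer instead.

```python
def muscle_forces(moment_arms, areas, joint_moment):
    """Distribute a joint moment over muscles by minimizing the sum of
    squared muscle stresses (force/area)^2 subject to moment equilibrium
    sum_i(r_i * F_i) = M.

    The Lagrange conditions give the closed form
        F_i = M * r_i * A_i^2 / sum_j(r_j^2 * A_j^2),
    so stronger (larger-area) muscles with longer moment arms carry more load.
    """
    denom = sum(r * r * a * a for r, a in zip(moment_arms, areas))
    return [joint_moment * r * a * a / denom
            for r, a in zip(moment_arms, areas)]


# Two synergist muscles sharing a 10 N·m moment (illustrative numbers)
arms = [0.05, 0.04]            # moment arms, m
areas = [6e-4, 4e-4]           # physiological cross-sectional areas, m^2
forces = muscle_forces(arms, areas, 10.0)
print(forces)                  # larger, longer-armed muscle takes more force
```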
Connectome-scale group-wise consistent resting-state network analysis in autism spectrum disorder
Directory of Open Access Journals (Sweden)
Yu Zhao
2016-01-01
Full Text Available Understanding the organizational architecture of human brain function and its alteration patterns in diseased brains, such as those of Autism Spectrum Disorder (ASD) patients, is of great interest. In-vivo functional magnetic resonance imaging (fMRI) offers a unique window to investigate the mechanism of brain function and to identify functional network components of the human brain. Previously, we have shown that multiple concurrent functional networks can be derived from fMRI signals using whole-brain sparse representation. Yet it is still an open question how to derive group-wise consistent networks featured in ASD patients and controls. Here we propose an effective volumetric network descriptor, named the connectivity map, to compactly describe the spatial patterns of brain network maps, and implement a fast framework in the Apache Spark environment that can effectively identify group-wise consistent networks in big fMRI datasets. Our experimental results identified 144 group-wise common intrinsic connectivity networks (ICNs) shared between ASD patients and healthy control subjects, where some ICNs are substantially different between the two groups. Moreover, further analysis of the functional connectivity and spatial overlap between these 144 common ICNs reveals connectomic signatures characterizing ASD patients and controls. In particular, the computing time of our Spark-enabled functional connectomics framework is reduced from 240 hours (C++ code, single core) to 20 hours, exhibiting great potential to handle fMRI big data in the future.
Consistency Across Standards or Standards in a New Business Model
Russo, Dane M.
2010-01-01
Presentation topics include: standards in a changing business model, the new National Space Policy is driving change, a new paradigm for human spaceflight, consistency across standards, the purpose of standards, the danger of over-prescriptive standards, the balance needed between prescriptive and general standards, enabling versus inhibiting, characteristics of success-oriented standards, and conclusions. Additional slides include: NASA Procedural Requirements 8705.2B identifies human rating standards and requirements, draft health and medical standards for human rating, what's been done, government oversight models, examples of consistency from anthropometry, examples of inconsistency from air quality, and appendices of governmental and non-governmental human factors standards.
GENESIS: new self-consistent models of exoplanetary spectra
Gandhi, Siddharth; Madhusudhan, Nikku
2017-12-01
We are entering the era of high-precision and high-resolution spectroscopy of exoplanets. Such observations herald the need for robust self-consistent spectral models of exoplanetary atmospheres to investigate intricate atmospheric processes and to make observable predictions. Spectral models of plane-parallel exoplanetary atmospheres exist, mostly adapted from other astrophysical applications, with different levels of sophistication and accuracy. There is a growing need for a new generation of models custom-built for exoplanets and incorporating state-of-the-art numerical methods and opacities. The present work is a step in this direction. Here we introduce GENESIS, a plane-parallel, self-consistent, line-by-line exoplanetary atmospheric modelling code that includes (a) formal solution of radiative transfer using the Feautrier method, (b) radiative-convective equilibrium with temperature correction based on the Rybicki linearization scheme, (c) latest absorption cross-sections, and (d) internal flux and external irradiation, under the assumptions of hydrostatic equilibrium, local thermodynamic equilibrium and thermochemical equilibrium. We demonstrate the code here with cloud-free models of giant exoplanetary atmospheres over a range of equilibrium temperatures, metallicities, C/O ratios and spanning non-irradiated and irradiated planets, with and without thermal inversions. We provide the community with theoretical emergent spectra and pressure-temperature profiles over this range, along with those for several known hot Jupiters. The code can generate self-consistent spectra at high resolution and has the potential to be integrated into general circulation and non-equilibrium chemistry models as it is optimized for efficiency and convergence. GENESIS paves the way for high-fidelity remote sensing of exoplanetary atmospheres at high resolution with current and upcoming observations.
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Kokholm, Thomas
We propose a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index. Our model reproduces various empirically observed properties of variance swap dynamics and enables volatility derivatives and options on the underlying index...... to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options...
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Cont, Rama; Kokholm, Thomas
2013-01-01
We propose a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index. Our model reproduces various empirically observed properties of variance swap dynamics and enables volatility derivatives and options on the underlying index...... to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options...
Thermodynamically consistent model of brittle oil shales under overpressure
Izvekov, Oleg
2016-04-01
The concept of dual porosity is a common way to simulate oil shale production. Within this concept the porous fractured medium is considered as a superposition of two permeable continua with mass exchange. As a rule, the concept does not take into account such well-known phenomena as slip along natural fractures, overpressure in the low-permeability matrix, and so on. Overpressure can lead to the development of secondary fractures in the low-permeability matrix during drilling and during pressure reduction in production. In this work a new thermodynamically consistent model which generalizes the dual-porosity model is proposed. Its particular features are as follows. The set of natural fractures is considered as a permeable continuum. Damage mechanics is applied to simulate the development of secondary fractures in the low-permeability matrix. Slip along natural fractures is simulated within plasticity theory with the Drucker-Prager criterion.
Shirakigawa, Nana; Takei, Takayuki; Ijima, Hiroyuki
2013-12-01
Reconstructed liver has been desired as a liver substitute for transplantation. However, reconstruction of a whole liver has not been achieved because construction of a vascular network at the organ scale is very difficult. We focused on decellularized liver (DC-liver) as an artificial scaffold for the construction of a hierarchical vascular network. In this study, we obtained DC-liver in which the tubular network structures of both the portal vein and hepatic vein systems remained intact. Furthermore, endothelialization of the tubular structure in DC-liver was achieved, which prevented blood leakage from the tubular structure. In addition, hepatocytes suspended in a collagen sol were injected from the surroundings using a syringe, a suitable procedure for liver cell inoculation. In summary, we developed a base structure consisting of an endothelialized vascular-tree network and hepatocytes for whole liver engineering.
A Dynamic Linear Hashing Method for Redundancy Management in Train Ethernet Consist Network
Directory of Open Access Journals (Sweden)
Xiaobo Nie
2016-01-01
Full Text Available Massive transportation systems like trains are considered critical systems because they use the communication network to control essential subsystems on board. A critical system requires zero recovery time when a failure occurs in the communication network. The newly published IEC 62439-3 defines the high-availability seamless redundancy protocol, which fulfills this requirement and ensures no frame loss in the presence of an error. This paper adopts these protocols for the train Ethernet consist network. The challenge is the management of the circulating frames, which must cope with real-time processing requirements, fast switching times, high throughput, and deterministic behavior. The main contributions of this paper are an in-depth analysis of the network parameters imposed by applying the protocols to the train control and monitoring system (TCMS), and a fast method for discarding redundant circulating frames based on dynamic linear hashing.
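The duplicate-discard task the paper addresses can be sketched as a table keyed on (source, sequence number): in seamless redundancy, every frame travels both paths, and the receiver must forward the first copy and drop the second. The fixed-size table and simple eviction below are illustrative stand-ins, not the paper's dynamic linear hashing scheme.

```python
class DuplicateDiscard:
    """Discard the redundant second copy of each frame.

    With seamless redundancy every frame arrives twice; the receiver
    keeps a table keyed on (source address, sequence number) and drops
    any copy it has already seen. A real implementation bounds memory
    with hashing and aging; here we simply evict the oldest entry.
    """

    def __init__(self, max_entries=1024):
        self.seen = {}           # (src, seq) -> arrival order
        self.max_entries = max_entries
        self.counter = 0

    def accept(self, src, seq):
        key = (src, seq)
        if key in self.seen:
            return False         # duplicate: discard silently
        if len(self.seen) >= self.max_entries:
            oldest = min(self.seen, key=self.seen.get)
            del self.seen[oldest]
        self.counter += 1
        self.seen[key] = self.counter
        return True              # first copy: forward to the application


dd = DuplicateDiscard()
print(dd.accept("aa:bb", 1))  # True  (first copy forwarded)
print(dd.accept("aa:bb", 1))  # False (redundant copy discarded)
```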
Mean-field theory and self-consistent dynamo modeling
Energy Technology Data Exchange (ETDEWEB)
Yoshizawa, Akira; Yokoi, Nobumitsu [Tokyo Univ. (Japan). Inst. of Industrial Science; Itoh, Sanae-I [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics; Itoh, Kimitaka [National Inst. for Fusion Science, Toki, Gifu (Japan)
2001-12-01
Mean-field theory of dynamo is discussed with emphasis on the statistical formulation of turbulence effects on the magnetohydrodynamic equations and the construction of a self-consistent dynamo model. The dynamo mechanism is sought in the combination of the turbulent residual-helicity and cross-helicity effects. On the basis of this mechanism, discussions are made on the generation of planetary magnetic fields such as geomagnetic field and sunspots and on the occurrence of flow by magnetic fields in planetary and fusion phenomena. (author)
Models of educational institutions' networking
Shilova Olga Nikolaevna
2015-01-01
The importance of educational institutions' networking in modern sociocultural conditions and a definition of networking in education are presented in the article. The results of research into the levels, methods and models of educational institutions' networking are presented and discussed in detail.
Network Models of Mechanical Assemblies
Whitney, Daniel E.
Recent network research has sought to characterize complex systems with a number of statistical metrics, such as power law exponent (if any), clustering coefficient, community behavior, and degree correlation. Use of such metrics represents a choice of level of abstraction, a balance of generality and detailed accuracy. It has been noted that "social networks" consistently display clustering coefficients that are higher than those of random or generalized random networks, that they have small world properties such as short path lengths, and that they have positive degree correlations (assortative mixing). "Technological" or "non-social" networks display many of these characteristics except that they generally have negative degree correlations (disassortative mixing). [Newman 2003] In this paper we examine network models of mechanical assemblies. Such systems are well understood functionally. We show that there is a cap on their average nodal degree and that they have negative degree correlations (disassortative mixing). We identify specific constraints arising from first principles, their structural patterns, and engineering practice that suggest why they have these properties. In addition, we note that their main "motif" is closed loops (as it is for electric and electronic circuits), a pattern that conventional network analysis does not detect but which is used by software intended to aid in the design of such systems.
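The degree-correlation property discussed above is commonly computed as the Pearson correlation of the degrees at the two ends of each edge. A minimal pure-Python sketch (the toy graph and function name are illustrative, not the paper's assembly data):

```python
from math import sqrt


def degree_assortativity(edges):
    """Pearson correlation of the degrees at either end of each edge.

    Negative values (disassortative mixing) mean high-degree nodes tend
    to attach to low-degree nodes, as reported for mechanical assemblies.
    """
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # each undirected edge contributes both (du, dv) and (dv, du)
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# A star (one hub, several leaves) is maximally disassortative
star = [("hub", i) for i in range(4)]
print(round(degree_assortativity(star), 6))
```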
A self-consistent spin-diffusion model for micromagnetics
Abert, Claas
2016-12-17
We propose a three-dimensional micromagnetic model that dynamically solves the Landau-Lifshitz-Gilbert equation coupled to the full spin-diffusion equation. In contrast to previous methods, we solve for the magnetization dynamics and the electric potential in a self-consistent fashion. This treatment allows for an accurate description of magnetization dependent resistance changes. Moreover, the presented algorithm describes both spin accumulation due to smooth magnetization transitions and due to material interfaces as in multilayer structures. The model and its finite-element implementation are validated by current driven motion of a magnetic vortex structure. In a second experiment, the resistivity of a magnetic multilayer structure in dependence of the tilting angle of the magnetization in the different layers is investigated. Both examples show good agreement with reference simulations and experiments respectively.
Schreiber, Shaun; Geldenhuys, Jaco; Villiers, De Hendrik
2017-01-01
Procedural texture generation enables the creation of richer and more detailed virtual environments without the help of an artist. However, finding a flexible generative model of real-world textures remains an open problem. We present a novel Convolutional Neural Network based texture model
A self-consistent upward leader propagation model
Energy Technology Data Exchange (ETDEWEB)
Becerra, Marley; Cooray, Vernon [Division for Electricity and Lightning Research, Angstroem Laboratory, Uppsala University, SE 751 21, Box 534, Uppsala(Sweden)
2006-08-21
Knowledge of the initiation and propagation of an upward-moving connecting leader in the presence of a downward-moving stepped leader is essential for determining the lateral attraction distance of a lightning flash to any grounded structure. Even though different models that simulate this phenomenon are available in the literature, they do not take into account the latest developments in the physics of leader discharges. The leader model proposed here simulates the advancement of positive upward leaders by appealing to the presently understood physics of that process. The model properly simulates the continuous upward progression of positive connecting leaders from inception to the final connection with the downward stepped leader (final jump). Thus, the main physical properties of upward leaders, namely the charge per unit length, the injected current, the channel gradient and the leader velocity, are obtained self-consistently. The results are compared with an altitude-triggered lightning experiment, and there is good agreement between the model predictions and both the measured leader current and the experimentally inferred spatial and temporal location of the final jump. It is also found that the usual assumption of constant charge per unit length, based on laboratory experiments, is not valid for lightning upward connecting leaders.
Techniques for Modelling Network Security
Lech Gulbinovič
2012-01-01
The article compares modelling techniques for network security, including probability theory, Markov processes, Petri nets and stochastic activity networks. The paper introduces the advantages and disadvantages of these methods and adopts stochastic activity network modelling as one of the most relevant. The stochastic activity network allows modelling the behaviour of a dynamic system where the theory of probability is inappropri...
A consistent model for tsunami actions on buildings
Foster, A.; Rossetto, T.; Eames, I.; Chandler, I.; Allsop, W.
2016-12-01
The Japan (2011) and Indian Ocean (2004) tsunamis resulted in significant loss of life, buildings, and critical infrastructure. The tsunami forces imposed upon structures in coastal regions are initially due to wave slamming, after which the quasi-steady flow of sea water around buildings becomes important. An essential requirement in both design and loss assessment is a consistent model that can accurately predict these forces. A model suitable for predicting forces in the quasi-steady range has been established as part of a systematic programme of research by the UCL EPICentre to understand the fundamental physical processes of tsunami actions on buildings, and more generally their social and economic consequences. Using the pioneering tsunami generator at HR Wallingford, this study considers the influence of unsteady flow conditions on the forces acting upon a rectangular building occupying 10-80% of a channel for 20-240 second wave periods. A mathematical model based upon basic open-channel flow principles is proposed, which provides empirical estimates for the drag and hydrostatic coefficients. A simple force prediction equation, requiring only basic flow velocity and wave height inputs, is then developed, providing good agreement with the experimental results. The results of this study demonstrate that the unsteady forces from the very long waves encountered during tsunami events can be predicted with a level of accuracy and simplicity suitable for design and risk assessment.
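The force decomposition described, quasi-steady drag plus a hydrostatic term, can be sketched as follows. The drag coefficient value and the exact form of the paper's empirical equation are assumptions for illustration, not the study's fitted results.

```python
RHO = 1025.0   # seawater density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2


def tsunami_force(u, h, b, cd=2.0):
    """Quasi-steady tsunami force on a rectangular building of width b:
    drag on the wetted frontal area plus a hydrostatic term,

        F = 0.5 * cd * rho * u^2 * (b * h) + 0.5 * rho * g * b * h^2,

    where u is flow velocity and h is flow depth. cd=2.0 is a
    placeholder; the paper derives empirical coefficients that also
    depend on channel blockage and unsteadiness.
    """
    drag = 0.5 * cd * RHO * u * u * b * h
    hydrostatic = 0.5 * RHO * G * b * h * h
    return drag + hydrostatic


# 2 m/s flow at 3 m depth against a 10 m wide building
print(tsunami_force(u=2.0, h=3.0, b=10.0) / 1e3, "kN")
```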
Classical and Quantum Consistency of the DGP Model
Nicolis, A; Nicolis, Alberto; Rattazzi, Riccardo
2004-01-01
We study the Dvali-Gabadadze-Porrati model by the method of the boundary effective action. The truncation of this action to the bending mode \\pi consistently describes physics in a wide range of regimes both at the classical and at the quantum level. The Vainshtein effect, which restores agreement with precise tests of general relativity, follows straightforwardly. We give a simple and general proof of stability, i.e. absence of ghosts in the fluctuations, valid for most of the relevant cases, like for instance the spherical source in asymptotically flat space. However we confirm that around certain interesting self-accelerating cosmological solutions there is a ghost. We consider the issue of quantum corrections. Around flat space \\pi becomes strongly coupled below a macroscopic length of 1000 km, thus impairing the predictivity of the model. Indeed the tower of higher dimensional operators which is expected by a generic UV completion of the model limits predictivity at even larger length scales. We outline ...
Economic costs of power interruptions: a consistent model and methodology
Energy Technology Data Exchange (ETDEWEB)
Ghajar, Raymond F. [School of Engineering and Architecture Lebanese American University P.O. Box 36, Byblos (Lebanon); Billinton, Roy [Power Systems Research Group University of Saskatchewan Saskatoon, Sask., S7N 5A9 (Canada)
2006-01-15
One of the most basic requirements in cost/benefit assessments of generation and transmission systems is the cost incurred by customers due to power interruptions. This paper provides a consistent set of cost-of-interruption data that can be used to assess the reliability worth of a power system. In addition to this basic data, methodologies for calculating the customer damage functions and the interrupted energy assessment rates for individual load points in the system and for the entire service area are also presented. The proposed model and methodology are illustrated by application to the IEEE Reliability Test System (IEEE-RTS) [A Report Prepared by the Reliability Test System Task Force of the Application of Probability Methods Subcommittee, IEEE Reliability Test System, IEEE Trans. on PAS, Vol. PAS-98, No. 6, pp. 2047-2054, November/December 1979].
Self-Consistent Dynamical Model of the Broad Line Region
Directory of Open Access Journals (Sweden)
Bozena Czerny
2017-06-01
Full Text Available We develop a self-consistent description of the Broad Line Region based on the concept of a failed wind powered by radiation pressure acting on a dusty accretion disk atmosphere in Keplerian motion. The material raised high above the disk is illuminated, dust evaporates, and the matter falls back toward the disk. This material is the source of emission lines. The model predicts the inner and outer radius of the region, the cloud dynamics under the dust radiation pressure and, subsequently, the gravitational field of the central black hole, which results in asymmetry between the rise and fall. Knowledge of the dynamics allows us to predict the shapes of the emission lines as functions of the basic parameters of an active nucleus: black hole mass, accretion rate, black hole spin (or accretion efficiency) and the viewing angle with respect to the symmetry axis. Here we show preliminary results based on analytical approximations to the cloud motion.
Self-consistent dynamical model of the Broad Line Region
Czerny, Bozena; Li, Yan-Rong; Sredzinska, Justyna; Hryniewicz, Krzysztof; Panda, Swayam; Wildy, Conor; Karas, Vladimir
2017-06-01
We develop a self-consistent description of the Broad Line Region based on the concept of a failed wind powered by radiation pressure acting on a dusty accretion disk atmosphere in Keplerian motion. The material raised high above the disk is illuminated, dust evaporates, and the matter falls back toward the disk. This material is the source of emission lines. The model predicts the inner and outer radius of the region, the cloud dynamics under the dust radiation pressure and, subsequently, the gravitational field of the central black hole, which results in asymmetry between the rise and fall. Knowledge of the dynamics allows us to predict the shapes of the emission lines as functions of the basic parameters of an active nucleus: black hole mass, accretion rate, black hole spin (or accretion efficiency) and the viewing angle with respect to the symmetry axis. Here we show preliminary results based on analytical approximations to the cloud motion.
Self-Consistent and Time-Dependent Solar Wind Models
Ong, K. K.; Musielak, Z. E.; Rosner, R.; Suess, S. T.; Sulkanen, M. E.
1997-01-01
We describe the first results from a self-consistent study of Alfven waves for the time-dependent, single-fluid magnetohydrodynamic (MHD) solar wind equations, using a modified version of the ZEUS MHD code. The wind models we examine are radially symmetrical and magnetized; the initial outflow is described by the standard Parker wind solution. Our study focuses on the effects of Alfven waves on the outflow and is based on solving the full set of the ideal nonlinear MHD equations. In contrast to previous studies, no assumptions regarding wave linearity, wave damping, and wave-flow interaction are made; thus, the models naturally account for the back-reaction of the wind on the waves, as well as for the nonlinear interaction between different types of MHD waves. Our results clearly demonstrate when momentum deposition by Alfven waves in the solar wind can be sufficient to explain the origin of fast streams in solar coronal holes; we discuss the range of wave amplitudes required to obtain such fast stream solutions.
Modeling Network Interdiction Tasks
2015-09-17
allow professionals and families to stay in touch through voice or video calls. Power grids provide electricity to homes, offices, and recreational...instances using IBM ILOG CPLEX Optimization Studio V12.6. For each instance, two solutions are determined. First, the MNDP-a model is solved with no...three values: 0.25, 0.50, or 0.75. The DMP-a model is solved for the various random network instances using IBM ILOG CPLEX Optimization Studio V12.6
Coevolutionary modeling in network formation
Al-Shyoukh, Ibrahim
2014-12-03
Network coevolution, the process of network topology evolution in feedback with dynamical processes over the network nodes, is a common feature of many engineered and natural networks. In such settings, the change in network topology occurs at a comparable time scale to nodal dynamics. Coevolutionary modeling offers the possibility to better understand how and why network structures emerge. For example, social networks can exhibit a variety of structures, ranging from almost uniform to scale-free degree distributions. While current models of network formation can reproduce these structures, coevolutionary modeling can offer a better understanding of the underlying dynamics. This paper presents an overview of recent work on coevolutionary models of network formation, with an emphasis on the following three settings: (i) dynamic flow of benefits and costs, (ii) transient link establishment costs, and (iii) latent preferential attachment.
Do Network Models Just Model Networks? On The Applicability of Network-Oriented Modeling
Treur, J.; Shmueli, Erez
2017-01-01
In this paper, a Network-Oriented Modelling perspective based on temporal-causal networks is analysed with respect to how generic and applicable it is as a general modelling approach and as a computational paradigm. This results in an answer to the question in the title that differs from: network models just model networks.
Consistency Analysis of Genome-Scale Models of Bacterial Metabolism: A Metamodel Approach
Ponce-de-Leon, Miguel; Calle-Espinosa, Jorge; Peretó, Juli; Montero, Francisco
2015-01-01
Genome-scale metabolic models usually contain inconsistencies that manifest as blocked reactions and gap metabolites. To detect recurrent inconsistencies in metabolic models, a large-scale analysis was performed using a previously published dataset of 130 genome-scale models. The results showed that a large number of reactions (~22%) are blocked in all the models where they are present. To unravel the nature of such inconsistencies, a metamodel was constructed by joining the 130 models into a single network. This metamodel was manually curated using the unconnected-modules approach and then used as a reference network to perform gap-filling on each individual genome-scale model. Finally, a set of 36 models that had not been considered during the construction of the metamodel was used, as a proof of concept, to extend the metamodel with new biochemical information and to assess its impact on gap-filling results. The analysis performed on the metamodel allowed us to conclude: 1) the recurrent inconsistencies found in the models were already present in the metabolic database used during the reconstruction process; 2) inconsistencies in a metabolic database can propagate to the reconstructed models; 3) there are reactions that are not manifested as blocked but are active only as a consequence of certain classes of artifacts; and 4) the results of an automatic gap-filling are highly dependent on the consistency and completeness of the metamodel or metabolic database used as the reference network. In conclusion, consistency analysis should be applied to metabolic databases in order to detect and fill gaps as well as to detect and remove artifacts and redundant information. PMID:26629901
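A minimal sketch of how connectivity problems such as gap metabolites can be flagged from a stoichiometric matrix (the toy network and the root/leaf terminology below are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

# Stoichiometric matrix S for a toy irreversible network
# (rows = metabolites, columns = reactions); negative entries are
# consumption, positive entries are production.
S = np.array([
    # R1   R2   R3
    [-1,   0,   0],   # A: consumed only  -> never produced ("root gap")
    [ 1,  -1,   0],   # B: produced and consumed -> connected
    [ 0,   1,   0],   # C: produced only  -> never consumed ("leaf gap")
    [ 0,   0,  -1],   # D: consumed only  -> never produced ("root gap")
])
mets = ["A", "B", "C", "D"]

# A metabolite with no producing reaction (no positive entry in its row)
# or no consuming reaction (no negative entry) disconnects the network.
root_gaps = [m for m, row in zip(mets, S) if not (row > 0).any()]
leaf_gaps = [m for m, row in zip(mets, S) if not (row < 0).any()]
```

Any reaction touching such a metabolite cannot carry steady-state flux, which is how gap metabolites give rise to blocked reactions.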
Chen, Sharon; Ross, Thomas J; Zhan, Wang; Myers, Carol S; Chuang, Keh-Shih; Heishman, Stephen J; Stein, Elliot A; Yang, Yihong
2008-11-06
Group independent component analysis (gICA) was performed on resting-state data from 14 healthy subjects scanned in 5 fMRI scan sessions across 16 days. The data were reduced and aggregated in 3 steps using Principal Components Analysis (PCA; within scan, within session and across sessions) and subjected to gICA procedures. The amount of reduction was estimated by an improved method that fits a first-order autoregressive model to the PCA spectrum. Analyses were performed using all sessions in order to maximize sensitivity and alleviate the problem of component identification across sessions. Across-session consistency was examined by three methods, all using back-reconstruction of the single-session or single-subject/session maps from the grand (5-session) maps. The gICA analysis produced 55 spatially independent maps. Obvious artifactual maps were eliminated and the remainder were grouped based upon physiological recognizability. Biologically relevant component maps were found, including sensory, motor and a 'default-mode' map. All analysis methods showed that components were remarkably consistent across sessions. Critically, the components with the most obvious physiological relevance were the most consistent. The consistency of these maps suggests that, at least over a period of several weeks, these networks would be useful to follow longitudinal treatment-related manipulations.
Methods for Improving Consistency between Statewide and Regional Planning Models.
2017-12-01
Given the difference in scope of statewide and MPO models, inconsistencies between the two levels of modelling are inevitable. There are, however, methods to reduce these inconsistencies. This research provides insight into the current practices of s...
Self-consistent approach for neutral community models with speciation
Haegeman, Bart; Etienne, Rampal S.
Hubbell's neutral model provides a rich theoretical framework to study ecological communities. By incorporating both ecological and evolutionary time scales, it allows us to investigate how communities are shaped by speciation processes. The speciation model in the basic neutral model is
A Consistent Pricing Model for Index Options and Volatility Derivatives
DEFF Research Database (Denmark)
Kokholm, Thomas
to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options...
Evaluation of EOR Processes Using Network Models
DEFF Research Database (Denmark)
Larsen, Jens Kjell; Krogsbøll, Anette
1998-01-01
The report consists of the following parts: 1) Studies of wetting properties of model fluids and fluid mixtures aimed at an optimal selection of candidates for micromodel experiments. 2) Experimental studies of multiphase transport properties using physical models of porous networks (micromodels...
Consistency between 2D-3D Sediment Transport models
Villaret, Catherine; Jodeau, Magali
2017-04-01
Sediment transport models have been developed and applied by the engineering community to estimate transport rates and morphodynamic bed evolution in river flows, coastal and estuarine conditions. Environmental modelling systems like the open-source Telemac modelling system include a hierarchy of models from 1D (Mascaret), 2D (Telemac-2D/Sisyphe) and 3D (Telemac-3D/Sedi-3D) and include a wide range of processes to represent sediment-flow interactions under increasingly complex situations (cohesive, non-cohesive and mixed sediment). Despite tremendous progress in numerical techniques and computing resources, the quality and accuracy of model results mainly depend on the numerous choices and skills of the modeler. In complex situations involving stratification effects, complex geometry, recirculating flows… 2D model assumptions are no longer valid. A full 3D turbulent flow model is then required in order to capture the vertical mixing processes and to represent accurately the coupled flow/sediment distribution. However, a number of theoretical and numerical difficulties arise when dealing with sediment transport modelling in 3D, which will be highlighted: (1) dependency of model results on the vertical grid refinement and the choice of boundary conditions and numerical scheme; (2) the choice of turbulence model also determines the sediment vertical distribution, which is governed by a balance between the downward settling term and upward turbulent diffusion; (3) the use of different numerical schemes for hydrodynamics (mean and turbulent flow) and sediment transport modelling can lead to inconsistency, including a mismatch in the definition of numerical cells and of boundary conditions. We discuss these present issues and present a detailed comparison between 2D and 3D simulations on a set of validation test cases which are available in the Telemac 7.2 release, using both cohesive and non-cohesive sediments.
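The settling/diffusion balance in point (2) can be illustrated with the classical Rouse equilibrium profile for suspended sediment; this sketch assumes illustrative parameter values and is not Telemac code:

```python
import numpy as np

# Illustrative flow and sediment parameters (assumed values)
kappa, u_star, w_s = 0.41, 0.05, 0.01  # von Karman const., shear and settling velocities (m/s)
h, a, C_a = 2.0, 0.05, 1.0             # water depth, reference height, reference concentration

P = w_s / (kappa * u_star)             # Rouse number: settling vs. turbulent diffusion

def rouse(z):
    """Equilibrium suspended-sediment concentration at height z above the bed."""
    return C_a * ((h - z) / z * a / (h - a)) ** P

z = np.linspace(a, 0.99 * h, 50)
C = rouse(z)                           # concentration decreases monotonically upward
```

A larger Rouse number (faster settling or weaker turbulence) concentrates sediment near the bed, which is why the turbulence model choice directly controls the vertical distribution.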
Is the island universe model consistent with observations?
Piao, Yun-Song
2005-01-01
We study the island universe model, in which the universe is initially in a cosmological constant sea; local quantum fluctuations violating the null energy condition then create islands of matter, some of which might correspond to our observable universe. We examine the possibility that the island universe model can be regarded as an alternative scenario for the origin of the observable universe.
MATHEW: a mass-consistent wind field model
Energy Technology Data Exchange (ETDEWEB)
Sherman, C.S.
1978-05-01
MATHEW, a regional three-dimensional time-independent wind field model, utilizes a variational analysis technique to determine a three-component non-divergent velocity field which can be used to provide the advection velocities in atmospheric pollutant transport and diffusion models. The regions of interest have horizontal distances of 10 to 200 km and extend less than 2 km above topography.
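A toy sketch of the variational idea behind a mass-consistent adjustment: solve a Poisson equation for a correction potential and subtract its gradient so that a first-guess 2-D wind field becomes nearly non-divergent (grid, first-guess field, and solver choice are all illustrative assumptions, not MATHEW's implementation):

```python
import numpy as np

n, dx = 32, 1.0
u = np.tile(np.linspace(1.0, 2.0, n), (n, 1))   # divergent first-guess wind (u-component)
v = np.zeros((n, n))

def divergence(u, v):
    d = np.zeros((n, n))
    d[1:-1, 1:-1] = ((u[1:-1, 2:] - u[1:-1, :-2])
                     + (v[2:, 1:-1] - v[:-2, 1:-1])) / (2 * dx)
    return d

div0 = divergence(u, v)

# Solve lap(phi) = div with Jacobi iteration (phi = 0 on the boundary)
phi = np.zeros((n, n))
for _ in range(3000):
    phi[1:-1, 1:-1] = 0.25 * (phi[1:-1, 2:] + phi[1:-1, :-2]
                              + phi[2:, 1:-1] + phi[:-2, 1:-1]
                              - dx * dx * div0[1:-1, 1:-1])

# Subtract grad(phi) to obtain the adjusted, nearly non-divergent field
u_adj, v_adj = u.copy(), v.copy()
u_adj[1:-1, 1:-1] -= (phi[1:-1, 2:] - phi[1:-1, :-2]) / (2 * dx)
v_adj[1:-1, 1:-1] -= (phi[2:, 1:-1] - phi[:-2, 1:-1]) / (2 * dx)

# Mean absolute divergence in the deep interior, before and after
resid0 = np.abs(div0[4:-4, 4:-4]).mean()
resid = np.abs(divergence(u_adj, v_adj)[4:-4, 4:-4]).mean()
```

This is the discrete analogue of the variational projection onto non-divergent fields; the full model additionally weights the adjustment and handles topography-following coordinates.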
Self-consistent modelling of resonant tunnelling structures
DEFF Research Database (Denmark)
Fiig, T.; Jauho, A.P.
1992-01-01
We report a comprehensive study of the effects of self-consistency on the I-V-characteristics of resonant tunnelling structures. The calculational method is based on a simultaneous solution of the effective-mass Schrödinger equation and the Poisson equation, and the current is evaluated...... with the Tsu-Esaki formula. We consider the formation of the accumulation layer in the emitter contact layer in a number of different approximation schemes, and we introduce a novel way to account for the energy relaxation of continuum states to the two-dimensional quasi-bound states appearing for contain...
The pairwise phase consistency in cortical network and its relationship with neuronal activation
Directory of Open Access Journals (Sweden)
Wang Daming
2017-01-01
Full Text Available Gamma-band neuronal oscillation and synchronization in the range of 30-90 Hz are a ubiquitous phenomenon across numerous brain areas and various species, and correlate with a wide range of cognitive functions. The phase of the oscillation, as one aspect of the CTC (Communication through Coherence) hypothesis, underlies various functions for feature coding, memory processing and behavioural performance. The PPC (Pairwise Phase Consistency), an improved coherence measure, statistically quantifies the strength of phase synchronization. In order to evaluate the PPC and its relationships with input stimulus, neuronal activation and firing rate, a simplified spiking neuronal network is constructed to simulate orientation columns in primary visual cortex. If the input orientation stimulus is preferred for a certain orientation column, neurons within this column obtain a higher firing rate and stronger neuronal activation, which consequently engender higher PPC values, with higher PPC corresponding to higher firing rates. In addition, we investigate the PPC in a time-resolved analysis with a sliding window.
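The PPC statistic itself has a simple closed form, equivalent to the unbiased average of the cosine of all pairwise phase differences; a sketch, with synthetic phase samples standing in for measured spike phases:

```python
import numpy as np

def ppc(phases):
    """Pairwise phase consistency of a set of phase angles (radians).

    Closed form of mean_{j<k} cos(theta_j - theta_k): with resultant
    length R of the unit phase vectors, PPC = (n*R^2 - 1) / (n - 1).
    """
    n = len(phases)
    z = np.exp(1j * np.asarray(phases))
    r2 = np.abs(z.mean()) ** 2          # squared resultant length
    return (n * r2 - 1.0) / (n - 1.0)

rng = np.random.default_rng(0)
locked = rng.normal(0.0, 0.3, 1000)          # tightly phase-locked sample
uniform = rng.uniform(-np.pi, np.pi, 1000)   # no phase locking
```

Unlike the raw resultant length, this estimator has no systematic bias from the number of spikes, which is what makes it suitable for comparing conditions with different firing rates.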
Software-based microwave CT system consisting of antennas and vector network analyzer.
Ogawa, Takahiro; Miyakawa, Michio
2011-01-01
We have developed a software-based microwave CT (SMCT) that consists of antennas and a vector network analyzer. Regardless of the scanner type, SMCT collects the S-parameters at each measurement position in the frequency range of interest. After collecting all the S-parameters, it calculates the shortest path to obtain the projection data for CP-MCT. Because of the redundant data in SMCT, the calculation of the projection is easily optimized. Therefore, the system can improve the accuracy and stability of the measurement. Furthermore, the experimental system is constructed at a reasonable cost. Hence, SMCT is useful for imaging experiments for CP-MCT and particularly for basic studies. This paper describes the software-based microwave imaging system, and experimental results show the usefulness of the system.
Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro; Tartakovsky, Alexandre M.; Parks, Michael L.
2017-04-01
We present an efficient implicit incompressible smoothed particle hydrodynamics (I2SPH) discretization of Navier-Stokes, Poisson-Boltzmann, and advection-diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two and three dimensional electrokinetic flows in simple or complex geometries. The I2SPH's accuracy and convergence are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. The new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrary complex geometries.
Directory of Open Access Journals (Sweden)
Damiano Monelli
2010-11-01
Full Text Available We present here two self-consistent implementations of a short-term earthquake probability (STEP) model that produces daily seismicity forecasts for the area of the Italian national seismic network. Both implementations combine a time-varying and a time-invariant contribution, for which we assume that the instrumental Italian earthquake catalog provides the best information. For the time-invariant contribution, the catalog is declustered using the clustering technique of the STEP model; the smoothed seismicity model is generated from the declustered catalog. The time-varying contribution is what distinguishes the two implementations: (1) for one implementation (STEP-LG), the original model parameterization and estimation is used; (2) for the other (STEP-NG), the mean abundance method is used to estimate aftershock productivity. In the STEP-NG implementation, earthquakes with magnitude up to ML = 6.2 are expected to be less productive compared to the STEP-LG implementation, whereas larger earthquakes are expected to be more productive. We have retrospectively tested the performance of these two implementations and applied likelihood tests to evaluate their consistency with observed earthquakes. Both implementations were consistent with the observed earthquake data in space; STEP-NG performed better than STEP-LG in terms of forecast rates. More generally, we found that testing earthquake forecasts issued at regular intervals does not test the full power of clustering models, and future experiments should allow for more frequent forecasts starting at the times of triggering events.
Connectivity-consistent mapping method for 2-D discrete fracture networks
Roubinet, Delphine; de Dreuzy, Jean-Raynald; Davy, Philippe
2010-07-01
We present a new flow computation method in 2-D discrete fracture networks (DFN) intermediary between the classical DFN flow simulation method and the projection onto continuous grids. The method divides the simulation complexity by solving for flows successively at a local mesh scale and at the global domain scale. At the local mesh scale, flows are determined by classical DFN flow simulations and approximated by an equivalent hydraulic matrix (EHM) relating heads and flow rates discretized on the mesh borders. Assembling the equivalent hydraulic matrices provides for a domain-scale discretization of the flow equation. The equivalent hydraulic matrices transfer the connectivity and flow structure complexities from the local mesh scale to the domain scale. Compared to existing geometrical mapping or equivalent tensor methods, the EHM method broadens the simulation range of flow to all types of 2-D fracture networks both below and above the representative elementary volume (REV). Additional computation linked to the derivation of the local mesh-scale equivalent hydraulic matrices increases the accuracy and reliability of the method. Compared to DFN methods, the EHM method first provides a simpler domain-scale alternative permeability model. Second, it enhances the simulation capacities to larger fracture networks where flow discretization on the DFN structure yields system sizes too large to be solved using the most advanced multigrid and multifrontal methods. We show that the EHM method continuously moves from the DFN method to the tensor representation as a function of the local mesh-scale discretization. The balance between accuracy and model simplification can be optimally controlled by adjusting the domain-scale and local mesh-scale discretizations.
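The assembly step can be illustrated with a 1-D toy problem: each mesh contributes a small equivalent matrix relating border heads and flow rates, and the local matrices are summed into a global system, analogous to finite-element assembly (conductance value and mesh count below are illustrative, not the paper's 2-D formulation):

```python
import numpy as np

n_mesh, c = 4, 2.0                       # meshes in series, conductance of each
K = np.zeros((n_mesh + 1, n_mesh + 1))   # global matrix over border nodes
local = c * np.array([[1.0, -1.0],       # per-mesh "equivalent hydraulic matrix":
                      [-1.0, 1.0]])      # flow = local @ heads at the two borders

for m in range(n_mesh):
    K[m:m + 2, m:m + 2] += local         # assemble local matrices into K

# Fixed heads at the two ends (h = 1 upstream, h = 0 downstream);
# solve for the interior border heads.
h = np.zeros(n_mesh + 1)
h[0] = 1.0
A = K[1:-1, 1:-1]
b = -K[1:-1, [0, -1]] @ np.array([h[0], h[-1]])
h[1:-1] = np.linalg.solve(A, b)          # linear head drop for equal conductances
```

In the EHM method the local 2x2 matrices are replaced by matrices computed from DFN flow simulations over each mesh's border nodes, but the assembly and solve follow the same pattern.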
Spectrally-consistent regularization modeling of turbulent natural convection flows
Trias, F. Xavier; Verstappen, Roel; Gorobets, Andrey; Oliva, Assensi
2012-01-01
The incompressible Navier-Stokes equations constitute an excellent mathematical modelization of turbulence. Unfortunately, attempts at performing direct simulations are limited to relatively low-Reynolds numbers because of the almost numberless small scales produced by the non-linear convective
Flood damage: a model for consistent, complete and multipurpose scenarios
Menoni, Scira; Molinari, Daniela; Ballio, Francesco; Minucci, Guido; Mejri, Ouejdane; Atun, Funda; Berni, Nicola; Pandolfo, Claudia
2016-12-01
Effective flood risk mitigation requires the impacts of flood events to be much better and more reliably known than is currently the case. Available post-flood damage assessments usually supply only a partial vision of the consequences of the floods as they typically respond to the specific needs of a particular stakeholder. Consequently, they generally focus (i) on particular items at risk, (ii) on a certain time window after the occurrence of the flood, (iii) on a specific scale of analysis or (iv) on the analysis of damage only, without an investigation of damage mechanisms and root causes. This paper responds to the necessity of a more integrated interpretation of flood events as the base to address the variety of needs arising after a disaster. In particular, a model is supplied to develop multipurpose complete event scenarios. The model organizes available information after the event according to five logical axes. This way post-flood damage assessments can be developed that (i) are multisectoral, (ii) consider physical as well as functional and systemic damage, (iii) address the spatial scales that are relevant for the event at stake depending on the type of damage that has to be analyzed, i.e., direct, functional and systemic, (iv) consider the temporal evolution of damage and finally (v) allow damage mechanisms and root causes to be understood. All the above features are key for the multi-usability of resulting flood scenarios. The model allows, on the one hand, the rationalization of efforts currently implemented in ex post damage assessments, also with the objective of better programming financial resources that will be needed for these types of events in the future. On the other hand, integrated interpretations of flood events are fundamental to adapting and optimizing flood mitigation strategies on the basis of thorough forensic investigation of each event, as corroborated by the implementation of the model in a case study.
Modeling semiflexible polymer networks
Broedersz, Chase P.; MacKintosh, Fred C.
2014-01-01
Here, we provide an overview of theoretical approaches to semiflexible polymers and their networks. Such semiflexible polymers have large bending rigidities that can compete with the entropic tendency of a chain to crumple up into a random coil. Many studies on semiflexible polymers and their assemblies have been motivated by their importance in biology. Indeed, crosslinked networks of semiflexible polymers form a major structural component of tissue and living cells. Reconstituted networks o...
Consistency problems for Heath-Jarrow-Morton interest rate models
Filipović, Damir
2001-01-01
The book is written for a reader with knowledge in mathematical finance (in particular interest rate theory) and elementary stochastic analysis, such as provided by Revuz and Yor (Continuous Martingales and Brownian Motion, Springer 1991). It gives a short introduction both to interest rate theory and to stochastic equations in infinite dimension. The main topic is the Heath-Jarrow-Morton (HJM) methodology for the modelling of interest rates. Experts in SDE in infinite dimension with interest in applications will find here the rigorous derivation of the popular "Musiela equation" (referred to in the book as HJMM equation). The convenient interpretation of the classical HJM set-up (with all the no-arbitrage considerations) within the semigroup framework of Da Prato and Zabczyk (Stochastic Equations in Infinite Dimensions) is provided. One of the principal objectives of the author is the characterization of finite-dimensional invariant manifolds, an issue that turns out to be vital for applications. Finally, ge...
Kutepov, A L
2015-08-12
Self-consistent solutions of Hedin's equations (HE) for the two-site Hubbard model (HM) have been studied. They have been found for three-point vertices of increasing complexity (Γ = 1 (GW approximation), Γ1 from first-order perturbation theory, and the exact vertex Γ(E)). Comparison is made between the cases when an additional quasiparticle (QP) approximation for Green's functions is applied during the self-consistent iterative solving of HE and when it is not. The results obtained with the exact vertex are directly related to the open question of which approximation is more advantageous for future implementations, GW + DMFT or QPGW + DMFT. It is shown that in a regime of strong correlations only the originally proposed GW + DMFT scheme is able to provide reliable results. Vertex corrections based on perturbation theory (PT) systematically improve the GW results when full self-consistency is applied. The application of QP self-consistency combined with PT vertex corrections shows problems similar to the case when the exact vertex is combined with QP self-consistency. An analysis of Ward identity violation is performed for all approximations studied in this work, and its relation to the general accuracy of the schemes used is provided.
Complex Networks in Psychological Models
Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.
We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes, by a neurocomputational substrate. These models are examples of real world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.
Butsick, Andrew J; Wood, Jonathan S; Jovanis, Paul P
2017-09-01
The Highway Safety Manual provides multiple methods that can be used to identify sites with promise (SWiPs) for safety improvement. However, most of these methods cannot be used to identify sites with specific problems. Furthermore, given that infrastructure funding is often earmarked for specific problems/programs, a method for identifying SWiPs related to those programs would be very useful. This research establishes a method for identifying SWiPs with specific issues, accomplished using two safety performance functions (SPFs). The method is applied to identifying SWiPs with geometric design consistency issues. Mixed-effects negative binomial regression was used to develop two SPFs using 5 years of crash data and over 8754 km of two-lane rural roadway. The first SPF contained typical roadway elements while the second contained additional geometric design consistency parameters. After empirical Bayes adjustments, sites with promise were identified. The disparity between SWiPs identified by the two SPFs was evident; 40 unique sites were identified by each model out of the top 220 segments. By comparing sites across the two models, candidate road segments can be identified where a lack of design consistency may be contributing to an increase in expected crashes. Practitioners can use this method to more effectively identify roadway segments suffering from reduced safety performance due to geometric design inconsistency, with detailed engineering studies of identified sites required to confirm the initial assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
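The empirical Bayes adjustment used above can be sketched as a weighted combination of the SPF prediction and the observed crash count; the overdispersion parameter and crash numbers below are illustrative assumptions, not values from the study:

```python
def eb_expected(mu_spf, observed, k, years):
    """Empirical Bayes expected crash count for one segment.

    mu_spf   : SPF-predicted crashes per year
    observed : observed crashes over the study period
    k        : negative binomial overdispersion parameter (assumed)
    years    : length of the study period
    """
    mu_total = mu_spf * years           # SPF prediction over the period
    w = 1.0 / (1.0 + k * mu_total)      # weight given to the SPF prediction
    return w * mu_total + (1.0 - w) * observed

# Hypothetical segment: SPF predicts 1.2 crashes/yr, 11 observed in 5 years
eb = eb_expected(mu_spf=1.2, observed=11, k=0.24, years=5)
```

The EB estimate always falls between the SPF prediction and the observed count, shrinking noisy observations toward the model; ranking segments by the EB estimate (or its excess over the SPF prediction) is what identifies sites with promise.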
Developing Personal Network Business Models
DEFF Research Database (Denmark)
Saugstrup, Dan; Henten, Anders
2006-01-01
The aim of the paper is to examine the issue of business modeling in relation to personal networks, PNs. The paper builds on research performed on business models in the EU IST MAGNET project (My personal Adaptive Global NET). The paper presents the Personal Network concept and briefly reports on the 'state of the art' in the field of business modeling. Furthermore, the paper suggests three generic business models for PNs: a service oriented model, a self-organized model, and a combination model. Finally, examples of relevant services and applications in relation to three different cases are presented and analyzed in light of business modeling of PNs.
A model of coauthorship networks
Zhou, Guochang; Li, Jianping; Xie, Zonglin
2017-10-01
A natural way of representing the coauthorship of authors is to use a generalization of graphs known as hypergraphs. A random geometric hypergraph model is proposed here to model coauthorship networks; it is generated by placing nodes randomly and uniformly on a region of Euclidean space and connecting nodes that satisfy particular geometric conditions. Two kinds of geometric conditions are designed to model the collaboration patterns of academic authorities and basic researchers, respectively. The conditions give geometric expressions of two causes of coauthorship: the authority and similarity of authors. By simulation and calculus, we show that the forepart of the degree distribution of the network generated by the model is a Poisson mixture and the tail is power-law, similar to those of some coauthorship networks. Further, we show more similarities between the generated network and real coauthorship networks: the distribution of cardinalities of hyperedges, high clustering coefficient, assortativity, and the small-world property.
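The "similarity" condition can be sketched as an ordinary random geometric graph: authors placed uniformly in the unit square are joined when they are closer than a radius r (the hypergraph generalization and the authority condition of the paper are omitted; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 200, 0.1
pts = rng.uniform(0.0, 1.0, size=(n, 2))   # author positions in the unit square

# Pairwise Euclidean distances and the similarity-based adjacency matrix
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
adj = (d < r) & ~np.eye(n, dtype=bool)     # connect authors closer than r
degrees = adj.sum(axis=1)
```

For points in the interior the expected degree is roughly n * pi * r**2, and the local Poisson-like degree statistics are what produce the Poissonian forepart of the degree distribution described above.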
Lee, Timothy; Diehl, Tobias; Kissling, Edi; Wiemer, Stefan
2017-04-01
Earthquake catalogs derived from several decades of observations are often biased by network geometries, location procedures, and data quality changing with time. To study the long-term spatio-temporal behavior of seismogenic fault zones at high-resolution, a consistent homogenization and improvement of earthquake catalogs is required. Assuming that data quality and network density generally improves with time, procedures are needed, which use the best available data to homogeneously solve the coupled hypocenter - velocity structure problem and can be as well applied to earlier network configurations in the same region. A common approach to uniformly relocate earthquake catalogs is the calculation of a so-called "minimum 1D" model, which is derived from the simultaneous inversion for hypocenters and 1D velocity structure, including station specific delay-time corrections. In this work, we will present strategies using the principles of the "minimum 1D" model to consistently relocate hypocenters recorded by the Swiss Seismological Service (SED) in the Swiss Alps over a period of 17 years in a region, which is characterized by significant changes in network configurations. The target region of this study is the Rawil depression, which is located between the Aar and Mont Blanc massifs in southwestern Switzerland. The Rhone-Simplon Fault is located to the south of the Rawil depression and is considered as a dextral strike-slip fault representing the dominant tectonic boundary between Helvetic nappes to the north and Penninic nappes to the south. Current strike-slip earthquakes, however, occur predominantly in a narrow, east-west striking cluster located in the Rawil depression north of the Rhone-Simplon Fault. Recent earthquake swarms near Sion and Sierre in 2011 and 2016, on the other hand, indicate seismically active dextral faults close to the Rhone valley. The region north and south of the Rhone-Simplon Fault is one of the most seismically active regions in
2016-11-09
…from a security standpoint remains more of an art than a science. Even when well executed, the ongoing evolution of the network may violate initial, security-critical design… is outside the scope of this paper. As such, we focus on event probabilities. The output of the network porosity model is a stream of timestamped
Telecommunications network modelling, planning and design
Evans, Sharon
2003-01-01
Telecommunication Network Modelling, Planning and Design addresses sophisticated modelling techniques from the perspective of the communications industry and covers some of the major issues facing telecommunications network engineers and managers today. Topics covered include network planning for transmission systems, modelling of SDH transport network structures and telecommunications network design and performance modelling, as well as network costs and ROI modelling and QoS in 3G networks.
Campus network security model study
Zhang, Yong-ku; Song, Li-ren
2011-12-01
Campus network security is of growing importance. Designing an effective defense against hacker attacks, viruses, data theft, and internal threats is the focus of this paper. The paper compares firewall and IDS approaches, integrates them into the design of a campus network security model, and details the specific implementation principles.
Mathematical model for spreading dynamics of social network worms
Sun, Xin; Liu, Yan-Heng; Li, Bin; Li, Jin; Han, Jia-Wei; Liu, Xue-Jie
2012-04-01
In this paper, a mathematical model for social network worm spreading is presented from the viewpoint of social engineering. This model consists of two submodels. Firstly, a human behavior model based on game theory is suggested for modeling and predicting the expected behaviors of a network user encountering malicious messages. The game situation models the actions of a user under the condition that the system may be infected at the time of opening a malicious message. Secondly, a social network accessing model is proposed to characterize the dynamics of network users, by which the number of online susceptible users can be determined at each time step. Several simulation experiments are carried out on artificial social networks. The results show that (1) the proposed mathematical model can well describe the spreading dynamics of social network worms; (2) weighted network topology greatly affects the spread of worms; (3) worms spread even faster on hybrid social networks.
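The two submodels can be combined into a toy discrete-time simulation. The sketch below is illustrative only and is not the authors' code: the constant `p_open` stands in for the game-theoretic behavior submodel, `p_online` for the accessing submodel, and the ring topology and all names are hypothetical.

```python
import random

def simulate_worm(neighbors, p_online, p_open, steps, seed=0):
    """Toy discrete-time spread of a social-network worm.

    neighbors: adjacency dict, user -> list of contacts
    p_online:  chance a contact is online in a step (accessing-model stand-in)
    p_open:    chance an online user opens the malicious message
               (game-theoretic behavior-model stand-in)
    """
    rng = random.Random(seed)
    infected = {0}  # patient zero
    for _ in range(steps):
        newly = set()
        for u in infected:
            for v in neighbors[u]:
                if v not in infected and rng.random() < p_online * p_open:
                    newly.add(v)
        infected |= newly
    return infected

# 50 users on a ring; the infection spreads outward from user 0
nbrs = {u: [(u - 1) % 50, (u + 1) % 50] for u in range(50)}
infected = simulate_worm(nbrs, p_online=0.8, p_open=0.5, steps=30)
```

Varying `p_open` per user (e.g., from a payoff matrix) and `p_online` per time step would recover the qualitative behavior the abstract describes.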
Neural network modeling of emotion
Levine, Daniel S.
2007-03-01
This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.
Aggregated wind power plant models consisting of IEC wind turbine models
DEFF Research Database (Denmark)
Altin, Müfit; Göksu, Ömer; Hansen, Anca Daniela
2015-01-01
… turbines, parameters and models to represent each individual wind turbine in detail makes it necessary to develop aggregated wind power plant models, considering the simulation time for power system stability studies. In this paper, aggregated wind power plant models consisting of the IEC 61400-27 variable-speed wind turbine models (type 3 and type 4) with a power plant controller are presented. The performance of the detailed benchmark wind power plant model and the aggregated model are compared by means of simulations for the specified test cases. Consequently, the results are summarized and discussed.
Modeling semiflexible polymer networks
Broedersz, C.P.; MacKintosh, F.C.
2014-01-01
This is an overview of theoretical approaches to semiflexible polymers and their networks. Such semiflexible polymers have large bending rigidities that can compete with the entropic tendency of a chain to crumple up into a random coil. Many studies on semiflexible polymers and their assemblies have
A Consistent Fuzzy Preference Relations Based ANP Model for R&D Project Selection
Directory of Open Access Journals (Sweden)
Chia-Hua Cheng
2017-08-01
In today’s rapidly changing economy, technology companies have to make decisions on research and development (R&D) project investment on a routine basis, with such decisions having a direct impact on that company’s profitability, sustainability and future growth. Companies seeking profitable opportunities for investment and project selection must consider many factors such as resource limitations and differences in assessment, with consideration of both qualitative and quantitative criteria. Often, differences in perception by the various stakeholders hinder the attainment of a consensus of opinion and coordination efforts. Thus, in this study, a hybrid model is developed for the consideration of the complex criteria taking into account the different opinions of the various stakeholders, who often come from different departments within the company and have different opinions about which direction to take. The decision-making trial and evaluation laboratory (DEMATEL) approach is used to convert the cause-and-effect relations representing the criteria into a visual network structure. A consistent fuzzy preference relations based analytic network process (CFPR-ANP) method is developed to calculate the preference weights of the criteria based on the derived network structure. The CFPR-ANP is an improvement over the original analytic network process (ANP) method in that it reduces the problem of inconsistency as well as the number of pairwise comparisons. The combined complex proportional assessment (COPRAS-G) method is applied with fuzzy grey relations to resolve conflicts arising from differences in information and opinions provided by the different stakeholders about the selection of the most suitable R&D projects. This novel combination approach is then used to assist an international brand-name company to prioritize projects and make project decisions that will maximize returns and ensure sustainability for the company.
Modeling Multistandard Wireless Networks in OPNET
DEFF Research Database (Denmark)
Zakrzewska, Anna; Berger, Michael Stübert; Ruepp, Sarah Renée
2011-01-01
Future wireless communication is emerging towards one heterogeneous platform. In this new environment wireless access will be provided by multiple radio technologies that are cooperating and complementing one another. The paper investigates the possibilities of developing such a multistandard system using OPNET Modeler. A network model consisting of LTE interworking with WLAN and WiMAX is considered from the radio resource management perspective. In particular, implementing a joint packet scheduler across multiple systems is discussed in more detail.
Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects
Directory of Open Access Journals (Sweden)
Guangjie Li
2015-07-01
We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
A Transfer Learning Approach for Network Modeling
Huang, Shuai; Li, Jing; Chen, Kewei; Wu, Teresa; Ye, Jieping; Wu, Xia; Yao, Li
2012-01-01
Network models have been widely used in many domains to characterize the interacting relationships between physical entities. A typical problem faced is to identify the networks of multiple related tasks that share some similarities. In this case, a transfer learning approach that can leverage the knowledge gained during the modeling of one task to help better model another task is highly desirable. In this paper, we propose a transfer learning approach, which adopts a Bayesian hierarchical model framework to characterize task relatedness and additionally uses the L1-regularization to ensure robust learning of the networks with limited sample sizes. A method based on the Expectation-Maximization (EM) algorithm is further developed to learn the networks from data. Simulation studies are performed, which demonstrate the superiority of the proposed transfer learning approach over single task learning that learns the network of each task in isolation. The proposed approach is also applied to identification of brain connectivity networks of Alzheimer’s disease (AD) from functional magnetic resonance image (fMRI) data. The findings are consistent with the AD literature. PMID:24526804
An Analysis of Weakly Consistent Replication Systems in an Active Distributed Network
Amit Chougule; Pravin Ghewari
2011-01-01
With the sudden increase in heterogeneity and distribution of data in wide-area networks, more flexible, efficient and autonomous approaches for management and data distribution are needed. In recent years, the proliferation of inter-networks and distributed applications has increased the demand for geographically-distributed replicated databases. The architecture of Bayou provides features that address the needs of database storage of world-wide applications. Key is the use of weak consisten...
Mobility Model for Tactical Networks
Rollo, Milan; Komenda, Antonín
In this paper a synthetic mobility model which represents the behavior and movement patterns of heterogeneous units in disaster relief and battlefield scenarios is proposed. These operations usually take place in environments without preexisting communication infrastructure, and units thus have to be connected by a wireless communication network. Units cooperate to fulfill common tasks, and the communication network has to serve a high volume of communication requests, especially data, voice and video stream transmissions. To verify features of topology control, routing and interaction protocols, software simulations are usually used because of their scalability, repeatability and speed. The behavior of all these protocols relies on the mobility model of the network nodes, which has to resemble real-life movement patterns. The proposed mobility model is goal-driven and provides support for various types of units, group mobility and a realistic environment model with obstacles. Basic characteristics of the mobility model, such as node spatial distribution and average node degree, were analyzed.
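The core of a goal-driven mobility model can be sketched in a few lines. This is not the authors' model: it omits groups, unit types and obstacles, and the goal list, speed and step count are hypothetical parameters chosen for illustration.

```python
import math
import random

def goal_driven_walk(start, goals, speed, dt, n_steps=200, seed=0):
    """Toy goal-driven mobility: a unit moves toward a randomly chosen
    goal at constant speed and picks a new goal upon arrival.
    Returns the position trace."""
    rng = random.Random(seed)
    x, y = start
    goal = rng.choice(goals)
    trace = [(x, y)]
    for _ in range(n_steps):
        gx, gy = goal
        d = math.hypot(gx - x, gy - y)
        if d < speed * dt:           # arrived: snap to goal, pick the next one
            x, y = gx, gy
            goal = rng.choice(goals)
        else:                        # step toward the current goal
            x += speed * dt * (gx - x) / d
            y += speed * dt * (gy - y) / d
        trace.append((x, y))
    return trace

trace = goal_driven_walk((0.0, 0.0), [(10.0, 0.0), (0.0, 10.0)],
                         speed=1.0, dt=0.5)
```

Group mobility would follow by letting member units pick goals near a leader's position; obstacles would constrain the step direction.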
Modelling freeway networks by hybrid stochastic models
Boel, R.; Mihaylova, L.
2004-01-01
Traffic flow on freeways is a nonlinear, many-particle phenomenon, with complex interactions between the vehicles. This paper presents a stochastic hybrid model of freeway traffic at a time scale and at a level of detail suitable for on-line flow estimation, for routing and ramp metering control. The model describes the evolution of continuous and discrete state variables. The freeway is considered as a network of components, each component representing a different section of the network. The...
Modeling regulatory networks with weight matrices
DEFF Research Database (Denmark)
Weaver, D.C.; Workman, Christopher; Stormo, Gary D.
1999-01-01
Systematic gene expression analyses provide comprehensive information about the transcriptional response to different environmental and developmental conditions. With enough gene expression data points, computational biologists may eventually generate predictive computer models of transcription regulation. Such models will require computational methodologies consistent with the behavior of known biological systems that remain tractable. We represent regulatory relationships between genes as linear coefficients or weights, with the "net" regulation influence on a gene's expression being the mathematical summation of the independent regulatory inputs. Test regulatory networks generated with this approach display stable and cyclically stable gene expression levels, consistent with known biological systems. We include variables to model the effect of environmental conditions on transcription regulation…
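The weight-matrix idea can be illustrated with a minimal synchronous-update simulation. This sketch is an assumption-laden reading of the abstract, not the authors' implementation: it squashes the weighted sum through a sigmoid, and the example weight matrices (mutual activation, mutual repression) are hypothetical.

```python
import math

def step(W, x):
    """One synchronous update: each gene's next expression level is a
    squashed weighted sum of its regulatory inputs (row of W)."""
    return [1.0 / (1.0 + math.exp(-sum(w * xj for w, xj in zip(row, x))))
            for row in W]

def run(W, x, n=500):
    """Iterate the network to (near) its long-run behavior."""
    for _ in range(n):
        x = step(W, x)
    return x

# Mutual activation: both genes settle at a stable high-expression level
W_act = [[0.0, 4.0], [4.0, 0.0]]
x_act = run(W_act, [0.6, 0.6])

# Mutual repression: settles into an asymmetric stable state
# (stronger weights or delays can instead yield cyclically stable patterns)
W_rep = [[0.0, -8.0], [-8.0, 0.0]]
x_rep = run(W_rep, [0.9, 0.1])
```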
Phenomenological network models: Lessons for epilepsy surgery.
Hebbink, Jurgen; Meijer, Hil; Huiskamp, Geertjan; van Gils, Stephan; Leijten, Frans
2017-10-01
The current opinion in epilepsy surgery is that successful surgery is about removing pathological cortex in the anatomic sense. This contrasts with recent developments in epilepsy research, where epilepsy is seen as a network disease. Computational models offer a framework to investigate the influence of networks, as well as local tissue properties, and to explore alternative resection strategies. Here we study, using such a model, the influence of connections on seizures and how this might change our traditional views of epilepsy surgery. We use a simple network model consisting of four interconnected neuronal populations. One of these populations can be made hyperexcitable, modeling a pathological region of cortex. Using model simulations, the effect of surgery on the seizure rate is studied. We find that removal of the hyperexcitable population is, in most cases, not the best approach to reduce the seizure rate. Removal of normal populations located at a crucial spot in the network, the "driver," is typically more effective in reducing seizure rate. This work strengthens the idea that network structure and connections may be more important than localizing the pathological node. This can explain why lesionectomy may not always be sufficient. © 2017 The Authors. Epilepsia published by Wiley Periodicals, Inc. on behalf of International League Against Epilepsy.
Network model of security system
Directory of Open Access Journals (Sweden)
Adamczyk Piotr
2016-01-01
The article presents the concept of building a network security model and its application in the process of risk analysis. It indicates the possibility of a new definition of the role of network models in safety analysis. Special attention was paid to the development of an algorithm describing the process of identifying assets, vulnerabilities and threats in a given context. The aim of the article is to present how this algorithm reduces the complexity of the problem by eliminating from the base model those components that have no links with other components; as a result, it was possible to build a network model corresponding to reality.
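The reduction step, dropping components with no links to any other component, can be sketched as follows. This is an illustrative sketch of the idea, not the authors' algorithm, and the asset names are hypothetical.

```python
def prune_unlinked(components, links):
    """Keep only components that share at least one link with another
    component; isolated components are dropped from the base model."""
    linked = {a for a, b in links} | {b for a, b in links}
    return [c for c in components if c in linked]

assets = ["server", "router", "printer", "badge-reader"]
links = [("server", "router"), ("router", "printer")]
model = prune_unlinked(assets, links)  # "badge-reader" has no links
```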
Moreno Chaparro, Nicolas
2015-06-30
We introduce a framework for model reduction of polymer chain models for dissipative particle dynamics (DPD) simulations, where the properties governing the phase equilibria such as the characteristic size of the chain, compressibility, density, and temperature are preserved. The proposed methodology reduces the number of degrees of freedom required in traditional DPD representations to model equilibrium properties of systems with complex molecules (e.g., linear polymers). Based on geometrical considerations we explicitly account for the correlation between beads in fine-grained DPD models and consistently represent the effect of these correlations in a reduced model, in a practical and simple fashion via power laws and the consistent scaling of the simulation parameters. In order to satisfy the geometrical constraints in the reduced model we introduce bond-angle potentials that account for the changes in the chain free energy after the model reduction. Following this coarse-graining process we represent high molecular weight DPD chains (i.e., ≥ 200 beads per chain) with a significant reduction in the number of particles required (i.e., ≥ 20 times the original system). We show that our methodology has potential applications modeling systems of high molecular weight molecules at large scales, such as diblock copolymer and DNA.
ICFD modeling of final settlers - developing consistent and effective simulation model structures
DEFF Research Database (Denmark)
Plósz, Benedek G.; Guyonvarch, Estelle; Ramin, Elham
Summary of key findings The concept of interpreted computational fluid dynamics (iCFD) modelling and the development methodology are presented (Fig. 1). The 1-D advection-dispersion model along with the statistically generated meta-model for pseudo-dispersion constitutes the newly developed iCFD model. The iCFD model developed for the SST through the proposed methodology is able to predict solid distribution with high accuracy -- taking a reasonable computational effort -- when compared to multi-dimensional numerical experiments, under a wide range of flow and design conditions. The iCFD models developed are intended to comply with the consistent modelling methodology (1). iCFD tools could play an important role in reliably predicting WWTP performance under normal and shock-loading (7). Background and relevance System analysis tools typically comprise numerous sub-models, identified so that the computational effort taken through system…
Dems, Maciej; Beling, Piotr; Gebski, Marcin; Piskorski, Łukasz; Walczak, Jarosław; Kuc, Maciej; Frasunkiewicz, Leszek; Wasiak, Michał; Sarzała, Robert; Czyszanowski, Tomasz
2015-03-01
In the talk we show the process of modeling complete physical properties of VCSELs and we present a step-by-step development of its complete multi-physics model, gradually improving its accuracy. Then we introduce high contrast gratings to the VCSEL design, which strongly complicates its optical modeling, making the comprehensive multi-physics VCSEL simulation a challenging task. We show, however, that a proper choice of a self-consistent simulation algorithm can still make such a simulation a feasible one, which is necessary for an efficient optimization of the laser prior to its costly manufacturing.
QSAR modelling using combined simple competitive learning networks and RBF neural networks.
Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E
2018-04-01
The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
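The two-phase scheme can be sketched compactly: winner-take-all competitive learning places the centres, then a Gaussian RBF layer with least-squares output weights does the regression. This is a generic illustration of the SCL-plus-RBF idea on synthetic one-descriptor data, not the authors' QSAR code; the learning rate, width and centre count are hypothetical.

```python
import numpy as np

def scl_centers(X, k, lr=0.1, epochs=50, seed=0):
    """Phase 1: simple competitive learning. The winning centre is
    pulled toward each presented sample."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            w = np.argmin(((centers - x) ** 2).sum(axis=1))  # winner
            centers[w] += lr * (x - centers[w])
    return centers

def rbf_fit_predict(X, y, Xq, centers, width=1.0):
    """Phase 2: Gaussian RBF features on the learned centres,
    linear output weights fitted by least squares."""
    def design(A):
        d2 = ((A[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(design(X), y, rcond=None)
    return design(Xq) @ w

# Toy "activity" data: one descriptor, smooth response
X = np.linspace(-3, 3, 40)[:, None]
y = np.sin(X[:, 0])
pred = rbf_fit_predict(X, y, X, scl_centers(X, k=8))
```

In a real QSAR setting `X` would hold molecular descriptors and `y` the measured biological activities, with external validation on held-out compounds.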
A new k-epsilon model consistent with Monin-Obukhov similarity theory
DEFF Research Database (Denmark)
van der Laan, Paul; Kelly, Mark C.; Sørensen, Niels N.
2017-01-01
A new k-ε model is introduced that is consistent with Monin–Obukhov similarity theory (MOST). The proposed k-ε model is compared with another k-ε model that was developed in an attempt to maintain inlet profiles compatible with MOST. It is shown that the previous k-ε model is not consistent with …
Data modeling of network dynamics
Jaenisch, Holger M.; Handley, James W.; Faucheux, Jeffery P.; Harris, Brad
2004-01-01
This paper highlights Data Modeling theory and its use for text data mining as a graphical network search engine. Data Modeling is then used to create a real-time filter capable of monitoring network traffic down to the port level for unusual dynamics and changes in business as usual. This is accomplished in an unsupervised fashion without a priori knowledge of abnormal characteristics. Two novel methods for converting streaming binary data into a form amenable to graphics based search and change detection are introduced. These techniques are then successfully applied to 1999 KDD Cup network attack data log-on sessions to demonstrate that Data Modeling can detect attacks without prior training on any form of attack behavior. Finally, two new methods for data encryption using these ideas are proposed.
Social network models predict movement and connectivity in ecological landscapes
Fletcher, Robert J.; Acevedo, M.A.; Reichert, Brian E.; Pias, Kyle E.; Kitchens, Wiley M.
2011-01-01
Network analysis is on the rise across scientific disciplines because of its ability to reveal complex, and often emergent, patterns and dynamics. Nonetheless, a growing concern in network analysis is the use of limited data for constructing networks. This concern is strikingly relevant to ecology and conservation biology, where network analysis is used to infer connectivity across landscapes. In this context, movement among patches is the crucial parameter for interpreting connectivity but because of the difficulty of collecting reliable movement data, most network analysis proceeds with only indirect information on movement across landscapes rather than using observed movement to construct networks. Statistical models developed for social networks provide promising alternatives for landscape network construction because they can leverage limited movement information to predict linkages. Using two mark-recapture datasets on individual movement and connectivity across landscapes, we test whether commonly used network constructions for interpreting connectivity can predict actual linkages and network structure, and we contrast these approaches to social network models. We find that currently applied network constructions for assessing connectivity consistently, and substantially, overpredict actual connectivity, resulting in considerable overestimation of metapopulation lifetime. Furthermore, social network models provide accurate predictions of network structure, and can do so with remarkably limited data on movement. Social network models offer a flexible and powerful way for not only understanding the factors influencing connectivity but also for providing more reliable estimates of connectivity and metapopulation persistence in the face of limited data.
A Complex Network Approach to Distributional Semantic Models.
Directory of Open Access Journals (Sweden)
Akira Utsumi
A number of studies on network analysis have focused on language networks based on free word association, which reflects human lexical knowledge, and have demonstrated the small-world and scale-free properties in the word association network. Nevertheless, there have been very few attempts at applying network analysis to distributional semantic models, despite the fact that these models have been studied extensively as computational or cognitive models of human lexical knowledge. In this paper, we analyze three network properties, namely, small-world, scale-free, and hierarchical properties, of semantic networks created by distributional semantic models. We demonstrate that the created networks generally exhibit the same properties as word association networks. In particular, we show that the distribution of the number of connections in these networks follows the truncated power law, which is also observed in an association network. This indicates that distributional semantic models can provide a plausible model of lexical knowledge. Additionally, the observed differences in the network properties of various implementations of distributional semantic models are consistently explained or predicted by considering the intrinsic semantic features of a word-context matrix and the functions of matrix weighting and smoothing. Furthermore, to simulate a semantic network with the observed network properties, we propose a new growing network model based on the model of Steyvers and Tenenbaum. The idea underlying the proposed model is that both preferential and random attachments are required to reflect different types of semantic relations in network growth process. We demonstrate that this model provides a better explanation of network behaviors generated by distributional semantic models.
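A growing network mixing preferential and random attachment, in the spirit of the model the abstract proposes, can be sketched as below. This is a generic illustration, not the authors' model: the mixing probability `p_pref`, the two-links-per-node rule and the seed graph are hypothetical choices.

```python
import random

def grow_network(n, m=2, p_pref=0.7, seed=0):
    """Growing network sketch: each new node attaches m links, choosing
    targets preferentially (degree-proportional) with probability p_pref
    and uniformly at random otherwise."""
    rng = random.Random(seed)
    deg = {0: 1, 1: 1}
    edges = [(0, 1)]
    endpoints = [0, 1]          # multiset of edge endpoints:
                                # sampling from it is degree-proportional
    for new in range(2, n):
        targets = set()
        while len(targets) < min(m, new):
            if rng.random() < p_pref:
                targets.add(rng.choice(endpoints))   # preferential
            else:
                targets.add(rng.randrange(new))      # random attachment
        deg[new] = 0
        for t in targets:
            edges.append((new, t))
            endpoints += [new, t]
            deg[new] += 1
            deg[t] += 1
    return deg, edges

deg, edges = grow_network(2000)
```

The preferential component produces a heavy-tailed (hub-dominated) degree distribution, while the random component keeps low-degree nodes reachable; tuning `p_pref` moves the tail between power-law and truncated forms.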
Directory of Open Access Journals (Sweden)
Meric Ataman
2017-07-01
Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks as they encapsulate all known metabolic capabilities of the organisms from genes to proteins to their functions. However, the complexity of these large metabolic networks often hinders their utility in various practical applications. Although reduced models are commonly used for modeling and for integrating experimental data, they are often inconsistent across different studies and laboratories due to different criteria and detail, which can compromise transferability of the findings and also integration of experimental data from different groups. In this study, we have developed a systematic semi-automatic approach to reduce genome-scale models into core models in a consistent and logical manner focusing on the central metabolism or subsystems of interest. The method minimizes the loss of information using an approach that combines graph-based search and optimization methods. The resulting core models are shown to be able to capture key properties of the genome-scale models and preserve consistency in terms of biomass and by-product yields, flux and concentration variability and gene essentiality. The development of these "consistently-reduced" models will help to clarify and facilitate integration of different experimental data to draw new understanding that can be directly extendable to genome-scale models.
Directory of Open Access Journals (Sweden)
Caroline Ghyoot
2017-07-01
Mixotrophy, i.e., the ability to combine phototrophy and phagotrophy in one organism, is now recognized to be widespread among photic-zone protists and to potentially modify the structure and functioning of planktonic ecosystems. However, few biogeochemical/ecological models explicitly include this mode of nutrition, owing to the large diversity of observed mixotrophic types, the few data allowing the parameterization of physiological processes, and the need to make the addition of mixotrophy into existing ecosystem models as simple as possible. We here propose and discuss a flexible model that depicts the main observed behaviors of mixotrophy in microplankton. A first model version describes constitutive mixotrophy (the organism photosynthesizes by use of its own chloroplasts). This model version offers two possible configurations, allowing the description of constitutive mixotrophs (CMs) that favor either phototrophy or heterotrophy. A second version describes non-constitutive mixotrophy (the organism performs phototrophy by use of chloroplasts acquired from its prey). The model variants were described so as to be consistent with a plankton conceptualization in which the biomass is divided into separate components on the basis of their biochemical function (Shuter approach; Shuter, 1979). The two model variants of mixotrophy can easily be implemented in ecological models that adopt the Shuter approach, such as the MIRO model (Lancelot et al., 2005), and address the challenges associated with modeling mixotrophy.
Cang, X.; Luo, W.
2017-10-01
The junction angles on Earth formed under different climatic conditions are different. Here, we investigated the junction angles on Mars. The results are consistent with a “warm” early Mars climate with precipitation.
Guinot, Vincent
2017-09-01
The Integral Porosity and Dual Integral Porosity two-dimensional shallow water models have been proposed recently as efficient upscaled models for urban floods. Very little is known so far about their consistency and wave propagation properties. Simple numerical experiments show that both models are unusually sensitive to the computational grid. In the present paper, a two-dimensional consistency and characteristic analysis is carried out for these two models. The following results are obtained: (i) the models are almost insensitive to grid design when the porosity is isotropic, (ii) anisotropic porosity fields induce an artificial polarization of the mass/momentum fluxes along preferential directions when triangular meshes are used and (iii) extra first-order derivatives appear in the governing equations when regular, quadrangular cells are used. The hyperbolic system is thus mesh-dependent, and with it the wave propagation properties of the model solutions. Criteria are derived to make the solution less mesh-dependent, but it is not certain that these criteria can be satisfied at all computational points when real-world situations are dealt with.
Thermal Network Modelling Handbook
1972-01-01
Thermal mathematical modelling is discussed in detail. A three-fold purpose was established: (1) to acquaint the new user with the terminology and concepts used in thermal mathematical modelling, (2) to present the more experienced and occasional user with quick formulas and methods for solving everyday problems, coupled with study cases which lend insight into the relationships that exist among the various solution techniques and parameters, and (3) to begin to catalog in an orderly fashion the common formulas which may be applied to automated conversational language techniques.
Service entity network virtualization architecture and model
Jin, Xue-Guang; Shou, Guo-Chu; Hu, Yi-Hong; Guo, Zhi-Gang
2017-07-01
Communication networks can be treated as complex networks carrying a variety of services, and a service can be treated as a network composed of functional entities. There is growing interest in multiplex service entities, where an individual entity or link can be used for different services simultaneously. Entities and their relationships constitute a service entity network. In this paper, we introduce a service entity network virtualization architecture, including a service entity network hierarchical model, a service entity network model, service implementation and the deployment of service entity networks. Service entity network oriented multiplex planning models were also studied, and many of these multiplex models were characterized by significant multiplexing of the links or entities across different service entity networks. Service entity networks were mapped onto shared physical resources by a dynamic resource allocation controller. The efficiency of the proposed architecture was illustrated in a simulation environment that allows for comparative performance evaluation. The results show that, compared to a traditional networking architecture, this architecture has better performance.
Polymer networks: Modeling and applications
Masoud, Hassan
Polymer networks are an important class of materials that are ubiquitously found in natural, biological, and man-made systems. The complex mesoscale structure of these soft materials has made it difficult for researchers to fully explore their properties. In this dissertation, we introduce a coarse-grained computational model for permanently cross-linked polymer networks that can properly capture common properties of these materials. We use this model to study several practical problems involving dry and solvated networks. Specifically, we analyze the permeability and diffusivity of polymer networks under mechanical deformations, we examine the release of encapsulated solutes from microgel capsules during volume transitions, and we explore the complex tribological behavior of elastomers. Our simulations reveal that the network transport properties are defined by the network porosity and by the degree of network anisotropy due to mechanical deformations. In particular, the permeability of mechanically deformed networks can be predicted based on the alignment of network filaments that is characterized by a second order orientation tensor. Moreover, our numerical calculations demonstrate that responsive microcapsules can be effectively utilized for steady and pulsatile release of encapsulated solutes. We show that swollen gel capsules allow steady, diffusive release of nanoparticles and polymer chains, whereas gel deswelling causes burst-like discharge of solutes driven by an outward flow of the solvent initially enclosed within a shrinking capsule. We further demonstrate that this hydrodynamic release can be regulated by introducing rigid microscopic rods in the capsule interior. We also probe the effects of velocity, temperature, and normal load on the sliding of elastomers on smooth and corrugated substrates. Our friction simulations predict a bell-shaped curve for the dependence of the friction coefficient on the sliding velocity. Our simulations also illustrate
A last updating evolution model for online social networks
Bu, Zhan; Xia, Zhengyou; Wang, Jiandong; Zhang, Chengcui
2013-05-01
As information technology has advanced, people are turning to electronic media more frequently for communication, and social relationships are increasingly found on online channels. However, there is very limited knowledge about the actual evolution of online social networks. In this paper, we propose and study a novel evolution network model built on the new concept of “last updating time”, which exists in many real-life online social networks. The last updating evolution network model can maintain the robustness of scale-free networks and can improve the network's resilience against intentional attacks. What is more, we also found that it has the “small-world effect”, which is an inherent property of most social networks. Simulation experiments based on this model show that the results and the real-life data are consistent, which means that our model is valid.
Target-Centric Network Modeling
DEFF Research Database (Denmark)
Mitchell, Dr. William L.; Clark, Dr. Robert M.
In Target-Centric Network Modeling: Case Studies in Analyzing Complex Intelligence Issues, authors Robert Clark and William Mitchell take an entirely new approach to teaching intelligence analysis. Unlike any other book on the market, it offers case study scenarios using actual intelligence …, and collaborative sharing in the process of creating a high-quality, actionable intelligence product. The case studies reflect the complexity of twenty-first century intelligence issues by dealing with multi-layered target networks that cut across political, economic, social, technological, and military issues. … Working through these cases, students will learn to manage and evaluate realistic intelligence accounts. …
CNEM: Cluster Based Network Evolution Model
Directory of Open Access Journals (Sweden)
Sarwat Nizamani
2015-01-01
This paper presents a network evolution model which is based on a clustering approach. The proposed approach depicts network evolution, demonstrating the formation of a network from individual nodes to a fully evolved network. An agglomerative hierarchical clustering method is applied for the evolution of the network. In the paper, we present three case studies which show the evolution of networks from scratch. These case studies include: the terrorist network of the 9/11 incidents, the terrorist network of the WMD (Weapons of Mass Destruction) plot against France, and a network of tweets discussing a topic. The network of 9/11 is also used for evaluation, using other social network analysis methods, which show that the clusters created using the proposed model of network evolution are of good quality; thus the proposed method can be used by law enforcement agencies to further investigate criminal networks.
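The agglomerative step described above can be sketched with a generic single-linkage agglomeration over a toy distance matrix; this is an illustration of the technique, not the paper's implementation, and `agglomerate` is a hypothetical name:

```python
def agglomerate(dist):
    """Single-linkage agglomerative clustering.
    dist: symmetric matrix of pairwise node distances.
    Returns the merge history as (cluster_a, cluster_b, distance) tuples,
    i.e. the 'evolution' of the network from isolated nodes to one cluster."""
    clusters = [{i} for i in range(len(dist))]
    history = []
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single linkage: distance between closest members
                d = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[2]:
                    best = (a, b, d)
        a, b, d = best
        history.append((sorted(clusters[a]), sorted(clusters[b]), d))
        clusters[a] |= clusters[b]
        del clusters[b]
    return history

# Four nodes: 0-1 and 2-3 are tightly linked pairs, loosely linked to each other.
dist = [[0, 1, 9, 9],
        [1, 0, 9, 9],
        [9, 9, 0, 2],
        [9, 9, 2, 0]]
steps = agglomerate(dist)
```

Each entry of `steps` is one stage of the evolved network, mirroring the paper's idea of building the network up from individual nodes.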
Biological transportation networks: Modeling and simulation
Albi, Giacomo
2015-09-15
We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H.H.G.; Gascuel-Odoux, C.
2014-01-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased
Neural Network Model of memory retrieval
Directory of Open Access Journals (Sweden)
Stefano eRecanatesi
2015-12-01
Human memory can store a large amount of information. Nevertheless, recalling is often a challenging task. In a classical free recall paradigm, where participants are asked to repeat a briefly presented list of words, people make mistakes for lists as short as 5 words. We present a model for memory retrieval based on a Hopfield neural network where transitions between items are determined by similarities in their long-term memory representations. Mean-field analysis of the model reveals stable states of the network corresponding (1) to single memory representations and (2) to intersections between memory representations. We show that oscillating feedback inhibition in the presence of noise induces transitions between these states, triggering the retrieval of different memories. The network dynamics qualitatively predicts the distribution of time intervals required to recall new memory items observed in experiments. It shows that items having a larger number of neurons in their representation are statistically easier to recall, and reveals possible bottlenecks in our ability to retrieve memories. Overall, we propose a neural network model of information retrieval broadly compatible with experimental observations and consistent with our recent graphical model (Romani et al., 2013).
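The retrieval mechanism builds on standard Hopfield dynamics. A minimal, generic Hopfield sketch with Hebbian storage and synchronous sign updates; this illustrates only the substrate, not the paper's oscillating-inhibition transitions between memories:

```python
import numpy as np

def train(patterns):
    """Hebbian storage: W = (1/n) * sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Synchronous sign updates; converges to a stored attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Two orthogonal +/-1 patterns over 8 units (toy memory representations).
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = train(patterns)
cue = patterns[0].astype(float)
cue[0] *= -1                     # corrupt one bit of the first memory
retrieved = recall(W, cue)       # network cleans up the noisy cue
```

In the paper's model, overlapping representations create intermediate stable states (intersections), and oscillating inhibition plus noise moves the network between such attractors.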
Behavioral Consistency of C and Verilog Programs Using Bounded Model Checking
2003-05-01
We present an algorithm that checks behavioral consistency between an ANSI-C program and a circuit given in Verilog using Bounded Model Checking. Both the circuit and the program are unwound …
Mathematical Modelling Plant Signalling Networks
Muraro, D.
2013-01-01
During the last two decades, molecular genetic studies and the completion of the sequencing of the Arabidopsis thaliana genome have increased knowledge of hormonal regulation in plants. These signal transduction pathways act in concert through gene regulatory and signalling networks whose main components have begun to be elucidated. Our understanding of the resulting cellular processes is hindered by the complex, and sometimes counter-intuitive, dynamics of the networks, which may be interconnected through feedback controls and cross-regulation. Mathematical modelling provides a valuable tool to investigate such dynamics and to perform in silico experiments that may not be easily carried out in a laboratory. In this article, we firstly review general methods for modelling gene and signalling networks and their application in plants. We then describe specific models of hormonal perception and cross-talk in plants. This mathematical analysis of sub-cellular molecular mechanisms paves the way for more comprehensive modelling studies of hormonal transport and signalling in a multi-scale setting. © EDP Sciences, 2013.
Energy modelling in sensor networks
Directory of Open Access Journals (Sweden)
D. Schmidt
2007-06-01
Wireless sensor networks are one of the key enabling technologies for the vision of ambient intelligence. Energy resources for sensor nodes are very scarce. A key challenge is the design of energy efficient communication protocols. Models of the energy consumption are needed to accurately simulate the efficiency of a protocol or application design, and can also be used for automatic energy optimizations in a model driven design process. We propose a novel methodology to create models for sensor nodes based on a few simple measurements. In a case study the methodology was used to create models for MICAz nodes. The models were integrated in a simulation environment as well as in an SDL runtime framework of a model driven design process. Measurements on a test application that was created automatically from an SDL specification showed an 80% reduction in energy consumption compared to an implementation without power saving strategies.
Probabilistic logic modeling of network reliability for hybrid network architectures
Energy Technology Data Exchange (ETDEWEB)
Wyss, G.D.; Schriner, H.K.; Gaylor, T.R.
1996-10-01
Sandia National Laboratories has found that the reliability and failure modes of current-generation network technologies can be effectively modeled using fault tree-based probabilistic logic modeling (PLM) techniques. We have developed fault tree models that include various hierarchical networking technologies and classes of components interconnected in a wide variety of typical and atypical configurations. In this paper we discuss the types of results that can be obtained from PLMs and why these results are of great practical value to network designers and analysts. After providing some mathematical background, we describe the 'plug-and-play' fault tree analysis methodology that we have developed for modeling connectivity and the provision of network services in several current-generation network architectures. Finally, we demonstrate the flexibility of the method by modeling the reliability of a hybrid example network that contains several interconnected Ethernet, FDDI, and token ring segments. 11 refs., 3 figs., 1 tab.
Generalization performance of regularized neural network models
DEFF Research Database (Denmark)
Larsen, Jan; Hansen, Lars Kai
1994-01-01
Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...
Plant Growth Models Using Artificial Neural Networks
Bubenheim, David
1997-01-01
In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.
Introducing Synchronisation in Deterministic Network Models
DEFF Research Database (Denmark)
Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.
2006-01-01
The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented leading...
Precise Network Modeling of Systems Genetics Data Using the Bayesian Network Webserver.
Ziebarth, Jesse D; Cui, Yan
2017-01-01
The Bayesian Network Webserver (BNW, http://compbio.uthsc.edu/BNW ) is an integrated platform for Bayesian network modeling of biological datasets. It provides a web-based network modeling environment that seamlessly integrates advanced algorithms for probabilistic causal modeling and reasoning with Bayesian networks. BNW is designed for precise modeling of relatively small networks that contain less than 20 nodes. The structure learning algorithms used by BNW guarantee the discovery of the best (most probable) network structure given the data. To facilitate network modeling across multiple biological levels, BNW provides a very flexible interface that allows users to assign network nodes into different tiers and define the relationships between and within the tiers. This function is particularly useful for modeling systems genetics datasets that often consist of multiscalar heterogeneous genotype-to-phenotype data. BNW enables users to, within seconds or minutes, go from having a simply formatted input file containing a dataset to using a network model to make predictions about the interactions between variables and the potential effects of experimental interventions. In this chapter, we will introduce the functions of BNW and show how to model systems genetics datasets with BNW.
Modeling the Dynamics of Compromised Networks
Energy Technology Data Exchange (ETDEWEB)
Soper, B; Merl, D M
2011-09-12
Accurate predictive models of compromised networks would contribute greatly to improving the effectiveness and efficiency of the detection and control of network attacks. Compartmental epidemiological models have been applied to modeling attack vectors such as viruses and worms. We extend the application of these models to capture a wider class of dynamics applicable to cyber security. By making basic assumptions regarding network topology we use multi-group epidemiological models and reaction rate kinetics to model the stochastic evolution of a compromised network. The Gillespie Algorithm is used to run simulations under a worst case scenario in which the intruder follows the basic connection rates of network traffic as a method of obfuscation.
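The Gillespie algorithm referenced above draws an exponentially distributed waiting time from the total event rate and then selects one event at random. A toy sketch for compromise spreading over network edges, with assumed rates and topology rather than the authors' multi-group parameterization:

```python
import random

def gillespie_si(edges, seed_node, beta=1.0, t_max=100.0, rng=None):
    """Gillespie simulation of susceptible-infected compromise spread.
    Each edge between a compromised and a clean node fires at rate beta.
    Returns the final compromised set and a (time, count) trace."""
    rng = rng or random.Random(0)
    compromised = {seed_node}
    t = 0.0
    trace = [(0.0, 1)]
    while t < t_max:
        # active events: edges with exactly one compromised endpoint
        active = [(u, v) for u, v in edges
                  if (u in compromised) != (v in compromised)]
        rate = beta * len(active)
        if rate == 0:
            break                          # absorbing state reached
        t += rng.expovariate(rate)         # exponential waiting time
        u, v = rng.choice(active)          # pick the firing edge
        compromised.add(v if u in compromised else u)
        trace.append((t, len(compromised)))
    return compromised, trace

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-node ring network
final, trace = gillespie_si(edges, seed_node=0)
```

The multi-group epidemiological extension in the paper replaces the single rate `beta` with per-group reaction rates, but the stochastic simulation loop is the same.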
RMBNToolbox: random models for biochemical networks
Directory of Open Access Journals (Sweden)
Niemi Jari
2007-05-01
Background: There is increasing interest in modeling biochemical and cell biological networks, as well as in the computational analysis of these models. The development of analysis methodologies and related software is rapid in the field. However, the number of available models is still relatively small and the model sizes remain limited. The lack of kinetic information is usually the limiting factor for the construction of detailed simulation models. Results: We present a computational toolbox for generating random biochemical network models which mimic real biochemical networks. The toolbox is called Random Models for Biochemical Networks. The toolbox works in the Matlab environment, and it makes it possible to generate various network structures, stoichiometries, kinetic laws for reactions, and parameters therein. The generation can be based on statistical rules and distributions, and more detailed information of real biochemical networks can be used in situations where it is known. The toolbox can be easily extended. The resulting network models can be exported in the format of Systems Biology Markup Language. Conclusion: While more information is accumulating on biochemical networks, random networks can be used as an intermediate step towards their better understanding. Random networks make it possible to study the effects of various network characteristics on the overall behavior of the network. Moreover, the construction of artificial network models provides the ground-truth data needed in the validation of various computational methods in the fields of parameter estimation and data analysis.
A CVAR scenario for a standard monetary model using theory-consistent expectations
DEFF Research Database (Denmark)
Juselius, Katarina
2017-01-01
A theory-consistent CVAR scenario describes a set of testable regularities capturing basic assumptions of the theoretical model. Using this concept, the paper considers a standard model for exchange rate determination and shows that all assumptions about the model's shock structure and steady … the long persistent swings in the real exchange rate and the interest rate differential.
Network Bandwidth Utilization Forecast Model on High Bandwidth Network
Energy Technology Data Exchange (ETDEWEB)
Yoo, Wucherl; Sim, Alex
2014-07-07
With the increasing number of geographically distributed scientific collaborations and the scale of the data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area network. The forecast model can improve the efficiency of resource utilization and scheduling data movements on high-bandwidth network to accommodate ever increasing data volume for large-scale scientific data applications. Univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with traditional approach such as Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt network usage change. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
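The decomposition-plus-autoregression idea can be illustrated with a deliberately simplified stand-in: per-phase seasonal means in place of STL and an AR(1) fit in place of ARIMA. Data, names, and parameters here are invented; a production model would use proper STL and ARIMA fits as the abstract describes:

```python
import numpy as np

def forecast(series, period, steps):
    """Toy seasonal decomposition + AR(1) forecast.
    Subtract the mean seasonal profile, fit an AR(1) to the remainder
    by least squares, then extrapolate both components."""
    series = np.asarray(series, dtype=float)
    phases = np.arange(len(series)) % period
    seasonal = np.array([series[phases == p].mean() for p in range(period)])
    resid = series - seasonal[phases]
    # AR(1) coefficient from lag-1 least squares (guard against zero variance)
    phi = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1] + 1e-12)
    out = []
    r = resid[-1]
    for k in range(1, steps + 1):
        r *= phi                                   # residual decays by phi
        out.append(seasonal[(len(series) + k - 1) % period] + r)
    return np.array(out)

# Synthetic path-utilization series with a clean cycle of period 4.
y = [10, 20, 30, 20] * 8
pred = forecast(y, period=4, steps=4)
```

On this noiseless toy series the residual is zero, so the forecast simply repeats the seasonal profile; with noisy SNMP data the AR term carries the short-range persistence.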
Network bandwidth utilization forecast model on high bandwidth networks
Energy Technology Data Exchange (ETDEWEB)
Yoo, Wuchert (William) [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2015-03-30
With the increasing number of geographically distributed scientific collaborations and the scale of the data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area network. The forecast model can improve the efficiency of resource utilization and scheduling data movements on high-bandwidth network to accommodate ever increasing data volume for large-scale scientific data applications. Univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with traditional approach such as Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt network usage change. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
An acoustical model based monitoring network
Wessels, P.W.; Basten, T.G.H.; Eerden, F.J.M. van der
2010-01-01
In this paper the approach for an acoustical model based monitoring network is demonstrated. This network is capable of reconstructing a noise map, based on the combination of measured sound levels and an acoustic model of the area. By pre-calculating the sound attenuation within the network the
Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems
Garg, Vikram V
2014-09-27
Background: Models based on the Helmholtz 'slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint-based numerical tools for adaptive mesh refinement and parameter sensitivity analysis. Methods: We show that the direct formulation of the 'slip' model is adjoint inconsistent, and leads to an ill-posed adjoint problem. We propose a modified formulation of the coupled 'slip' model, which is shown to be well-posed, and therefore automatically adjoint-consistent. Results: Numerical examples are presented to illustrate the computation and use of the adjoint solution in two-dimensional microfluidics problems. Conclusions: An adjoint-consistent formulation for Helmholtz 'slip' models of electroosmotic flows has been proposed. This formulation provides adjoint solutions that can be reliably used for mesh refinement and sensitivity analysis.
An adaptive complex network model for brain functional networks.
Directory of Open Access Journals (Sweden)
Ignacio J Gomez Portillo
Brain functional networks are graph representations of activity in the brain, where the vertices represent anatomical regions and the edges their functional connectivity. These networks present a robust small-world topological structure, characterized by highly integrated modules connected sparsely by long-range links. Recent studies showed that other topological properties, such as the degree distribution and the presence (or absence) of a hierarchical structure, are not robust, and show different intriguing behaviors. In order to understand the basic ingredients necessary for the emergence of these complex network structures we present an adaptive complex network model for human brain functional networks. The microscopic units of the model are dynamical nodes that represent active regions of the brain, whose interaction gives rise to complex network structures. The links between the nodes are chosen following an adaptive algorithm that establishes connections between dynamical elements with similar internal states. We show that the model is able to describe topological characteristics of human brain networks obtained from functional magnetic resonance imaging studies. In particular, when the dynamical rules of the model allow for integrated processing over the entire network, scale-free non-hierarchical networks with well-defined communities emerge. On the other hand, when the dynamical rules restrict the information to a local neighborhood, communities cluster together into larger ones, giving rise to a hierarchical structure with a truncated power-law degree distribution.
Volume of the steady-state space of financial flows in a monetary stock-flow-consistent model
Hazan, Aurélien
2016-01-01
We show that a steady-state stock-flow consistent macroeconomic model can be represented as a Constraint Satisfaction Problem (CSP). The set of solutions is a polytope, whose volume depends on the constraints applied and reveals the potential fragility of the economic circuit, with no need to specify the dynamics. Several methods to compute the volume are compared, inspired by operations research methods and the analysis of metabolic networks, both exact and approximate. We also introduce a random transaction matrix, and study the particular case of linear flows with respect to money stocks.
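The volume computation can be illustrated with the crudest possible method, rejection sampling inside a known bounding box; the paper compares more sophisticated exact and approximate algorithms, so this is only a conceptual sketch with an invented interface:

```python
import random

def polytope_volume(A, b, box, n=200000, rng=None):
    """Monte Carlo estimate of the volume of {x : A x <= b}.
    box: list of (lo, hi) bounds per dimension enclosing the polytope."""
    rng = rng or random.Random(42)
    box_vol = 1.0
    for lo, hi in box:
        box_vol *= hi - lo
    hits = 0
    for _ in range(n):
        x = [rng.uniform(lo, hi) for lo, hi in box]
        # count samples satisfying every linear constraint
        if all(sum(a_i * x_i for a_i, x_i in zip(row, x)) <= b_k
               for row, b_k in zip(A, b)):
            hits += 1
    return box_vol * hits / n

# 2-D example: the simplex x >= 0, y >= 0, x + y <= 1 (true area 0.5).
A = [[-1, 0], [0, -1], [1, 1]]
b = [0, 0, 1]
vol = polytope_volume(A, b, box=[(0.0, 1.0), (0.0, 1.0)])
```

Rejection sampling degrades quickly in high dimension, which is why methods such as hit-and-run sampling are preferred for realistic transaction matrices.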
Volume of the steady-state space of financial flows in a monetary stock-flow-consistent model
Hazan, Aurélien
2017-05-01
We show that a steady-state stock-flow consistent macro-economic model can be represented as a Constraint Satisfaction Problem (CSP). The set of solutions is a polytope, whose volume depends on the constraints applied and reveals the potential fragility of the economic circuit, with no need to study the dynamics. Several methods to compute the volume are compared, inspired by operations research methods and the analysis of metabolic networks, both exact and approximate. We also introduce a random transaction matrix, and study the particular case of linear flows with respect to money stocks.
Modeling gene regulatory networks: A network simplification algorithm
Ferreira, Luiz Henrique O.; de Castro, Maria Clicia S.; da Silva, Fabricio A. B.
2016-12-01
Boolean networks have been used for some time to model Gene Regulatory Networks (GRNs), which describe cell functions. Such models can help biologists make predictions, prognoses, and even design specialized treatments when a disturbance of the GRN leads to a diseased condition. However, the amount of information related to a GRN can be huge, making the task of inferring its Boolean network representation quite a challenge. The method shown here takes into account information about the interactome to build a network, where each node represents a protein, and uses the entropy of each node as a key to reduce the size of the network, allowing the subsequent inference process to focus only on the main protein hubs, the ones with the most potential to interfere in overall network behavior.
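The entropy-based reduction idea can be sketched as follows; the threshold rule and the function names are assumptions for illustration, not the authors' exact criterion:

```python
import math
from collections import Counter

def node_entropy(samples, node):
    """Empirical Shannon entropy (bits) of one node's Boolean state
    across a list of observed network states."""
    counts = Counter(s[node] for s in samples)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def prune(samples, n_nodes, threshold=0.5):
    """Keep only nodes whose state varies enough to carry information;
    near-constant nodes are dropped before Boolean-network inference."""
    return [i for i in range(n_nodes)
            if node_entropy(samples, i) >= threshold]

# Observed Boolean states for 3 genes; gene 2 is constitutively on.
samples = [(0, 1, 1), (1, 0, 1), (0, 0, 1), (1, 1, 1),
           (0, 1, 1), (1, 0, 1), (1, 1, 1), (0, 0, 1)]
kept = prune(samples, 3)
```

A constant node contributes nothing to the inferred update rules, so pruning it shrinks the search space without changing the recoverable dynamics.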
Validation study of the magnetically self-consistent inner magnetosphere model RAM-SCB
Yu, Yiqun; Jordanova, Vania; Zaharia, Sorin; Koller, Josef; Zhang, Jichun; Kistler, Lynn M.
2012-03-01
The validation of the magnetically self-consistent inner magnetospheric model RAM-SCB developed at Los Alamos National Laboratory is presented here. The model consists of two codes: a kinetic ring current-atmosphere interaction model (RAM) and a 3-D equilibrium magnetic field code (SCB). The validation is conducted by simulating two magnetic storm events and then comparing the model results against a variety of satellite in situ observations, including the magnetic field from Cluster and Polar spacecraft, ion differential flux from the Cluster/CODIF (Composition and Distribution Function) analyzer, and the ground-based SYM-H index. The model prediction of the magnetic field is in good agreement with observations, which indicates the model's capability of representing well the inner magnetospheric field configuration. This provides confidence for the RAM-SCB model to be utilized for field line and drift shell tracing, which are needed in radiation belt studies. While the SYM-H index, which reflects the total ring current energy content, is generally reasonably reproduced by the model using the Weimer electric field model, the modeled ion differential flux clearly depends on the electric field strength, local time, and magnetic activity level. A self-consistent electric field approach may be needed to improve the model performance in this regard.
Estimating long-term volatility parameters for market-consistent models
African Journals Online (AJOL)
Contemporary actuarial and accounting practices (APN 110 in the South African context) require the use of market-consistent models for the valuation of embedded investment derivatives. These models have to be calibrated with accurate and up-to-date market data. Arguably, the most important variable in the valuation of ...
Self-consistent dynamical models for early-type galaxies in the CALIFA Survey
Posti, L.; van de Ven, G.; Binney, J.; Nipoti, C.; Ciotti, L.
2016-01-01
We present the first application of self-consistent, continuous models with distribution functions (DFs) depending on the action integrals, to a sample of nearby early-type galaxies in the CALIFA Survey. Each model is axisymmetric, flattened, anisotropic and rotating and the total gravitational
The model of social crypto-network
Directory of Open Access Journals (Sweden)
Марк Миколайович Орел
2015-06-01
The article presents a theoretical model of a social network with an enhanced privacy-policy mechanism. It covers the problems arising in the process of implementing this type of network, and presents methods for solving them. A theoretical model of social networks with enhanced information-protection methods, based on information and communication blocks, was built.
Entropy Characterization of Random Network Models
Directory of Open Access Journals (Sweden)
Pedro J. Zufiria
2017-06-01
This paper elaborates on the Random Network Model (RNM) as a mathematical framework for modelling and analyzing the generation of complex networks. Such a framework allows the analysis of the relationship between several network characterizing features (link density, clustering coefficient, degree distribution, connectivity, etc.) and entropy-based complexity measures, providing new insight into the generation and characterization of random networks. Some theoretical and computational results illustrate the utility of the proposed framework.
The model of social crypto-network
Марк Миколайович Орел
2015-01-01
The article presents a theoretical model of a social network with an enhanced privacy-policy mechanism. It covers the problems arising in the process of implementing this type of network, and presents methods for solving them. A theoretical model of social networks with enhanced information-protection methods, based on information and communication blocks, was built.
Modeling Diagnostic Assessments with Bayesian Networks
Almond, Russell G.; DiBello, Louis V.; Moulder, Brad; Zapata-Rivera, Juan-Diego
2007-01-01
This paper defines Bayesian network models and examines their applications to IRT-based cognitive diagnostic modeling. These models are especially suited to building inference engines designed to be synchronous with the finer grained student models that arise in skills diagnostic assessment. Aspects of the theory and use of Bayesian network models…
Bayesian Network Webserver: a comprehensive tool for biological network modeling.
Ziebarth, Jesse D; Bhattacharya, Anindya; Cui, Yan
2013-11-01
The Bayesian Network Webserver (BNW) is a platform for comprehensive network modeling of systems genetics and other biological datasets. It allows users to quickly and seamlessly upload a dataset, learn the structure of the network model that best explains the data and use the model to understand relationships between network variables. Many datasets, including those used to create genetic network models, contain both discrete (e.g. genotype) and continuous (e.g. gene expression traits) variables, and BNW allows for modeling hybrid datasets. Users of BNW can incorporate prior knowledge during structure learning through an easy-to-use structural constraint interface. After structure learning, users are immediately presented with an interactive network model, which can be used to make testable hypotheses about network relationships. BNW, including a downloadable structure learning package, is available at http://compbio.uthsc.edu/BNW. (The BNW interface for adding structural constraints uses HTML5 features that are not supported by the current version of Internet Explorer. We suggest using other browsers (e.g. Google Chrome or Mozilla Firefox) when accessing BNW). ycui2@uthsc.edu. Supplementary data are available at Bioinformatics online.
Geostatistically consistent history matching of 3D oil-and-gas reservoir models
Zakirov, E. S.; Indrupskiy, I. M.; Liubimova, O. V.; Shiriaev, I. M.; Anikeev, D. P.
2017-10-01
An approach for geostatistically consistent matching of 3D flow simulation models and 3D geological models is proposed. This approach uses an optimization algorithm based on identification of the parameters of the geostatistical model (for example, the variogram parameters, such as range, sill, and nugget effect). Here, the inverse problem is considered in the greatest generality taking into account facies heterogeneity and the variogram anisotropy. The correlation dependence parameters (porosity-to-log permeability) are clarified for each single facies.
Toward a formalized account of attitudes: The Causal Attitude Network (CAN) Model
Dalege, J.; Borsboom, D.; Harreveld, F. van; Berg, H. van den; Conner, M.; Maas, H.L.J. van der
2016-01-01
This article introduces the Causal Attitude Network (CAN) model, which conceptualizes attitudes as networks consisting of evaluative reactions and interactions between these reactions. Relevant evaluative reactions include beliefs, feelings, and behaviors toward the attitude object. Interactions
An approach to identify time consistent model parameters: sub-period calibration
Directory of Open Access Journals (Sweden)
S. Gharari
2013-01-01
Full Text Available Conceptual hydrological models rely on calibration for the identification of their parameters. As these models are typically designed to reflect real catchment processes, a key objective of an appropriate calibration strategy is the determination of parameter sets that reflect a "realistic" model behavior. Previous studies have shown that parameter estimates for different calibration periods can be significantly different. This questions model transposability in time, which is one of the key conditions for the set-up of a "realistic" model. This paper presents a new approach that selects parameter sets that provide a consistent model performance in time. The approach consists of testing model performance in different periods, and selecting parameter sets that are as close as possible to the optimum of each individual sub-period. While aiding model calibration, the approach is also useful as a diagnostic tool, illustrating tradeoffs in the identification of time-consistent parameter sets. The approach is applied to a case study in Luxembourg using the HyMod hydrological model as an example.
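The selection rule described above (parameter sets as close as possible to the optimum of each individual sub-period) can be sketched as follows; the names and the min-max gap criterion are illustrative assumptions, not the paper's exact formulation.

```python
def sub_period_select(param_sets, objectives):
    """objectives maps each calibration sub-period to an error function
    (lower is better). Choose the parameter set whose worst-case gap to
    any single sub-period's own optimum is smallest."""
    best = {p: min(f(ps) for ps in param_sets) for p, f in objectives.items()}

    def worst_gap(ps):
        return max(f(ps) - best[p] for p, f in objectives.items())

    return min(param_sets, key=worst_gap)
```

A time-consistent parameter set trades a little skill in each period for robustness across all of them, which is exactly the tradeoff the diagnostic use of the approach exposes.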
Castet, Jean-Francois; Saleh, Joseph H
2013-01-01
This article develops a novel approach and algorithmic tools for the modeling and survivability analysis of networks with heterogeneous nodes, and examines their application to space-based networks. Space-based networks (SBNs) allow the sharing of spacecraft on-orbit resources, such as data storage, processing, and downlink. Each spacecraft in the network can have different subsystem composition and functionality, thus resulting in node heterogeneity. Most traditional survivability analyses of networks assume node homogeneity and, as a result, are not suited for the analysis of SBNs. This work proposes that heterogeneous networks can be modeled as interdependent multi-layer networks, which enables their survivability analysis. The multi-layer aspect captures the breakdown of the network according to common functionalities across the different nodes, and it allows the emergence of homogeneous sub-networks, while the interdependency aspect constrains the network to capture the physical characteristics of each node. Definitions of primitives of failure propagation are devised. Formal characterization of interdependent multi-layer networks, as well as algorithmic tools for the analysis of failure propagation across the network, are developed and illustrated with space applications. The SBN applications considered consist of several networked spacecraft that can tap into each other's Command and Data Handling subsystem in case of failure of their own, including the Telemetry, Tracking and Command, the Control Processor, and the Data Handling sub-subsystems. Various design insights are derived and discussed, and the capability to perform trade-space analysis with the proposed approach for various network characteristics is indicated. The select results shown here quantify the incremental survivability gains (with respect to a particular class of threats) of the SBN over the traditional monolith spacecraft. Failure of the connectivity between nodes is also examined.
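A minimal sketch of the failure-propagation primitive across interdependency links; the data layout and the spacecraft/subsystem names are hypothetical, chosen only to mirror the two-satellite scenario in the abstract.

```python
from collections import deque

def propagate_failures(depends_on, seeds):
    """depends_on: dict node -> set of nodes it requires (interdependency links).
    A node fails as soon as any node it requires has failed.
    Returns the full set of failed nodes reachable from the seed failures."""
    # Invert the dependency map: who is affected when a given node fails.
    affected = {}
    for v, reqs in depends_on.items():
        for u in reqs:
            affected.setdefault(u, set()).add(v)
    failed = set(seeds)
    queue = deque(seeds)
    while queue:  # breadth-first cascade
        u = queue.popleft()
        for v in affected.get(u, ()):
            if v not in failed:
                failed.add(v)
                queue.append(v)
    return failed
```

For example, if satellite B's Command and Data Handling taps satellite A's, and B's downlink in turn needs B's CDH, then seeding a failure at A's CDH cascades through both.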
A computational model of hemodynamic parameters in cortical capillary networks.
Safaeian, Navid; Sellier, Mathieu; David, Tim
2011-02-21
The analysis of hemodynamic parameters and functional reactivity of cerebral capillaries is still controversial. To assess the hemodynamic parameters in the cortical capillary network, a generic model was created using 2D Voronoi tessellation in which each edge represents a capillary segment. This method is capable of creating an appropriate generic model of the cerebral capillary network relating to each part of the brain cortex because the geometric model is able to vary the capillary density. The modeling presented here is based on morphometric parameters extracted from physiological data of the human cortex. The pertinent hemodynamic parameters were obtained by numerical simulation based on effective blood viscosity as a function of hematocrit and microvessel diameter, phase separation and plasma skimming effects. The hemodynamic parameters of capillary networks with two different densities (consistent with the variation of the morphometric data in the human cortical capillary network) were analyzed. The results show pertinent hemodynamic parameters for each model. The heterogeneity (coefficient of variation) and the mean value of hematocrits, flow rates and velocities of both network models were specified. The distributions of blood flow throughout both models seem to confirm the hypothesis that all capillaries in a cortical network are recruited at rest (normal condition). The results also demonstrate a discrepancy in network resistance between the two models, which derives from the difference in the number density of capillary segments between the models. Copyright © 2010 Elsevier Ltd. All rights reserved.
Development of a Model for Dynamic Recrystallization Consistent with the Second Derivative Criterion
Directory of Open Access Journals (Sweden)
Muhammad Imran
2017-11-01
Full Text Available Dynamic recrystallization (DRX) processes are widely used in industrial hot working operations, not only to keep the forming forces low but also to control the microstructure and final properties of the workpiece. According to the second derivative criterion (SDC) by Poliak and Jonas, the onset of DRX can be detected from an inflection point in the strain-hardening rate as a function of flow stress. Various models are available that can predict the evolution of flow stress from incipient plastic flow up to steady-state deformation in the presence of DRX. Some of these models have been implemented into finite element codes and are widely used for the design of metal forming processes, but their consistency with the SDC has not been investigated. This work identifies three sources of inconsistencies that models for DRX may exhibit. For a consistent modeling of the DRX kinetics, a new strain-hardening model for the hardening stages III to IV is proposed and combined with consistent recrystallization kinetics. The model is devised in the Kocks-Mecking space based on characteristic transition in the strain-hardening rate. A linear variation of the transition and inflection points is observed for alloy 800H at all tested temperatures and strain rates. The comparison of experimental and model results shows that the model is able to follow the course of the strain-hardening rate very precisely, such that highly accurate flow stress predictions are obtained.
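The SDC detection step, locating the inflection of the strain-hardening rate theta = d(sigma)/d(epsilon) as a function of flow stress sigma, can be sketched with finite differences; the synthetic theta(sigma) curve in the test is illustrative only, not alloy data.

```python
def inflection_point(sigma, theta):
    """Locate the inflection of theta(sigma) via a sign change of the
    discrete second derivative; returns the flow stress there, or None."""
    # Central first derivative: d1[j] approximates theta'(sigma[j+1]).
    d1 = [(theta[i + 1] - theta[i - 1]) / (sigma[i + 1] - sigma[i - 1])
          for i in range(1, len(sigma) - 1)]
    # Central second derivative: d2[k] approximates theta''(sigma[k+2]).
    d2 = [(d1[i + 1] - d1[i - 1]) / (sigma[i + 2] - sigma[i])
          for i in range(1, len(d1) - 1)]
    for k in range(len(d2) - 1):
        if d2[k] == 0.0 or d2[k] * d2[k + 1] < 0.0:
            return sigma[k + 2]
    return None
```

On experimental data one would smooth theta(sigma) first, since second derivatives amplify measurement noise.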
A consistency assessment of coupled cohesive zone models for mixed-mode debonding problems
Directory of Open Access Journals (Sweden)
R. Dimitri
2014-07-01
Full Text Available Due to their simplicity, cohesive zone models (CZMs) are very attractive to describe mixed-mode failure and debonding processes of materials and interfaces. Although a large number of coupled CZMs have been proposed, and despite the extensive related literature, little attention has been devoted to ensuring the consistency of these models for mixed-mode conditions, primarily in a thermodynamical sense. A lack of consistency may affect the local or global response of a mechanical system. This contribution deals with the consistency check for some widely used exponential and bilinear mixed-mode CZMs. The coupling effect on stresses and energy dissipation is first investigated and the path-dependence of the mixed-mode debonding work of separation is analytically evaluated. Analytical predictions are also compared with results from numerical implementations, where the interface is described with zero-thickness contact elements. A node-to-segment strategy is here adopted, which incorporates decohesion and contact within a unified framework. A new thermodynamically consistent mixed-mode CZ model based on a reformulation of the Xu-Needleman model as modified by van den Bosch et al. is finally proposed and derived by applying the Coleman and Noll procedure in accordance with the second law of thermodynamics. The model holds monolithically for loading and unloading processes, as well as for decohesion and contact, and its performance is demonstrated through suitable examples.
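For concreteness, the exponential normal traction in the van den Bosch et al. reformulation of the Xu-Needleman model can be evaluated directly (parameter names are illustrative); one of the consistency properties at stake is that the normal work of separation integrates exactly to the fracture energy phi_n.

```python
import math

def traction_normal(dn, dt, phi_n, delta_n, delta_t):
    """Normal traction of the exponential mixed-mode CZM:
    dn, dt are the normal and tangential separations; delta_n, delta_t
    the characteristic opening lengths; phi_n the normal fracture energy."""
    return (phi_n / delta_n) * (dn / delta_n) * math.exp(-dn / delta_n) \
           * math.exp(-(dt / delta_t) ** 2)

def normal_work_of_separation(phi_n, delta_n, delta_t, steps=20000, span=20.0):
    """Trapezoidal integral of T_n over dn at dt = 0; should recover phi_n."""
    h = span * delta_n / steps
    ts = [traction_normal(i * h, 0.0, phi_n, delta_n, delta_t)
          for i in range(steps + 1)]
    return h * (0.5 * ts[0] + sum(ts[1:-1]) + 0.5 * ts[-1])
```

The peak normal traction occurs at dn = delta_n, with value phi_n / (delta_n * e), which is a quick sanity check for any implementation.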
Object Oriented Modeling Of Social Networks
Zeggelink, Evelien P.H.; Oosten, Reinier van; Stokman, Frans N.
1996-01-01
The aim of this paper is to explain principles of object oriented modeling in the scope of modeling dynamic social networks. As such, the approach of object oriented modeling is advocated within the field of organizational research that focuses on networks. We provide a brief introduction into the
Bayesian estimation of the network autocorrelation model
Dittrich, D.; Leenders, R.T.A.J.; Mulder, J.
2017-01-01
The network autocorrelation model has been extensively used by researchers interested in modeling social influence effects in social networks. The most common inferential method in the model is classical maximum likelihood estimation. This approach, however, has known problems such as negative bias of
Agent-based modeling and network dynamics
Namatame, Akira
2016-01-01
The book integrates agent-based modeling and network science. It is divided into three parts, namely, foundations, primary dynamics on and of social networks, and applications. The book begins with the network origin of agent-based models, known as cellular automata, and introduces a number of classic models, such as Schelling’s segregation model and Axelrod’s spatial game. The essence of the foundation part is the network-based agent-based models in which agents follow network-based decision rules. Under the influence of the substantial progress in network science in the late 1990s, these models have been extended from using lattices into using small-world networks, scale-free networks, etc. The book also shows that the modern network science mainly driven by game-theorists and sociophysicists has inspired agent-based social scientists to develop alternative formation algorithms, known as agent-based social networks. The book reviews a number of pioneering and representative models in this family. Upon the gi...
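As a concrete taste of the classic models mentioned, here is a minimal Schelling segregation sweep on a toroidal grid; the rules are deliberately simplified (unhappy agents jump to a uniformly random empty cell) and the parameter values are illustrative.

```python
import random

def schelling_step(grid, threshold):
    """One sweep of a minimal Schelling model.
    grid: dict (row, col) -> 0 (empty), 1 or 2 (agent type), on a torus.
    An agent is happy if at least `threshold` of its occupied Moore
    neighbors share its type; unhappy agents move to a random empty cell."""
    rows = 1 + max(r for r, _ in grid)
    cols = 1 + max(c for _, c in grid)

    def happy(r, c):
        me = grid[(r, c)]
        nbrs = [grid[((r + dr) % rows, (c + dc) % cols)]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
        same = sum(1 for n in nbrs if n == me)
        occupied = sum(1 for n in nbrs if n != 0)
        return occupied == 0 or same / occupied >= threshold

    moved = 0
    for cell in list(grid):
        if grid[cell] != 0 and not happy(*cell):
            empties = [c for c in grid if grid[c] == 0]
            if empties:
                dest = random.choice(empties)
                grid[dest], grid[cell] = grid[cell], 0
                moved += 1
    return moved
```

Even this toy version exhibits the model's signature result: mild individual preferences produce strongly segregated global patterns after a few sweeps.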
A model of yeast glycolysis based on a consistent kinetic characterisation of all its enzymes.
Smallbone, Kieran; Messiha, Hanan L; Carroll, Kathleen M; Winder, Catherine L; Malys, Naglis; Dunn, Warwick B; Murabito, Ettore; Swainston, Neil; Dada, Joseph O; Khan, Farid; Pir, Pınar; Simeonidis, Evangelos; Spasić, Irena; Wishart, Jill; Weichart, Dieter; Hayes, Neil W; Jameson, Daniel; Broomhead, David S; Oliver, Stephen G; Gaskell, Simon J; McCarthy, John E G; Paton, Norman W; Westerhoff, Hans V; Kell, Douglas B; Mendes, Pedro
2013-09-02
We present an experimental and computational pipeline for the generation of kinetic models of metabolism, and demonstrate its application to glycolysis in Saccharomyces cerevisiae. Starting from an approximate mathematical model, we employ a "cycle of knowledge" strategy, identifying the steps with most control over flux. Kinetic parameters of the individual isoenzymes within these steps are measured experimentally under a standardised set of conditions. Experimental strategies are applied to establish a set of in vivo concentrations for isoenzymes and metabolites. The data are integrated into a mathematical model that is used to predict a new set of metabolite concentrations and reevaluate the control properties of the system. This bottom-up modelling study reveals that control over the metabolic network most directly involved in yeast glycolysis is more widely distributed than previously thought. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
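The finding that flux control is widely distributed can be made concrete with a toy two-step pathway (the rate laws and values below are invented for illustration, not the yeast model): numerically computed flux control coefficients obey the summation theorem, C1 + C2 = 1.

```python
def steady_state_flux(e1, e2, s0=10.0):
    """Toy pathway S0 -> S1 -> P with v1 = e1*(s0 - s1) (reversible first
    step) and v2 = e2*s1/(1 + s1) (saturable second step).
    Solve v1 = v2 for s1 by bisection and return the steady-state flux."""
    lo, hi = 0.0, s0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if e1 * (s0 - mid) > e2 * mid / (1.0 + mid):
            lo = mid  # v1 still exceeds v2: root lies above mid
        else:
            hi = mid
    s1 = 0.5 * (lo + hi)
    return e2 * s1 / (1.0 + s1)

def flux_control_coefficients(e1, e2, h=1e-4):
    """Scaled sensitivities C_i = (dJ/de_i)*(e_i/J) by forward differences."""
    j = steady_state_flux(e1, e2)
    c1 = (steady_state_flux(e1 * (1 + h), e2) - j) / (j * h)
    c2 = (steady_state_flux(e1, e2 * (1 + h)) - j) / (j * h)
    return c1, c2
```

Because both rates are proportional to their enzyme levels, the coefficients sum to one; neither enzyme has all the control, which is the qualitative point of the abstract.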
Application Interaction Model for Opportunistic Networking
de Souza Schwartz, Ramon; van Dijk, H.W.; Scholten, Johan
In Opportunistic Networks, autonomous nodes discover, assess and potentially seize opportunities for communication and distributed processing whenever these emerge. In this paper, we consider prerequisites for a successful implementation of such a way of processing in networks that consist mainly of
Modeling data throughput on communication networks
Energy Technology Data Exchange (ETDEWEB)
Eldridge, J.M.
1993-11-01
New challenges in high performance computing and communications are driving the need for fast, geographically distributed networks. Applications such as modeling physical phenomena, interactive visualization, large data set transfers, and distributed supercomputing require high performance networking [St89][Ra92][Ca92]. One measure of a communication network's performance is the time it takes to complete a task -- such as transferring a data file or displaying a graphics image on a remote monitor. Throughput, defined as the ratio of the number of useful data bits transmitted to the time required to transmit those bits, is a useful gauge of how well a communication system meets this performance measure. This paper develops and describes an analytical model of throughput. The model is a tool network designers can use to predict network throughput. It also provides insight into those parts of the network that act as a performance bottleneck.
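The throughput definition in the abstract (useful bits divided by transfer time) can be sketched as a simple analytical function; the specific overhead terms chosen here (one round trip of setup latency plus per-packet headers) are illustrative assumptions, not the paper's model.

```python
import math

def throughput_bps(payload_bits, bandwidth_bps, rtt_s,
                   pkt_payload_bits, header_bits):
    """Effective throughput = useful bits / total time, charging one RTT
    of setup latency plus a fixed header overhead per packet on the wire."""
    n_pkts = math.ceil(payload_bits / pkt_payload_bits)
    wire_bits = payload_bits + n_pkts * header_bits
    return payload_bits / (rtt_s + wire_bits / bandwidth_bps)
```

Small transfers are latency-bound and see only a fraction of the link rate; large transfers approach the header-limited asymptote, which is the kind of bottleneck insight such a model exposes.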
Spatial coincidence modelling, automated database updating and data consistency in vector GIS
Kufoniyi, O.
1995-01-01
This thesis presents formal approaches for automated database updating and consistency control in vector- structured spatial databases. To serve as a framework, a conceptual data model is formalized for the representation of geo-data from multiple map layers in which a map layer denotes a
TOPOLOGICALLY CONSISTENT MODELS FOR EFFICIENT BIG GEO-SPATIO-TEMPORAL DATA DISTRIBUTION
Directory of Open Access Journals (Sweden)
M. W. Jahn
2017-10-01
Full Text Available Geo-spatio-temporal topology models are likely to become a key concept to check the consistency of 3D (spatial space) and 4D (spatial + temporal space) models for emerging GIS applications such as subsurface reservoir modelling or the simulation of energy and water supply of mega or smart cities. Furthermore, the data management for complex models consisting of big geo-spatial data is a challenge for GIS and geo-database research. General challenges, concepts, and techniques of big geo-spatial data management are presented. In this paper we introduce a sound mathematical approach for a topologically consistent geo-spatio-temporal model based on the concept of the incidence graph. We redesign DB4GeO, our service-based geo-spatio-temporal database architecture, on the way to the parallel management of massive geo-spatial data. Approaches for a new geo-spatio-temporal and object model of DB4GeO meeting the requirements of big geo-spatial data are discussed in detail. Finally, a conclusion and outlook on our future research are given on the way to support the processing of geo-analytics and -simulations in a parallel and distributed system environment.
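The incidence-graph idea can be sketched with a minimal consistency check on a vertex-edge-face complex (the data layout is hypothetical): every edge must join two distinct vertices, and each face boundary must close into a cycle, i.e. every vertex on the boundary is incident to exactly two of its edges.

```python
def consistent_complex(edges, faces):
    """edges: {edge_id: (v1, v2)}; faces: {face_id: [edge_ids]} giving each
    face's boundary. Returns True iff the incidence relations are consistent:
    edges join two distinct vertices, and each face boundary is a closed
    edge cycle (every boundary vertex appears in exactly two edges)."""
    for v1, v2 in edges.values():
        if v1 == v2:
            return False
    for eids in faces.values():
        count = {}
        for e in eids:
            for v in edges[e]:
                count[v] = count.get(v, 0) + 1
        if any(c != 2 for c in count.values()):
            return False
    return True
```

For a closed surface such as a tetrahedron boundary, the check passes and the Euler characteristic V - E + F = 2 provides an independent global sanity test.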
Euser, Tanja; Winsemius, Hessel; Hrachowitz, Markus; Savenije, Hubert
2014-05-01
One of the main questions in hydrological modelling is how to design conceptual rainfall-runoff models with a higher degree of realism. It is expected that if models have a higher degree of realism, their predictive power will increase. One frequently discussed option is the use of more spatial information, which is increasingly available. Information with spatial variability can be found for example for forcing data, elevation, land use, etc. The abundance of spatially variable data requires the modeller to carefully select which data add realism to the model and which data only add complexity. An additional complication is further that the spatial detail required is a function of the time scales of the forcing data and the required output. The amount of spatially variable data available can guide the choice of an adequate distribution level of a model. As it is often difficult to determine the most suitable level of distribution for a certain catchment, this study systematically evaluates the value of incorporating distributed forcing data and distributed model structures in a stepwise approach for the Ourthe catchment (Belgium). The distribution of the model structures is based on landscape heterogeneity, using both elevation data and land use data. Eight different model configurations are tested: a lumped and a distributed model structure, each with lumped and stepwise distributed fluxes and stocks. To stepwise distribute the fluxes and stocks, the distributed forcing data is sequentially kept distributed for each reservoir of the model. To compare the degree of realism of the different configurations, both model performance and consistency are compared. Performance describes the ability of a model configuration to mimic a specific part of the hydrological behaviour in a specific catchment. Consistency describes the ability of a model configuration to adequately reproduce several hydrological signatures simultaneously. FARM (Framework to Assess the Realism of
DEFF Research Database (Denmark)
Churchill, Nathan William; Madsen, Kristoffer Hougaard; Mørup, Morten
2016-01-01
The brain consists of specialized cortical regions that exchange information between each other, reflecting a combination of segregated (local) and integrated (distributed) processes that define brain function. Functional magnetic resonance imaging (fMRI) is widely used to characterize these functional relationships, although it is an ongoing challenge to develop robust, interpretable models for high-dimensional fMRI data. Gaussian mixture models (GMMs) are a powerful tool for parcellating the brain, based on the similarity of voxel time series. However, conventional GMMs have limited parametric ... brain regions where network expression predicts subject age in the experimental data. Thus, the FSIM is effective at summarizing functional connectivity structure in group-level fMRI, with applications in modeling the relationships between network variability and behavioral/demographic variables.
A Fully Consistent Hidden Semi-Markov Model-Based Speech Recognition System
Oura, Keiichiro; Zen, Heiga; Nankaku, Yoshihiko; Lee, Akinobu; Tokuda, Keiichi
In a hidden Markov model (HMM), state duration probabilities decrease exponentially with time, which fails to adequately represent the temporal structure of speech. One of the solutions to this problem is integrating state duration probability distributions explicitly into the HMM. This form is known as a hidden semi-Markov model (HSMM). However, though a number of attempts to use HSMMs in speech recognition systems have been proposed, they are not consistent because various approximations were used in both training and decoding. By avoiding these approximations using a generalized forward-backward algorithm, a context-dependent duration modeling technique and weighted finite-state transducers (WFSTs), we construct a fully consistent HSMM-based speech recognition system. In a speaker-dependent continuous speech recognition experiment, our system achieved about 9.1% relative error reduction over the corresponding HMM-based system.
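The core difference the abstract describes, geometric HMM state durations versus explicit HSMM duration distributions, is easy to see numerically; the shifted-Poisson choice for the HSMM is an illustrative assumption, not the paper's duration model.

```python
import math

def hmm_duration_pmf(self_prob, d):
    """In a standard HMM with self-transition probability a, the time spent
    in a state is geometric: p(d) = a**(d-1) * (1 - a), maximal at d = 1
    and decaying exponentially, regardless of the true duration statistics."""
    return self_prob ** (d - 1) * (1 - self_prob)

def hsmm_duration_pmf(mean, d):
    """An HSMM replaces this with an explicit distribution, here a Poisson
    shifted to d >= 1, which can peak at a realistic duration."""
    lam = mean - 1.0
    return math.exp(-lam) * lam ** (d - 1) / math.factorial(d - 1)
```

The geometric law forces short durations to be the most probable, which is the mismatch with speech segment lengths that motivates the HSMM formulation.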
The fundamental solution for a consistent complex model of the shallow shell equations
Directory of Open Access Journals (Sweden)
Matthew P. Coleman
1999-09-01
Full Text Available The calculation of the Fourier transforms of the fundamental solution in shallow shell theory ostensibly was accomplished by J. L. Sanders [J. Appl. Mech. 37 (1970), 361-366]. However, as is shown in detail in this paper, the complex model used by Sanders is, in fact, inconsistent. This paper provides a consistent version of Sanders's complex model, along with the Fourier transforms of the fundamental solution for this corrected model. The inverse Fourier transforms are then calculated for the particular cases of the shallow spherical and circular cylindrical shells, and the results of the latter are seen to be in agreement with results appearing elsewhere in the literature.
Modelling, Synthesis, and Configuration of Networks-on-Chips
DEFF Research Database (Denmark)
Stuart, Matthias Bo
This thesis presents three contributions in two different areas of network-on-chip and system-on-chip research: application modelling, and identifying and solving optimization problems related to two specific network-on-chip architectures. The contribution related to application modelling is an analytical method for deriving the worst-case traffic pattern caused by an application and the cache-coherence protocol in a cache-coherent shared-memory system. The contributions related to network-on-chip optimization problems consist of two parts: the development and evaluation of six heuristics for solving the network synthesis problem in the MANGO network-on-chip, and the identification and formalization of the ReNoC configuration problem together with three heuristics for solving it.
A Consistent Methodology Based Parameter Estimation for a Lactic Acid Bacteria Fermentation Model
DEFF Research Database (Denmark)
Spann, Robert; Roca, Christophe; Kold, David
2017-01-01
mechanisms in a lactic acid bacteria fermentation. We present here a consistent approach for a methodology-based parameter estimation for a lactic acid fermentation. In the beginning, just an initial knowledge-based guess of parameters was available, and an initial parameter estimation of the complete set ... parameter estimation, including an evaluation of the correlation and confidence intervals of those parameters to double-check identifiability issues. Such a consistent approach supports process modelling and understanding as, e.g., one avoids questionable interpretations caused by estimates of actually ...
Settings in Social Networks : a Measurement Model
Schweinberger, Michael; Snijders, Tom A.B.
2003-01-01
A class of statistical models is proposed that aims to recover latent settings structures in social networks. Settings may be regarded as clusters of vertices. The measurement model is based on two assumptions. (1) The observed network is generated by hierarchically nested latent transitive
Spinal Cord Injury Model System Information Network
The University of Alabama at Birmingham Spinal Cord Injury Model System (UAB-SCIMS) maintains this Information Network ...
Radio Channel Modeling in Body Area Networks
An, L.; Bentum, Marinus Jan; Meijerink, Arjan; Scanlon, W.G.
2009-01-01
A body area network (BAN) is a network of bodyworn or implanted electronic devices, including wireless sensors which can monitor body parameters or detect movements. One of the big challenges in BANs is the propagation channel modeling. Channel models can be used to understand wave propagation
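A common starting point for such channel models is the log-distance path-loss form PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma, with a lognormal shadowing term X_sigma; the numeric values below are illustrative placeholders, not measured on-body parameters from the cited work.

```python
import math
import random

def path_loss_db(d, d0=0.1, pl0_db=35.0, n=4.2, sigma_db=0.0, rng=None):
    """Log-distance path loss in dB at distance d (meters).
    d0: reference distance, pl0_db: loss at d0, n: path-loss exponent,
    sigma_db: shadowing std dev (drawn from rng when nonzero)."""
    shadow = rng.gauss(0.0, sigma_db) if rng is not None and sigma_db else 0.0
    return pl0_db + 10.0 * n * math.log10(d / d0) + shadow
```

On-body links typically show much larger exponents n than free space, which is one reason BAN channel modeling is called out as a challenge.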
Radio channel modeling in body area networks
An, L.; Bentum, Marinus Jan; Meijerink, Arjan; Scanlon, W.G.
2010-01-01
A body area network (BAN) is a network of bodyworn or implanted electronic devices, including wireless sensors which can monitor body parameters or to detect movements. One of the big challenges in BANs is the propagation channel modeling. Channel models can be used to understand wave propagation in
Network interconnections: an architectural reference model
Butscher, B.; Lenzini, L.; Morling, R.; Vissers, C.A.; Popescu-Zeletin, R.; van Sinderen, Marten J.; Heger, D.; Krueger, G.; Spaniol, O.; Zorn, W.
1985-01-01
One of the major problems in understanding the different approaches in interconnecting networks of different technologies is the lack of reference to a general model. The paper develops the rationales for a reference model of network interconnection and focuses on the architectural implications for
Self-Consistent Ring Current/Electromagnetic Ion Cyclotron Waves Modeling
Khazanov, G. V.; Gamayunov, K.; Gallagher, D.
2006-12-01
This study addresses the self-consistent treatment of ring current (RC) ion dynamics and electromagnetic ion cyclotron (EMIC) waves, which are thought to exert important influences on dynamic ion evolution and are an important missing element in our understanding of the storm- and recovery-time ring current evolution. For example, the EMIC waves cause the RC decay on a time scale of about one hour or less during the main phase of storms. The oblique EMIC waves damp due to Landau resonance with the thermal plasmaspheric electrons, and subsequent transport of the dissipating wave energy into the ionosphere below causes an ionosphere temperature enhancement. Under certain conditions, relativistic electrons, with energies ~1 MeV, can be removed from the outer radiation belt by EMIC wave scattering during a magnetic storm. That is why the modeling of EMIC waves is a critical and timely issue in magnetospheric physics. This study will generalize the self-consistent theoretical description of RC ions and EMIC waves that has been developed by Khazanov et al. [2002, 2003] and include the heavy ions and propagation effects of EMIC waves in global dynamic modeling of self-consistent RC - EMIC waves coupling. The results of our newly developed model will be presented, focusing mainly on the dynamics of EMIC waves and comparison of these results with the previous global RC modeling studies devoted to EMIC waves formation. We will also discuss RC ion precipitations and wave induced thermal electron fluxes into the ionosphere.
Consistent modeling of scalar mixing for presumed, multiple parameter probability density functions
Mortensen, Mikael
2005-01-01
In this Brief Communication we describe a consistent method for calculating the conditional scalar dissipation (or diffusion) rate for inhomogeneous turbulent flows. The model follows from the transport equation for the conserved scalar probability density function (PDF) using a gradient diffusion closure for the conditional mean velocity and a presumed PDF depending on any number of mixture fraction moments. With the presumed β PDF, the model is an inhomogeneous modification to the homogeneous model of Girimaji ["On the modeling of scalar diffusion in isotropic turbulence," Phys. Fluids A 4, 2529 (1992)]. An important feature of the model is that it makes the classical approach to the conditional moment closure completely conservative for inhomogeneous flows.
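The presumed beta PDF in such models is fixed by its first two mixture-fraction moments; the moment-matching step uses the standard beta-distribution relations (a sketch, not code from the cited work):

```python
def beta_params_from_moments(mean, var):
    """Shape parameters (a, b) of a beta PDF with the given mean m and
    variance v; requires 0 < m < 1 and 0 < v < m*(1 - m) for realizability."""
    if not (0.0 < mean < 1.0) or not (0.0 < var < mean * (1.0 - mean)):
        raise ValueError("moments outside the realizable range")
    a = mean * (mean * (1.0 - mean) / var - 1.0)
    b = (1.0 - mean) / mean * a
    return a, b
```

Recovering the input moments from (a, b) via mean = a/(a+b) and var = a*b/((a+b)**2 * (a+b+1)) is a quick consistency check on any implementation.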
Performance modeling of network data services
Energy Technology Data Exchange (ETDEWEB)
Haynes, R.A.; Pierson, L.G.
1997-01-01
Networks at major computational organizations are becoming increasingly complex. The introduction of large massively parallel computers and supercomputers with gigabyte memories are requiring greater and greater bandwidth for network data transfers to widely dispersed clients. For networks to provide adequate data transfer services to high performance computers and remote users connected to them, the networking components must be optimized from a combination of internal and external performance criteria. This paper describes research done at Sandia National Laboratories to model network data services and to visualize the flow of data from source to sink when using the data services.
Detecting consistent patterns of directional adaptation using differential selection codon models.
Parto, Sahar; Lartillot, Nicolas
2017-06-23
Phylogenetic codon models are often used to characterize the selective regimes acting on protein-coding sequences. Recent methodological developments have led to models explicitly accounting for the interplay between mutation and selection, by modeling the amino acid fitness landscape along the sequence. However, thus far, most of these models have assumed that the fitness landscape is constant over time. Fluctuations of the fitness landscape may often be random or depend on complex and unknown factors. However, some organisms may be subject to systematic changes in selective pressure, resulting in reproducible molecular adaptations across independent lineages subject to similar conditions. Here, we introduce a codon-based differential selection model, which aims to detect and quantify the fine-grained consistent patterns of adaptation at the protein-coding level, as a function of external conditions experienced by the organism under investigation. The model parameterizes the global mutational pressure, as well as the site- and condition-specific amino acid selective preferences. This phylogenetic model is implemented in a Bayesian MCMC framework. After validation with simulations, we applied our method to a dataset of HIV sequences from patients with known HLA genetic background. Our differential selection model detects and characterizes differentially selected coding positions specifically associated with two different HLA alleles. Our differential selection model is able to identify consistent molecular adaptations as a function of repeated changes in the environment of the organism. These models can be applied to many other problems, ranging from viral adaptation to evolution of life-history strategies in plants or animals.
Consistent spectral predictors for dynamic causal models of steady-state responses.
Moran, Rosalyn J; Stephan, Klaas E; Dolan, Raymond J; Friston, Karl J
2011-04-15
Dynamic causal modelling (DCM) for steady-state responses (SSR) is a framework for inferring the mechanisms that underlie observed electrophysiological spectra, using biologically plausible generative models of neuronal dynamics. In this paper, we examine the dynamic repertoires of nonlinear conductance-based neural population models and propose a generative model of their power spectra. Our model comprises an ensemble of interconnected excitatory and inhibitory cells, where synaptic currents are mediated by fast, glutamatergic and GABAergic receptors and slower voltage-gated NMDA receptors. We explore two formulations of how hidden neuronal states (depolarisation and conductances) interact: through their mean and variance (mean-field model) or through their mean alone (neural-mass model). Both rest on a nonlinear Fokker-Planck description of population dynamics, which can exhibit bifurcations (phase transitions). We first characterise these phase transitions numerically: by varying critical model parameters, we elicit both fixed points and quasiperiodic dynamics that reproduce the spectral characteristics (~2-100 Hz) of real electrophysiological data. We then introduce a predictor of spectral activity using centre manifold theory and linear stability analysis. This predictor is based on sampling the system's Jacobian over the orbits of hidden neuronal states. This predictor behaves consistently and smoothly in the region of phase transitions, which permits the use of gradient descent methods for model inversion. We demonstrate this by inverting generative models (DCMs) of SSRs, using simulated data that entails phase transitions. Copyright © 2011 Elsevier Inc. All rights reserved.
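The core of the spectral predictor, linear stability analysis of the Jacobian, can be illustrated on a toy system: the magnitude of the resolvent (iωI − J)⁻¹ of a fixed point's Jacobian peaks near the system's resonant frequency. This is a hedged sketch with hypothetical parameters; the paper samples the Jacobian over the orbits of hidden states rather than at a single fixed point.

```python
import math

def resolvent_gain(J, f):
    # Frobenius norm of the 2x2 resolvent (i*2*pi*f*I - J)^(-1),
    # a simple proxy for the spectral power of the linearised system
    s = 2j * math.pi * f
    a, b = s - J[0][0], -J[0][1]
    c, d = -J[1][0], s - J[1][1]
    det = a * d - b * c
    return sum(abs(z) ** 2 for z in (d / det, -b / det, -c / det, a / det)) ** 0.5

# damped-oscillator Jacobian with a ~10 Hz natural frequency (hypothetical values)
omega0, zeta = 2.0 * math.pi * 10.0, 0.1
J = [[0.0, 1.0], [-omega0 ** 2, -2.0 * zeta * omega0]]
freqs = [1.0 + 0.1 * k for k in range(291)]  # 1..30 Hz
spec = [resolvent_gain(J, f) for f in freqs]
peak = freqs[spec.index(max(spec))]  # resonance emerges near 10 Hz
```

Because the resolvent varies smoothly with the Jacobian's entries, a predictor built this way behaves well under gradient-based model inversion, which is the property the paper exploits.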
Learning Bayesian Network Model Structure from Data
National Research Council Canada - National Science Library
Margaritis, Dimitris
2003-01-01
In this thesis I address the important problem of the determination of the structure of directed statistical models, with the widely used class of Bayesian network models as a concrete vehicle of my ideas...
NC truck network model development research.
2008-09-01
This research develops a validated prototype truck traffic network model for North Carolina. The model includes all counties and metropolitan areas of North Carolina and major economic areas throughout the U.S. Geographic boundaries, population a...
Hybrid modeling and empirical analysis of automobile supply chain network
Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying
2017-05-01
Based on the connection mechanism of nodes which automatically select upstream and downstream agents, a simulation model for the dynamic evolutionary process of a consumer-driven automobile supply chain is established by integrating ABM and discrete modeling in a GIS-based map. First, the model's rationality is demonstrated by analyzing the consistency of sales and of changes in various agent parameters between the simulation model and a real automobile supply chain. Second, through complex network theory, hierarchical structures of the model and relationships of networks at different levels are analyzed to calculate characteristic parameters such as mean distance, mean clustering coefficients, and degree distributions, verifying that the model is a typical scale-free and small-world network. Finally, the motion law of the model is analyzed from the perspective of complex self-adaptive systems. The chaotic state of the simulation system is verified, which suggests that this system has typical nonlinear characteristics. This model not only macroscopically illustrates the dynamic evolution of complex networks of the automobile supply chain but also microcosmically reflects the business process of each agent. Moreover, the construction and simulation of the system by combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as a theoretical basis and experience for the supply chain analysis of auto companies.
Network models in economics and finance
Pardalos, Panos; Rassias, Themistocles
2014-01-01
Using network models to investigate the interconnectivity in modern economic systems allows researchers to better understand and explain some economic phenomena. This volume presents contributions by known experts and active researchers in economic and financial network modeling. Readers are provided with an understanding of the latest advances in network analysis as applied to economics, finance, corporate governance, and investments. Moreover, recent advances in market network analysis that focus on influential techniques for market graph analysis are also examined. Young researchers will find this volume particularly useful in facilitating their introduction to this new and fascinating field. Professionals in economics, financial management, various technologies, and network analysis, will find the network models presented in this book beneficial in analyzing the interconnectivity in modern economic systems.
Modelling the structure of complex networks
DEFF Research Database (Denmark)
Herlau, Tue
A complex network is a system in which a discrete set of units interact in a quantifiable manner. Representing systems as complex networks has become increasingly popular in a variety of scientific fields including biology, social sciences and economics. Parallel to this development, complex networks have been independently studied as mathematical objects in their own right. As such, there has been both an increased demand for statistical methods for complex networks as well as a quickly growing mathematical literature on the subject. In this dissertation we explore aspects of modelling complex networks. The next chapters treat some of the various symmetries, representer theorems and probabilistic structures often deployed in the modelling of complex networks, the construction of sampling methods and various network models. The introductory chapters serve to provide context for the included written work.
A Network Formation Model Based on Subgraphs
Chandrasekhar, Arun
2016-01-01
We develop a new class of random-graph models for the statistical estimation of network formation that allow for substantial correlation in links. Various subgraphs (e.g., links, triangles, cliques, stars) are generated and their union results in a network. We provide estimation techniques for recovering the rates at which the underlying subgraphs were formed. We illustrate the models via a series of applications including testing for incentives to form cross-caste relationships in rural India, testing to see whether network structure is used to enforce risk-sharing, testing as to whether networks change in response to a community's exposure to microcredit, and show that these models significantly outperform stochastic block models in matching observed network characteristics. We also establish asymptotic properties of the models and various estimators, which requires proving a new Central Limit Theorem for correlated random variables.
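The generative step described above can be sketched in a toy form: sample random links and random triangles independently, then take the union of all their edges to form the network. Estimating the underlying formation rates from an observed graph, the paper's main contribution, is not attempted here.

```python
import random

def subgraph_union_network(n, n_links, n_triangles, seed=0):
    # toy version of the subgraph-generation idea: independently sampled
    # links and triangles, with the final network being the union of
    # every edge they contain
    rng = random.Random(seed)
    edges = set()
    for _ in range(n_links):
        u, v = rng.sample(range(n), 2)
        edges.add((min(u, v), max(u, v)))
    for _ in range(n_triangles):
        a, b, c = rng.sample(range(n), 3)
        for u, v in ((a, b), (b, c), (a, c)):
            edges.add((min(u, v), max(u, v)))
    return edges

E = subgraph_union_network(50, 40, 15)
```

Because triangles are injected directly, the resulting graphs carry far more clustering than an Erdős–Rényi graph with the same edge count, which is the kind of link correlation the model is built to capture.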
Directory of Open Access Journals (Sweden)
John (Jack) P. Riegel III
2016-04-01
Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets, but the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot-line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries, such as edge effects or layering, that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth-of-penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a
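The paper feeds an empirically fitted flow stress into the Walker–Anderson analytical model. As a simpler stand-in in the same spirit, the classic Alekseevskii–Tate eroding-rod equations below show how a single target resistance term (playing the role of an effective flow stress) yields a depth of penetration. This is a hedged sketch, not the WAPEN model, and all material values are illustrative only.

```python
import math

def tate_penetration(v0, L0, rho_p, rho_t, Yp, Rt, dt=1e-8):
    # Alekseevskii-Tate eroding-rod sketch (assumes rho_p != rho_t):
    #   rho_p*(v-u)^2/2 + Yp = rho_t*u^2/2 + Rt   (interface balance)
    #   dL/dt = -(v - u),  dv/dt = -Yp/(rho_p*L),  dP/dt = u
    # Rt plays the role of the target's effective flow stress.
    v, L, P = v0, L0, 0.0
    while v > 0.0 and L > 1e-6:
        A = 0.5 * (rho_p - rho_t)
        B = -rho_p * v
        C = 0.5 * rho_p * v * v - (Rt - Yp)
        disc = B * B - 4.0 * A * C
        if disc < 0.0:
            break
        roots = [(-B + s * math.sqrt(disc)) / (2.0 * A) for s in (-1.0, 1.0)]
        u = min((r for r in roots if 0.0 < r <= v), default=0.0)
        if u <= 0.0:
            break  # rod no longer penetrates (rigid-body phase ignored)
        P += u * dt
        L -= (v - u) * dt
        v -= Yp / (rho_p * L) * dt
    return P

# tungsten-like rod into a steel-like target (illustrative numbers only)
P = tate_penetration(v0=1500.0, L0=0.1, rho_p=17600.0, rho_t=7850.0,
                     Yp=1.5e9, Rt=5.0e9)
```

As in the paper's workflow, only two strength numbers (Yp, Rt) are needed to get a penetration depth; a full constitutive description of the materials never enters.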
Gossip spread in social network Models
Johansson, Tobias
2017-04-01
Gossip almost inevitably arises in real social networks. In this article we investigate the relationship between the number of friends of a person and limits on how far gossip about that person can spread in the network. How far gossip travels in a network depends on two sets of factors: (a) factors determining gossip transmission from one person to the next and (b) factors determining network topology. For a simple model where gossip is spread among people who know the victim it is known that a standard scale-free network model produces a non-monotonic relationship between number of friends and expected relative spread of gossip, a pattern that is also observed in real networks (Lind et al., 2007). Here, we study gossip spread in two social network models (Toivonen et al., 2006; Vázquez, 2003) by exploring the parameter space of both models and fitting them to a real Facebook data set. Both models can produce the non-monotonic relationship of real networks more accurately than a standard scale-free model while also exhibiting more realistic variability in gossip spread. Of the two models, the one given in Vázquez (2003) best captures both the expected values and variability of gossip spread.
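The basic spreading rule, gossip that starts with one friend of the victim and travels only along edges between the victim's friends, can be sketched directly; the graph and the measured quantity (relative spread) follow the Lind et al.-style setup described above, though the exact conventions are hedged.

```python
import random
from collections import deque

def gossip_spread(adj, victim, rng):
    # gossip about `victim` starts at one random friend and then travels
    # only along edges between the victim's friends; returns the fraction
    # of friends eventually informed (the relative spread)
    friends = set(adj[victim])
    if not friends:
        return 0.0
    start = rng.choice(sorted(friends))
    informed, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nb in adj[node]:
            if nb in friends and nb not in informed:
                informed.add(nb)
                queue.append(nb)
    return len(informed) / len(friends)

# victim 0 has three friends; only friends 1 and 2 know each other,
# so the spread is 2/3 or 1/3 depending on where the gossip starts
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
spread = gossip_spread(adj, 0, random.Random(1))
```

Averaging this quantity over start nodes and over victims of a given degree produces the spread-versus-number-of-friends curves the article compares across network models.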
Synergistic effects in threshold models on networks
Juul, Jonas S.; Porter, Mason A.
2018-01-01
Network structure can have a significant impact on the propagation of diseases, memes, and information on social networks. Different types of spreading processes (and other dynamical processes) are affected by network architecture in different ways, and it is important to develop tractable models of spreading processes on networks to explore such issues. In this paper, we incorporate the idea of synergy into a two-state ("active" or "passive") threshold model of social influence on networks. Our model's update rule is deterministic, and the influence of each meme-carrying (i.e., active) neighbor can—depending on a parameter—either be enhanced or inhibited by an amount that depends on the number of active neighbors of a node. Such a synergistic system models social behavior in which the willingness to adopt either accelerates or saturates in a way that depends on the number of neighbors who have adopted that behavior. We illustrate that our model's synergy parameter has a crucial effect on system dynamics, as it determines whether degree-k nodes are possible or impossible to activate. We simulate synergistic meme spreading on both random-graph models and networks constructed from empirical data. Using a heterogeneous mean-field approximation, which we derive under the assumption that a network is locally tree-like, we are able to determine which synergy-parameter values allow degree-k nodes to be activated for many networks and for a broad family of synergistic models.
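One plausible instantiation of the deterministic synergy rule is sketched below: each of a passive node's m active neighbours contributes 1 + β(m − 1), so the total peer influence m(1 + β(m − 1)) is enhanced (β > 0) or inhibited (β < 0) by the number of active neighbours. This is a hedged reading of the update rule, not necessarily the paper's exact formula.

```python
def synergy_update(adj, active, beta, threshold):
    # one deterministic, synchronous update of the two-state model:
    # a passive node with m active neighbours activates when
    # m * (1 + beta*(m-1)) >= threshold * degree
    new_active = set(active)
    for node, nbrs in adj.items():
        if node in active or not nbrs:
            continue
        m = sum(1 for nb in nbrs if nb in active)
        if m * (1.0 + beta * (m - 1)) >= threshold * len(nbrs):
            new_active.add(node)
    return new_active

# complete graph on 5 nodes, two initially active nodes
K5 = {i: [j for j in range(5) if j != i] for i in range(5)}
seed = {0, 1}
boosted = synergy_update(K5, seed, beta=0.0, threshold=0.5)    # meme spreads
damped = synergy_update(K5, seed, beta=-0.6, threshold=0.5)    # meme stalls
```

The example illustrates the abstract's central point: the sign and magnitude of the synergy parameter alone decide whether nodes of a given degree can ever be activated.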
Optimized null model for protein structure networks.
Milenković, Tijana; Filippis, Ioannis; Lappe, Michael; Przulj, Natasa
2009-06-26
Much attention has recently been given to the statistical significance of topological features observed in biological networks. Here, we consider residue interaction graphs (RIGs) as network representations of protein structures with residues as nodes and inter-residue interactions as edges. Degree-preserving randomized models have been widely used for this purpose in biomolecular networks. However, such a single summary statistic of a network may not be detailed enough to capture the complex topological characteristics of protein structures and their network counterparts. Here, we investigate a variety of topological properties of RIGs to find a well fitting network null model for them. The RIGs are derived from a structurally diverse protein data set at various distance cut-offs and for different groups of interacting atoms. We compare the network structure of RIGs to several random graph models. We show that 3-dimensional geometric random graphs, that model spatial relationships between objects, provide the best fit to RIGs. We investigate the relationship between the strength of the fit and various protein structural features. We show that the fit depends on protein size, structural class, and thermostability, but not on quaternary structure. We apply our model to the identification of significantly over-represented structural building blocks, i.e., network motifs, in protein structure networks. As expected, choosing geometric graphs as a null model results in the most specific identification of motifs. Our geometric random graph model may facilitate further graph-based studies of protein conformation space and have important implications for protein structure comparison and prediction. The choice of a well-fitting null model is crucial for finding structural motifs that play an important role in protein folding, stability and function. To our knowledge, this is the first study that addresses the challenge of finding an optimized null model for RIGs, by
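The null model the study settles on, a 3-dimensional geometric random graph, is easy to generate: points are placed uniformly at random in the unit cube and connected whenever they lie within a cutoff distance, mirroring the distance cut-offs used to build RIGs. The parameters below are illustrative.

```python
import math
import random

def geometric_random_graph(n, radius, seed=0):
    # 3-D geometric random graph: n points uniform in the unit cube,
    # with an edge whenever the Euclidean distance is <= radius
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random(), rng.random()) for _ in range(n)]
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(pts[i], pts[j]) <= radius}
    return pts, edges

pts, edges = geometric_random_graph(60, 0.3)
```

Unlike degree-preserving rewiring, this construction bakes in spatial neighbourhood structure, which is why it reproduces the clustering and motif statistics of residue interaction graphs far better than a single degree-sequence summary can.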
Energy Technology Data Exchange (ETDEWEB)
Weimer-Jehle, Wolfgang; Wassermann, Sandra; Kosow, Hannah [Internationales Zentrum fuer Kultur- und Technikforschung an der Univ. Stuttgart (Germany). ZIRN Interdisziplinaerer Forschungsschwerpunkt Risiko und Nachhaltige Technikentwicklung
2011-04-15
Model-based environmental scenarios normally require multiple framework assumptions regarding future social, political and economic developments (external developments). In most cases these framework assumptions are highly uncertain. Furthermore, different external developments are not isolated from each other, and their interdependences can be described only by qualitative judgments. If the internal consistency of framework assumptions is not methodologically addressed, environmental models risk being based on inconsistent combinations of framework assumptions which do not appropriately reflect the existing relations between the respective factors. This report aims at demonstrating how consistent context scenarios can be developed with the help of cross-impact balance analysis (CIB). This method allows not only for the internal consistency of the framework assumptions of a single model but also for the overall consistency of framework assumptions across modeling instruments, supporting the integrated interpretation of the results of different models. In order to demonstrate the method, in a first step ten common framework assumptions were chosen and their possible future developments until 2030 were described. In a second step, a qualitative impact network was developed based on expert elicitation. The impact network provided the basis for a qualitative but systematic analysis of the internal consistency of combinations of framework assumptions. This analysis was carried out with the CIB method and resulted in a set of consistent context scenarios. These scenarios can be used as an informative background for defining framework assumptions for environmental models at the UBA. (orig.)
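The CIB consistency test described above can be sketched as follows: a scenario (one state per framework assumption) is consistent when, for every descriptor, the chosen state's impact balance, the sum of cross-impact judgments received from the other descriptors' chosen states, is not beaten by any alternative state. The descriptors, states, and judgment values below are invented purely for illustration.

```python
def impact_balances(scenario, impacts, states, descriptor):
    # impact balance of each state of `descriptor`, i.e. the summed
    # cross-impacts it receives from the other descriptors' chosen states
    return {s: sum(impacts.get((d2, s2, descriptor, s), 0)
                   for d2, s2 in scenario.items() if d2 != descriptor)
            for s in states[descriptor]}

def is_consistent(scenario, impacts, states):
    # CIB rule: every chosen state must attain the maximal impact balance
    return all(
        impact_balances(scenario, impacts, states, d)[chosen]
        >= max(impact_balances(scenario, impacts, states, d).values())
        for d, chosen in scenario.items())

# two hypothetical framework assumptions with two states each
states = {"economy": ["growth", "stagnation"], "policy": ["strict", "lax"]}
impacts = {
    ("economy", "growth", "policy", "strict"): 2,
    ("economy", "growth", "policy", "lax"): -1,
    ("economy", "stagnation", "policy", "strict"): -1,
    ("economy", "stagnation", "policy", "lax"): 2,
    ("policy", "strict", "economy", "growth"): 1,
    ("policy", "strict", "economy", "stagnation"): -1,
    ("policy", "lax", "economy", "growth"): -1,
    ("policy", "lax", "economy", "stagnation"): 1,
}
ok = is_consistent({"economy": "growth", "policy": "strict"}, impacts, states)
bad = is_consistent({"economy": "growth", "policy": "lax"}, impacts, states)
```

Enumerating all state combinations and keeping the consistent ones yields the set of context scenarios the report uses as background for model framework assumptions.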
Modeling of axonal endoplasmic reticulum network by spastic paraplegia proteins.
Yalçın, Belgin; Zhao, Lu; Stofanko, Martin; O'Sullivan, Niamh C; Kang, Zi Han; Roost, Annika; Thomas, Matthew R; Zaessinger, Sophie; Blard, Olivier; Patto, Alex L; Sohail, Anood; Baena, Valentina; Terasaki, Mark; O'Kane, Cahir J
2017-07-25
Axons contain a smooth tubular endoplasmic reticulum (ER) network that is thought to be continuous with ER throughout the neuron; the mechanisms that form this axonal network are unknown. Mutations affecting reticulon or REEP proteins, with intramembrane hairpin domains that model ER membranes, cause an axon degenerative disease, hereditary spastic paraplegia (HSP). We show that Drosophila axons have a dynamic axonal ER network, which these proteins help to model. Loss of HSP hairpin proteins causes ER sheet expansion, partial loss of ER from distal motor axons, and occasional discontinuities in axonal ER. Ultrastructural analysis reveals an extensive ER network in axons, which shows larger and fewer tubules in larvae that lack reticulon and REEP proteins, consistent with loss of membrane curvature. Therefore HSP hairpin-containing proteins are required for shaping and continuity of axonal ER, thus suggesting roles for ER modeling in axon maintenance and function.
Joint Modelling of Structural and Functional Brain Networks
DEFF Research Database (Denmark)
Andersen, Kasper Winther; Herlau, Tue; Mørup, Morten
Functional and structural magnetic resonance imaging have become the most important noninvasive windows to the human brain. A major challenge in the analysis of brain networks is to establish the similarities and dissimilarities between functional and structural connectivity. We formulate a non-parametric Bayesian network model which allows for joint modelling and integration of multiple networks. We demonstrate the model's ability to detect vertices that share structure across networks jointly in functional MRI (fMRI) and diffusion MRI (dMRI) data. Using two fMRI and dMRI scans per subject, we establish significant structures that are consistently shared across subjects and data splits. This provides an unsupervised approach for the modeling of structure-function relations in the brain and a general framework for multimodal integration.
Towards Reproducible Descriptions of Neuronal Network Models
Nordlie, Eilen; Gewaltig, Marc-Oliver; Plesser, Hans Ekkehard
2009-01-01
Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing—and thinking about—complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain. PMID:19662159
Fast reconstruction of compact context-specific metabolic network models.
Directory of Open Access Journals (Sweden)
Nikos Vlassis
2014-01-01
Systemic approaches to the study of a biological cell or tissue rely increasingly on the use of context-specific metabolic network models. The reconstruction of such a model from high-throughput data can routinely involve large numbers of tests under different conditions and extensive parameter tuning, which calls for fast algorithms. We present fastcore, a generic algorithm for reconstructing context-specific metabolic network models from global genome-wide metabolic network models such as Recon X. fastcore takes as input a core set of reactions that are known to be active in the context of interest (e.g., cell or tissue), and it searches for a flux-consistent subnetwork of the global network that contains all reactions from the core set and a minimal set of additional reactions. Our key observation is that a minimal consistent reconstruction can be defined via a set of sparse modes of the global network, and fastcore iteratively computes such a set via a series of linear programs. Experiments on liver data demonstrate speedups of several orders of magnitude, and significantly more compact reconstructions, over a rival method. Given its simplicity and its excellent performance, fastcore can form the backbone of many future metabolic network reconstruction algorithms.
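fastcore itself certifies flux consistency with a series of linear programs; as a much cruder stdlib-only stand-in, the sketch below repeatedly drops reactions that touch dead-end metabolites (never produced or never consumed), which removes one obvious class of flux-inconsistent reactions. The toy network, reaction names, and the irreversibility assumption are all illustrative.

```python
def prune_dead_ends(reactions):
    # reactions: name -> (substrate set, product set), assumed irreversible.
    # Repeatedly delete any reaction that consumes a metabolite nothing
    # produces, or produces one nothing consumes. (A crude stand-in for a
    # real flux-consistency test; fastcore uses linear programs instead.)
    active = dict(reactions)
    changed = True
    while changed:
        changed = False
        produced = set().union(*(prods for _, prods in active.values()))
        consumed = set().union(*(subs for subs, _ in active.values()))
        for name, (subs, prods) in list(active.items()):
            if not subs <= produced or not prods <= consumed:
                del active[name]
                changed = True
    return set(active)

# tiny toy network: an uptake, a conversion, a sink, and a dead branch
reactions = {
    "uptake": (set(), {"A"}),       # exchange reaction importing A
    "conv": ({"A"}, {"B"}),
    "sink": ({"B"}, set()),         # exchange reaction exporting B
    "dead": ({"C"}, {"D"}),         # C is never produced -> inconsistent
}
kept = prune_dead_ends(reactions)
```

A genuine consistency check must additionally rule out reactions that cannot carry non-zero steady-state flux for stoichiometric reasons, which is exactly what the LP-based sparse modes in fastcore provide.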
A Self-Consistent Model for Thermal Oxidation of Silicon at Low Oxide Thickness
Directory of Open Access Journals (Sweden)
Gerald Gerlach
2016-01-01
Thermal oxidation of silicon belongs to the most decisive steps in microelectronic fabrication because it allows creating electrically insulating areas which enclose electrically conductive devices and device areas, respectively. Deal and Grove developed the first model (DG-model) for the thermal oxidation of silicon, describing the oxide thickness versus oxidation time relationship with very good agreement for oxide thicknesses of more than 23 nm. Their approach, termed the general relationship, is the basis of many similar investigations. However, measurement results show that the DG-model does not apply to very thin oxides in the range of a few nm. Additionally, it is inherently not self-consistent. The aim of this paper is to develop a self-consistent model that is based on the continuity equation instead of Fick's law, as the DG-model is. As literature data show, the relationship between silicon oxide thickness and oxidation time is governed, down to oxide thicknesses of just a few nm, by a power-of-time law. Given the time-independent surface concentration of oxidants at the oxide surface, Fickian diffusion seems to be negligible for oxidant migration. The oxidant flux has been revealed to be carried by non-Fickian flux processes depending on sites able to lodge dopants (oxidants), the so-called DOCC-sites, as well as on the dopant jump rate.
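The Deal–Grove general relationship referenced above, x² + Ax = B(t + τ), has a closed-form solution for the oxide thickness, with the familiar linear regime x ≈ (B/A)t at short times and the parabolic regime x ≈ √(Bt) at long times. The coefficient values below are illustrative, not fitted data.

```python
import math

def deal_grove_thickness(t, A, B, tau=0.0):
    # Deal-Grove relation x^2 + A*x = B*(t + tau), solved for the oxide
    # thickness x >= 0; A, B, tau in consistent units (here um, um^2/h, h)
    return 0.5 * A * (math.sqrt(1.0 + 4.0 * B * (t + tau) / A ** 2) - 1.0)

# illustrative coefficients of the order used for dry oxidation
A, B = 0.165, 0.0117  # um, um^2/h

x_short = deal_grove_thickness(0.1, A, B)    # ~linear regime: x ~ (B/A)*t
x_long = deal_grove_thickness(100.0, A, B)   # ~parabolic regime: x ~ sqrt(B*t)
```

The paper's point is precisely that this curve, whatever A and B, cannot reproduce the power-of-time behaviour observed below a few nm, motivating the non-Fickian, continuity-equation-based model.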
SALT Spectropolarimetry and Self-Consistent SED and Polarization Modeling of Blazars
Böttcher, Markus; van Soelen, Brian; Britto, Richard; Buckley, David; Marais, Johannes; Schutte, Hester
2017-09-01
We report on recent results from a target-of-opportunity program to obtain spectropolarimetry observations with the Southern African Large Telescope (SALT) of flaring gamma-ray blazars. SALT spectropolarimetry and contemporaneous multi-wavelength spectral energy distribution (SED) data are being modelled self-consistently with a leptonic single-zone model. Such modeling provides an accurate estimate of the degree of order of the magnetic field in the emission region and of the thermal contributions (from the host galaxy and the accretion disk) to the SED, thus putting strong constraints on the physical parameters of the gamma-ray emitting region. For the specific case of the gamma-ray blazar 4C+01.02, we demonstrate that the combined SED and spectropolarimetry modeling constrains the mass of the central black hole in this blazar to M_BH ~ 10^9 solar masses.
Characterization and Modeling of Network Traffic
DEFF Research Database (Denmark)
Shawky, Ahmed; Bergheim, Hans; Ragnarsson, Olafur
2011-01-01
This paper attempts to characterize and model backbone network traffic using a small number of statistics, in order to reduce the cost and processing power associated with traffic analysis. The parameters affecting the behaviour of network traffic are investigated, and we find that inter-arrival time, IP addresses, port numbers and transport protocol are the only parameters necessary to model network traffic behaviour. In order to recreate this behaviour, a complex model is needed which is able to recreate traffic behaviour based on a set of statistics calculated from the parameter values. The model investigates the traffic generation mechanisms, grouping traffic into flows and applications.
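The regeneration step, turning a handful of statistics back into synthetic traffic, can be sketched as follows: exponential inter-arrival times plus empirical port and protocol frequencies. The distributional choices and all parameter values are illustrative assumptions, not the paper's fitted model.

```python
import random

def generate_flows(n, mean_iat, port_weights, proto_weights, seed=0):
    # regenerate synthetic traffic records from summary statistics:
    # exponentially distributed inter-arrival times plus categorical
    # draws for destination port and transport protocol
    rng = random.Random(seed)
    ports, pw = zip(*port_weights.items())
    protos, rw = zip(*proto_weights.items())
    t, flows = 0.0, []
    for _ in range(n):
        t += rng.expovariate(1.0 / mean_iat)  # seconds between arrivals
        flows.append((t, rng.choices(ports, pw)[0], rng.choices(protos, rw)[0]))
    return flows

# illustrative statistics: 10 ms mean inter-arrival, web-heavy port mix
flows = generate_flows(1000, 0.01,
                       {80: 0.6, 443: 0.3, 22: 0.1},
                       {"TCP": 0.9, "UDP": 0.1})
```

A fuller generator in the paper's spirit would also condition on source/destination address statistics and group packets into flows and applications rather than emitting independent records.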
Self-consistent modeling of jet formation process in the nanosecond laser pulse regime
Mézel, C.; Hallo, L.; Souquet, A.; Breil, J.; Hébert, D.; Guillemot, F.
2009-12-01
Laser induced forward transfer (LIFT) is a direct printing technique; interest in it continues to increase because of its high application potential. LIFT is routinely used in printing, spray generation and thermal-spike sputtering. Biological material such as cells and proteins has already been transferred successfully for the creation of biological microarrays. Recently, modeling has been used to explain parts of the ejection transfer process, but no global modeling strategy is currently available. In this paper, a hydrodynamic code is utilized to model the jet formation process and to estimate the stresses experienced by the bioelements during the transfer. A self-consistent model that includes laser energy absorption, plasma formation via ablation, and hydrodynamic processes is proposed and confirmed with experimental results. Fundamental physical mechanisms are presented via one-dimensional modeling, and simplified two-dimensional (2D) solutions of the jet formation model equations are proposed. The model predicts whether a jet forms and its velocity. The 2D simulation results are in good agreement with a simple model presented by a previous investigator.
Modeling, Optimization & Control of Hydraulic Networks
DEFF Research Database (Denmark)
Tahavori, Maryamsadat
2014-01-01
A key issue in water networks is pressure management. By reducing the pressure in the water network, leakage can be reduced significantly, along with the amount of energy consumed. The primary purpose of this work is to develop control algorithms for pressure control in water supply systems. The nonlinear network model is derived based on circuit theory. A suitable projection is used to reduce the state vector and to express the model in standard state-space form. Then, the controllability of nonlinear nonaffine hydraulic networks is studied using the Lie-algebra-based controllability matrix, and nonlinear optimal control problems are solved. In the water supply system model, the hydraulic resistance of the valve is estimated from real data and is treated as a disturbance, updated every 24 hours based on the amount of water used by consumers each day.
Self-consistent atmosphere modeling with cloud formation for low-mass stars and exoplanets
Juncher, Diana; Jørgensen, Uffe G.; Helling, Christiane
2017-12-01
Context. Low-mass stars and extrasolar planets have ultra-cool atmospheres where a rich chemistry occurs and clouds form. The increasing number of spectroscopic observations of extrasolar planets requires self-consistent model atmosphere simulations that consistently include the formation processes that determine cloud formation and their feedback onto the atmosphere. Aims: Our aim is to complement the MARCS model atmosphere suite with simulations applicable to low-mass stars and exoplanets in preparation for E-ELT, JWST, PLATO and other upcoming facilities. Methods: The MARCS code calculates stellar atmosphere models, providing self-consistent solutions of the radiative transfer and the atmospheric structure and chemistry. We combine MARCS with a kinetic model that describes cloud formation in ultra-cool atmospheres (seed formation, growth/evaporation, gravitational settling, convective mixing, element depletion). Results: We present a small grid of self-consistently calculated atmosphere models for Teff = 2000-3000 K with solar initial abundances and log (g) = 4.5. Cloud formation in stellar and sub-stellar atmospheres appears for Teff < 2700 K and has a significant effect on the structure and the spectrum of the atmosphere for Teff < 2400 K. We have compared the synthetic spectra of our models with observed spectra and found that they fit the spectra of mid- to late-type M-dwarfs and early-type L-dwarfs well. The geometrical extension of the atmospheres (at τ = 1) changes with wavelength, resulting in a flux variation of 10%. This translates into a change in geometrical extension of the atmosphere of about 50 km, which is the quantitative basis for exoplanetary transit spectroscopy. We also test DRIFT-MARCS for an example exoplanet and demonstrate that our simulations reproduce the Spitzer observations for WASP-19b rather well for Teff = 2600 K, log (g) = 3.2 and solar abundances. Our model points at an exoplanet with a deep cloud-free atmosphere with a substantial
A network model of the interbank market
Li, Shouwei; He, Jianmin; Zhuang, Yaming
2010-12-01
This work introduces a network model of an interbank market based on interbank credit lending relationships, which generates several network features identified through empirical analysis. The critical issue in constructing an interbank network is deciding the edges among banks, which is realized in this paper based on the banks' degree of trust. Through simulation analysis of the interbank network model, some typical structural features are identified in our interbank network that are also known to exist in real interbank networks: a low clustering coefficient and a relatively short average path length, community structures, and two-power-law distributions of out-degree and in-degree.
Model for Microcirculation Transportation Network Design
Directory of Open Access Journals (Sweden)
Qun Chen
2012-01-01
The idea of microcirculation transportation was proposed to shunt heavy traffic on arterial roads through branch roads. An optimization model for designing a microcirculation transportation network was developed to pick out branch roads as traffic-shunting channels and determine their required capacity, minimizing the total reconstruction expense and land occupancy subject to saturation and reconstruction space constraints, while accounting for the route choice behaviour of network users. Since the microcirculation transportation network design problem includes both discrete and continuous variables, a discretization method was developed to convert the two groups of variables into one group of new discrete variables, transforming the mixed network design problem into a new kind of discrete network design problem with multiple values. A genetic algorithm was proposed to solve the new discrete network design problem. Finally, a numerical example demonstrates the efficiency of the model and algorithm.
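Once the design variables are discretized as described above, the problem can be attacked with a standard genetic algorithm. The sketch below is illustrative only: the candidate roads, cost table, the "level doubles as capacity" encoding, the demand value, and the penalty weight are all invented for the example, not taken from the paper.

```python
import random

# Hypothetical data: each candidate branch road i can be built to capacity
# level 0 (not selected), 1, or 2; COST[i][level] is its reconstruction expense.
random.seed(1)
N_ROADS = 10
DEMAND = 8  # total shunting capacity required (assumed units)
COST = [[0, random.randint(2, 5), random.randint(6, 12)] for _ in range(N_ROADS)]

def fitness(chrom):
    """Penalized objective: total cost plus a large penalty if capacity < demand."""
    cost = sum(COST[i][g] for i, g in enumerate(chrom))
    capacity = sum(chrom)  # a road's level doubles as its capacity (illustrative)
    return cost + 1000 * max(0, DEMAND - capacity)

def genetic_search(pop_size=40, generations=200, p_mut=0.1):
    pop = [[random.choice([0, 1, 2]) for _ in range(N_ROADS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_ROADS)  # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N_ROADS):            # per-gene mutation
                if random.random() < p_mut:
                    child[i] = random.choice([0, 1, 2])
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = genetic_search()
print("best design:", best, "objective:", fitness(best))
```

The penalty term turns the capacity constraint into a soft constraint, so an off-the-shelf GA can handle the multi-valued discrete variables directly.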
Directory of Open Access Journals (Sweden)
Diego Bellan
2016-06-01
This paper deals with a rigorous and mathematically consistent technique for circuit analysis of modern electrical power systems consisting of the interconnection of three-phase components and single-phase active loads. The standard technique based on the symmetrical components transformation is commonly used in the analysis of symmetrical three-phase systems. Nowadays, however, the evolution of power systems towards custom power conditioning (e.g., active filtering) and the smart grid model requires the inclusion of single-phase active loads in the analytical tool. Starting from the symmetrical components transformation in its rational form instead of its classical form, a rigorous circuit representation of the interconnection of a three-phase system with single-phase active loads is derived in the paper. The proposed circuit representation allows the analysis of complex power systems by means of basic circuit techniques. In particular, the paper focuses on the evaluation of the zero-sequence component of the currents in any branch of the power system. The application of the proposed circuit technique is demonstrated through an example consisting of the analysis of an active filter designed to force to zero the current in the fourth wire of the mains.
Self-consistent generation of continental crust in global mantle convection models
Jain, Charitra; Rozel, Antoine; Tackley, Paul
2017-04-01
Numerical modeling commonly shows that mantle convection and continents have strong feedbacks on each other (Philips and Coltice, JGR 2010; Heron and Lowman, JGR 2014), but the continents are always inserted a priori while basaltic (oceanic) crust is generated self-consistently in such models (Rolf et al., EPSL 2012). We aim to implement self-consistent generation of continental crust in global models of mantle convection using StagYY (Tackley, PEPI 2008). The silica-rich continental crust appears to have been formed by fractional melting and crystallization in episodes of relatively rapid growth from late Archean to late Proterozoic eras (3-1 Ga) (Hawkesworth & Kemp, Nature 2006). It takes several stages of differentiation to generate continental crust. First, basaltic magma is extracted from the pyrolitic mantle. Second, it goes through eclogitic transformation and then partially melts to form Na-rich Tonalite-Trondhjemite-Granodiorite (TTG), which rises to form proto-continents (Rudnick, Nature 1995; Herzberg & Rudnick, Lithos 2012). TTGs dominate the grey gneiss complexes which make up most of the continental crust. Based on the melting conditions proposed by Moyen (Lithos, 2011), we parameterize TTG formation and, hence, the formation of continental crust. Continental crust can also be destroyed by subduction or delamination. We will investigate continental growth and destruction history in models spanning the age of the Earth.
Becerra, Marley; Frid, Henrik; Vázquez, Pedro A.
2017-12-01
This paper presents a self-consistent model of electrohydrodynamic (EHD) laminar plumes produced by electron injection from ultra-sharp needle tips in cyclohexane. Since the density of electrons injected into the liquid is well described by the Fowler-Nordheim field emission theory, the injection law is not assumed. Furthermore, the generation of electrons in cyclohexane and their conversion into negative ions is included in the analysis. Detailed steady-state characteristics of EHD plumes under weak injection and space-charge limited injection are studied. It is found that the plume characteristics far from both electrodes and under weak injection can be accurately described with an asymptotic simplified solution proposed by Vazquez et al. ["Dynamics of electrohydrodynamic laminar plumes: Scaling analysis and integral model," Phys. Fluids 12, 2809 (2000)] when the correct longitudinal electric field distribution and liquid velocity radial profile are used as input. However, this asymptotic solution deviates from the self-consistently calculated plume parameters under space-charge limited injection since it neglects the radial variations of the electric field produced by a high-density charged core. In addition, no significant differences in the model estimates of the plume are found when the simulations are obtained either with the finite element method or with a diffusion-free particle method. It is shown that the model also enables the calculation of the current-voltage characteristic of EHD laminar plumes produced by electron field emission, with good agreement with measured values reported in the literature.
A Self-Consistent Plasma-Sheath Model for the Inductively Coupled Plasma Reactor
Bose, Deepak; Govindam, T. R.; Meyyappan, M.
2000-01-01
Accurate determination of the ion flux on a wafer requires self-consistent, multidimensional modeling of the plasma reactor that adequately resolves the sheath region adjoining the wafer. This level of modeling is difficult to achieve since non-collisional sheath lengths are usually 3-4 orders of magnitude smaller than the reactor scale. Also, the drift-diffusion equations used for ion transport become invalid in the sheath, since the ion frictional force is no longer in equilibrium with the drift and diffusion forces. The alternative is to use a full momentum equation for each ionic species. In this work we present results from a self-consistent reactor-scale/sheath-scale model for 2D inductively coupled plasmas. The goal of this study is to improve modeling capabilities and assess the importance of additional physics in determining important reactor performance features, such as ion flux uniformity and coil frequency and configuration effects. The effect of numerical dissipation on the solution quality is also discussed.
Modelling of virtual production networks
Directory of Open Access Journals (Sweden)
2011-03-01
Nowadays many companies, especially small and medium-sized enterprises (SMEs), specialize in a limited field of production. This requires forming virtual production networks of cooperating enterprises to manufacture better, faster and cheaper. Moreover, some production orders cannot be realized because no single company has sufficient production potential. In such cases, a virtual production network of cooperating companies can realize these production orders. Such networks have larger production capacity and many different resources, and can therefore realize many more production orders together than each company could separately, while delivering high-quality products. The maintenance costs of production capacity and of the resources used are also not as high. In this paper a methodology for rapid prototyping of virtual production networks is proposed. It allows production orders to be executed on time while taking existing logistic constraints into account.
Modeling Epidemics Spreading on Social Contact Networks.
Zhang, Zhaoyang; Wang, Honggang; Wang, Chonggang; Fang, Hua
2015-09-01
Social contact networks and the way people interact with each other are the key factors that impact epidemic spreading. However, it is challenging to model the behavior of epidemics based on social contact networks due to their high dynamics. Traditional models such as the susceptible-infected-recovered (SIR) model ignore the crowding or protection effect and thus make some unrealistic assumptions. In this paper, we consider the crowding or protection effect and develop a novel model called the improved SIR model. We then use both deterministic and stochastic models to characterize the dynamics of epidemics on social contact networks. The results from both simulations and a real data set show that epidemics are more likely to break out on social contact networks with higher average degree. We also present some potential immunization strategies, such as random set immunization, dominating set immunization, and high-degree set immunization, to further support this conclusion.
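As an illustration of the general idea (not the authors' exact improved-SIR formulation, which the abstract does not specify), the following sketch runs a discrete-time stochastic SIR process on a synthetic contact network; the graph model, population size, and rates are all assumptions.

```python
import random

random.seed(42)

# Build a synthetic contact network (Erdos-Renyi graph; n and p are assumed,
# not taken from the study's real contact data).
n, p = 200, 0.05
neighbors = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            neighbors[i].add(j)
            neighbors[j].add(i)

beta, gamma = 0.1, 0.05  # per-contact infection and recovery probabilities
state = {i: "S" for i in range(n)}
state[0] = "I"           # a single initial infective

for _ in range(200):
    new_state = dict(state)
    for i in range(n):
        if state[i] == "S":
            k = sum(1 for j in neighbors[i] if state[j] == "I")
            # Standard independent-contact infection term; the paper's
            # crowding/protection correction would modify this expression.
            if random.random() < 1 - (1 - beta) ** k:
                new_state[i] = "I"
        elif state[i] == "I" and random.random() < gamma:
            new_state[i] = "R"
    state = new_state

final_size = sum(1 for s in state.values() if s != "S")
print("final epidemic size:", final_size)
```

Rerunning this over networks with different average degree (varying `p`) reproduces the qualitative conclusion that denser contact networks make outbreaks more likely.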
Directory of Open Access Journals (Sweden)
Liyan Zhang
2017-01-01
The paper studies a multiresolution traffic flow simulation model of an urban expressway. Firstly, compared with a two-level hybrid model, a three-level multiresolution hybrid model has been chosen. Then, the multiresolution simulation framework and integration strategies are introduced. Thirdly, the paper proposes an urban expressway multiresolution traffic simulation model with an asynchronous integration strategy based on set theory, which includes three submodels: macromodel, mesomodel, and micromodel. After that, the applicable conditions and derivation process of the three submodels are discussed in detail. In addition, in order to simulate and evaluate the multiresolution model, a "simple simulation scenario" of the North-South Elevated Expressway in Shanghai has been established. The simulation results showed the following. (1) Volume-density relationships of the three submodels agree with detector data. (2) When traffic density is high, the macromodel has high precision, smaller error, and smaller dispersion of results; compared with the macromodel, the simulation accuracies of the micromodel and mesomodel are lower and the errors are bigger. (3) The multiresolution model can simulate characteristics of traffic flow, capture traffic waves, and keep the consistency of traffic state transitions. Finally, the results showed that the novel multiresolution model has higher simulation accuracy and is feasible and effective in a real traffic simulation scenario.
Silvis, Maurits H; Verstappen, Roel
2016-01-01
We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach to constructing subgrid-scale models, based on the idea that subgrid-scale models should be consistent with the properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is p...
A hydrodynamically-consistent MRT lattice Boltzmann model on a 2D rectangular grid
Peng, Cheng; Min, Haoda; Guo, Zhaoli; Wang, Lian-Ping
2016-12-01
A multiple-relaxation-time (MRT) lattice Boltzmann (LB) model on a D2Q9 rectangular grid is designed theoretically and validated numerically in the present work. By introducing stress components into the equilibrium moments, this MRT-LB model restores the isotropy of diffusive momentum transport at the macroscopic level (or in the continuum limit), leading to moment equations that are fully consistent with the Navier-Stokes equations. The model is derived by an inverse design process which is described in detail. Except for one moment, associated with the square of the energy, all other eight equilibrium moments can be theoretically and uniquely determined. The model is then carefully validated using both the two-dimensional decaying Taylor-Green vortex flow and lid-driven cavity flow, with different grid aspect ratios. The corresponding results from an earlier model (Bouzidi et al. (2001) [28]) are also presented for comparison. The results of Bouzidi et al.'s model show problems associated with anisotropy of the viscosity coefficients, while the present model exhibits full isotropy and is accurate and stable.
Random graph models for dynamic networks
Zhang, Xiao; Moore, Cristopher; Newman, Mark E. J.
2017-10-01
Recent theoretical work on the modeling of network structure has focused primarily on networks that are static and unchanging, but many real-world networks change their structure over time. There exist natural generalizations to the dynamic case of many static network models, including the classic random graph, the configuration model, and the stochastic block model, where one assumes that the appearance and disappearance of edges are governed by continuous-time Markov processes with rate parameters that can depend on properties of the nodes. Here we give an introduction to this class of models, showing for instance how one can compute their equilibrium properties. We also demonstrate their use in data analysis and statistical inference, giving efficient algorithms for fitting them to observed network data using the method of maximum likelihood. This allows us, for example, to estimate the time constants of network evolution or infer community structure from temporal network data using cues embedded both in the probabilities over time that node pairs are connected by edges and in the characteristic dynamics of edge appearance and disappearance. We illustrate these methods with a selection of applications, both to computer-generated test networks and real-world examples.
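The simplest member of this model class treats every node pair as an independent two-state continuous-time Markov chain. A minimal sketch (network size, rates, and time step all invented) shows the simulated equilibrium edge density matching the analytic value λ/(λ+μ):

```python
import random

random.seed(0)

# Simplest dynamic random graph: every node pair carries an independent
# two-state Markov chain. Per time step dt, an absent edge appears with
# probability lam*dt and a present edge disappears with probability mu*dt,
# giving an equilibrium edge probability of lam / (lam + mu).
n, lam, mu, dt = 30, 0.2, 0.8, 0.1
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
edge = {pr: False for pr in pairs}

densities = []
for step in range(6000):
    for pr in pairs:
        if edge[pr]:
            if random.random() < mu * dt:
                edge[pr] = False
        elif random.random() < lam * dt:
            edge[pr] = True
    if step >= 1000:  # discard burn-in before averaging
        densities.append(sum(edge.values()) / len(pairs))

avg_density = sum(densities) / len(densities)
print(f"simulated density: {avg_density:.3f}, theory: {lam / (lam + mu):.3f}")
```

The richer models discussed in the paper let the rates depend on node properties (e.g., group memberships), which is what makes community inference from temporal data possible.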
Modeling the interdependent network based on two-mode networks
An, Feng; Gao, Xiangyun; Guan, Jianhe; Huang, Shupei; Liu, Qian
2017-10-01
Among heterogeneous networks, there exist obvious and close interdependent linkages. Unlike existing research, which focuses primarily on theoretical models of physical interdependent networks, we propose a two-layer interdependent network model based on two-mode networks to explore interdependent features in reality. Specifically, we construct a two-layer interdependent loan network and develop several dependence indices. The model is shown to capture the loan-dependence features of listed companies based on loan behaviors and shared shareholders. Taking the Chinese debit and credit market as a case study, the main conclusions are: (1) only a few listed companies shoulder the main capital transmission (20% of listed companies account for almost 70% of the dependence degree); (2) controlling these key listed companies is more effective at preventing the spread of financial risks; (3) identifying and controlling the companies with high betweenness centrality could help monitor the spread of financial risk; (4) the capital transmission channel between Chinese financial and non-financial listed companies is relatively strong; however, under great pressure on capital transmission (70% of edges failed), the transmission channel constructed by debit and credit behavior will eventually collapse.
Modeling and simulation of the USAVRE network and radiology operations
Martinez, Ralph; Bradford, Daniel Q.; Hatch, Jay; Sochan, John; Chimiak, William J.
1998-07-01
The U.S. Army Medical Command, led by the Brooke Army Medical Center, has embarked on a visionary project. The U.S. Army Virtual Radiology Environment (USAVRE) is a CONUS-based network that connects all the Army's major medical centers and Regional Medical Commands (RMCs). The purpose of the USAVRE is to improve the quality, access, and cost of radiology services in the Army via the use of state-of-the-art medical imaging, computer, and networking technologies. The USAVRE contains multimedia viewing workstations and database archive systems based on a distributed computing environment using Common Object Request Broker Architecture (CORBA) middleware protocols. The underlying telecommunications network is an ATM-based backbone network that connects the RMC regional networks and PACS networks at medical centers and RMC clinics. This project is a collaborative effort between Army, university, and industry centers with expertise in teleradiology and Global PACS applications. This paper describes a model and simulation of the USAVRE for performance evaluation purposes. As a first step, we present the results of a Technology Assessment and Requirements Analysis (TARA): an analysis of the workload in Army radiology departments, their equipment and their staffing. Using the TARA data and other workload information, we have developed a very detailed analysis of the workload and workflow patterns of our Medical Treatment Facilities, and we are embarking on modeling and simulation strategies that will form the foundation for the VRE network. The workload analysis is performed for each radiology modality at an RMC site. The workload consists of the number of examinations per modality, the type of images per exam, the number of images per exam, and the size of the images. The frequencies of store-and-forward cases, second readings, and interactive consultation cases are also determined. These parameters are translated into the model described below. The model for the USAVRE is hierarchical in nature.
nIFTy cosmology: the clustering consistency of galaxy formation models
Pujol, Arnau; Skibba, Ramin A.; Gaztañaga, Enrique; Benson, Andrew; Blaizot, Jeremy; Bower, Richard; Carretero, Jorge; Castander, Francisco J.; Cattaneo, Andrea; Cora, Sofia A.; Croton, Darren J.; Cui, Weiguang; Cunnama, Daniel; De Lucia, Gabriella; Devriendt, Julien E.; Elahi, Pascal J.; Font, Andreea; Fontanot, Fabio; Garcia-Bellido, Juan; Gargiulo, Ignacio D.; Gonzalez-Perez, Violeta; Helly, John; Henriques, Bruno M. B.; Hirschmann, Michaela; Knebe, Alexander; Lee, Jaehyun; Mamon, Gary A.; Monaco, Pierluigi; Onions, Julian; Padilla, Nelson D.; Pearce, Frazer R.; Power, Chris; Somerville, Rachel S.; Srisawat, Chaichalit; Thomas, Peter A.; Tollet, Edouard; Vega-Martínez, Cristian A.; Yi, Sukyoung K.
2017-07-01
We present a clustering comparison of 12 galaxy formation models [including semi-analytic models (SAMs) and halo occupation distribution (HOD) models] all run on halo catalogues and merger trees extracted from a single Λ cold dark matter N-body simulation. We compare the results of the measurements of the mean halo occupation numbers, the radial distribution of galaxies in haloes and the two-point correlation functions (2PCF). We also study the implications of the different treatments of orphan (galaxies not assigned to any dark matter subhalo) and non-orphan galaxies in these measurements. Our main result is that the galaxy formation models generally agree in their clustering predictions but they disagree significantly between HOD and SAMs for the orphan satellites. Although there is a very good agreement between the models on the 2PCF of central galaxies, the scatter between the models when orphan satellites are included can be larger than a factor of 2 for scales smaller than 1 h-1 Mpc. We also show that galaxy formation models that do not include orphan satellite galaxies have a significantly lower 2PCF on small scales, consistent with previous studies. Finally, we show that the 2PCF of orphan satellites is remarkably different between SAMs and HOD models. Orphan satellites in SAMs present a higher clustering than in HOD models because they tend to occupy more massive haloes. We conclude that orphan satellites have an important role on galaxy clustering and they are the main cause of the differences in the clustering between HOD models and SAMs.
Bayesian network models for error detection in radiotherapy plans.
Kalet, Alan M; Gennari, John H; Ford, Eric C; Phillips, Mark H
2015-04-07
The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network's conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
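The flagging logic described above can be illustrated with a toy conditional probability table (all numbers invented, not taken from the study's learned networks): a plan parameter whose probability given the clinical context falls below a threshold is flagged for review.

```python
# Toy Bayesian-network fragment for plan checking (all probabilities invented):
# P(energy | treatment site), as would be learned from historical plan data.
p_energy_given_site = {
    "lung":   {"6MV": 0.85, "15MV": 0.14, "6MeV": 0.01},
    "breast": {"6MV": 0.70, "15MV": 0.05, "6MeV": 0.25},
}

def flag_plan(site, energy, threshold=0.05):
    """Return True if the plan parameter is improbable given the site."""
    prob = p_energy_given_site[site].get(energy, 0.0)
    return prob < threshold

print(flag_plan("lung", "6MV"))    # common combination -> not flagged
print(flag_plan("lung", "6MeV"))   # rare combination  -> flagged
```

A full Bayesian network extends this to joint distributions over many interdependent plan parameters, so that a low propagated probability, rather than a single table lookup, triggers the flag.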
An endogenous model of the credit network
He, Jianmin; Sui, Xin; Li, Shouwei
2016-01-01
In this paper, an endogenous credit network model of firm-bank agents is constructed. The model describes the endogenous formation of firm-firm, firm-bank and bank-bank credit relationships. By means of simulations, the model is capable of showing some obvious similarities with empirical evidence found by other scholars: the upper-tail of firm size distribution can be well fitted with a power-law; the bank size distribution can be lognormally distributed with a power-law tail; the bank in-degrees of the interbank credit network as well as the firm-bank credit network fall into two-power-law distributions.
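Checking a power-law tail of the kind reported above is commonly done with the Hill estimator. The sketch below applies it to synthetic Pareto-distributed "firm sizes" (the data are generated, not produced by the credit network model):

```python
import math
import random

random.seed(3)

# Synthetic "firm sizes" drawn from a Pareto distribution with tail
# exponent 2.5 (invented; the model's simulated sizes would be used here).
sizes = sorted((random.paretovariate(2.5) for _ in range(5000)), reverse=True)

# Hill estimator of the tail exponent from the k largest observations.
k = 200
hill = k / sum(math.log(sizes[i] / sizes[k]) for i in range(k))
print(f"estimated tail exponent: {hill:.2f} (true value 2.5)")
```

The choice of `k` (how far into the tail to look) is the usual judgment call with this estimator; in practice one plots the estimate against `k` and reads off a stable plateau.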
Tensor network models of multiboundary wormholes
Peach, Alex; Ross, Simon F.
2017-05-01
We study the entanglement structure of states dual to multiboundary wormhole geometries using tensor network models. Perfect and random tensor networks tiling the hyperbolic plane have been shown to provide good models of the entanglement structure in holography. We extend this by quotienting the plane by discrete isometries to obtain models of the multiboundary states. We show that there are networks where the entanglement structure is purely bipartite, extending results obtained in the large temperature limit. We analyse the entanglement structure in a range of examples.
Stochastic discrete model of karstic networks
Jaquet, O.; Siegel, P.; Klubertanz, G.; Benabderrhamane, H.
Karst aquifers are characterised by an extreme spatial heterogeneity that strongly influences their hydraulic behaviour and the transport of pollutants. These aquifers are particularly vulnerable to contamination because of their highly permeable networks of conduits. A stochastic model is proposed for the simulation of the geometry of karstic networks at a regional scale. The model integrates the relevant physical processes governing the formation of karstic networks. The discrete simulation of karstic networks is performed with a modified lattice-gas cellular automaton for a representative description of the karstic aquifer geometry. Consequently, more reliable modelling results can be obtained for the management and the protection of karst aquifers. The stochastic model was applied jointly with groundwater modelling techniques to a regional karst aquifer in France for the purpose of resolving surface pollution issues.
Designing Network-based Business Model Ontology
DEFF Research Database (Denmark)
Hashemi Nekoo, Ali Reza; Ashourizadeh, Shayegheh; Zarei, Behrouz
2015-01-01
Survival in a dynamic environment is not achieved without a map. Scanning and monitoring of the market show business models to be a fruitful tool, but scholars believe that old-fashioned business models are dead, as they do not include the effects of the internet and networks. This paper proposes an e-business model ontology from the network point of view and describes its application in the real world. The suggested ontology for network-based businesses is composed of individuals' characteristics and the kinds of resources they own, as well as their connections and preconceptions of connections, such as shared mental models and trust. However, it mostly covers previous business model elements. To confirm the applicability of this ontology, it has been implemented in a business angel network to show how it works.
Queueing Models for Mobile Ad Hoc Networks
de Haan, Roland
2009-01-01
This thesis presents models for the performance analysis of a recent communication paradigm: mobile ad hoc networking. The objective of mobile ad hoc networking is to provide wireless connectivity between stations in a highly dynamic environment. These dynamics are driven by the mobility of
Modelling traffic congestion using queuing networks
Indian Academy of Sciences (India)
Traffic flow-density diagrams are obtained using simple Jackson queuing network analysis. Such simple analytical models can be used to capture the effect of non-homogeneous traffic. Keywords: flow-density curves; uninterrupted traffic; Jackson networks.
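A Jackson-network calculation of this kind boils down to solving the traffic equations and then treating each node as an independent M/M/1 queue. The two-segment example below is invented for illustration (the arrival rates, routing probabilities, and service rates are all assumptions):

```python
# Open Jackson network sketch for two road segments: solve the traffic
# equations lambda_i = gamma_i + sum_j lambda_j * P[j][i], then treat each
# segment as an independent M/M/1 queue with utilization rho = lambda/mu.
gamma = [4.0, 1.0]   # external arrival rates (vehicles/min, assumed)
P = [[0.0, 0.5],     # routing probabilities between segments
     [0.2, 0.0]]
mu = [10.0, 8.0]     # service (discharge) rates of the segments

# Fixed-point iteration for the traffic equations (converges since the
# routing matrix is substochastic).
lam = gamma[:]
for _ in range(100):
    lam = [gamma[i] + sum(lam[j] * P[j][i] for j in range(2)) for i in range(2)]

for i in range(2):
    rho = lam[i] / mu[i]
    L = rho / (1 - rho)  # mean number in segment (M/M/1), a density proxy
    print(f"segment {i}: arrival={lam[i]:.3f}, utilization={rho:.3f}, density={L:.3f}")
```

Sweeping the external arrival rates and plotting throughput `lam[i]` against density `L` traces out exactly the flow-density curves the abstract refers to.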
A Globally Consistent Methodology for an Exposure Model for Natural Catastrophe Risk Assessment
Gunasekera, Rashmin; Ishizawa, Oscar; Pandey, Bishwa; Saito, Keiko
2013-04-01
There is a high demand for the development of a globally consistent and robust exposure data model employing a top-down approach, to be used in national-level catastrophic risk profiling for public sector liability. To this effect, there are currently several initiatives, such as the UN-ISDR Global Assessment Report (GAR) and the Global Exposure Database for the Global Earthquake Model (GED4GEM). However, their consistency and granularity differ from region to region, a problem that is overcome in the proposed approach by using national datasets, for example in the Latin America and the Caribbean Region (LCR). The methodology proposed in this paper aims to produce a global open exposure dataset based upon population, country-specific building type distribution and other global/economic indicators, such as World Bank indices, that are suitable for natural catastrophe risk modelling purposes. The output would be a GIS raster grid at approximately 1 km spatial resolution which would highlight urbanness (building typology distribution, occupancy and use) for each cell at the subnational level, compatible with other global initiatives and datasets. It would make use of datasets on population, census, demographics, building data and land use/land cover, which are largely available in the public domain. The resultant exposure dataset could be used in conjunction with hazard and vulnerability components to create views of risk for multiple hazards, including earthquake, flood and windstorms. We hope the model will also be a step towards future initiatives for open, interchangeable and compatible databases for catastrophe risk modelling. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent.
Quest for consistent modelling of statistical decay of the compound nucleus
Banerjee, Tathagata; Nath, S.; Pal, Santanu
2018-01-01
A statistical model description of heavy ion induced fusion-fission reactions is presented where shell effects, collective enhancement of level density, tilting away effect of compound nuclear spin and dissipation are included. It is shown that the inclusion of all these effects provides a consistent picture of fission where fission hindrance is required to explain the experimental values of both pre-scission neutron multiplicities and evaporation residue cross-sections in contrast to some of the earlier works where a fission hindrance is required for pre-scission neutrons but a fission enhancement for evaporation residue cross-sections.
Consistent Treatment of Hydrophobicity in Protein Lattice Models Accounts for Cold Denaturation
van Dijk, Erik; Varilly, Patrick; Knowles, Tuomas P. J.; Frenkel, Daan; Abeln, Sanne
2016-02-01
The hydrophobic effect stabilizes the native structure of proteins by minimizing the unfavorable interactions between hydrophobic residues and water through the formation of a hydrophobic core. Here, we include the entropic and enthalpic contributions of the hydrophobic effect explicitly in an implicit solvent model. This allows us to capture two important effects: a length-scale dependence and a temperature dependence for the solvation of a hydrophobic particle. This consistent treatment of the hydrophobic effect explains cold denaturation and heat capacity measurements of solvated proteins.
A Meta-Cavity Model Consisting of Multilayer Metasurfaces (Postprint), Contract FA8650-11-D-5801-0013
2017-09-05
The meta-cavity supports Fabry-Perot (FP) cavity modes, which is confirmed by observing the x-component of the electric field, Ex, inside the MPA structure (right-side figures in Fig. 5c-e). Increasing the spacer thickness (Ex as shown on the right side of Fig. 5c) results in the formation of the second-order FP cavity mode, as shown in Fig. 5d. Similarly, Ex for µ
Mathematical model of highways network optimization
Sakhapov, R. L.; Nikolaeva, R. V.; Gatiyatullin, M. H.; Makhmutov, M. M.
2017-12-01
The article deals with the issue of highways network design. Studies show that the main requirement from road transport for the road network is to ensure the realization of all the transport links served by it, with the least possible cost. The goal of optimizing the network of highways is to increase the efficiency of transport. It is necessary to take into account a large number of factors that make it difficult to quantify and qualify their impact on the road network. In this paper, we propose building an optimal variant for locating the road network on the basis of a mathematical model. The article defines the criteria for optimality and objective functions that reflect the requirements for the road network. The most fully satisfying condition for optimality is the minimization of road and transport costs. We adopted this indicator as a criterion of optimality in the economic-mathematical model of a network of highways. Studies have shown that each offset point in the optimal binding road network is associated with all other corresponding points in the directions providing the least financial costs necessary to move passengers and cargo from this point to the other corresponding points. The article presents general principles for constructing an optimal network of roads.
Liu, Zugang
Network systems, including transportation and logistic systems, electric power generation and distribution networks as well as financial networks, provide the critical infrastructure for the functioning of our societies and economies. The understanding of the dynamic behavior of such systems is also crucial to national security and prosperity. The identification of new connections between distinct network systems is the inspiration for the research in this dissertation. In particular, I answer two questions raised by Beckmann, McGuire, and Winsten (1956) and Copeland (1952) over half a century ago, which are, respectively, how are electric power flows related to transportation flows and does money flow like water or electricity? In addition, in this dissertation, I achieve the following: (1) I establish the relationships between transportation networks and three other classes of complex network systems: supply chain networks, electric power generation and transmission networks, and financial networks with intermediation. The establishment of such connections provides novel theoretical insights as well as new pricing mechanisms, and efficient computational methods. (2) I develop new modeling frameworks based on evolutionary variational inequality theory that capture the dynamics of such network systems in terms of the time-varying flows and incurred costs, prices, and, where applicable, profits. This dissertation studies the dynamics of such network systems by addressing both internal competition and/or cooperation, and external changes, such as varying costs and demands. (3) I focus, in depth, on electric power supply chains. By exploiting the relationships between transportation networks and electric power supply chains, I develop a large-scale network model that integrates electric power supply chains and fuel supply markets. The model captures both the economic transactions as well as the physical transmission constraints. The model is then applied to the New
Modeling trust context in networks
Adali, Sibel
2013-01-01
We make complex decisions every day, requiring trust in many different entities for different reasons. These decisions are not made by combining many isolated trust evaluations; many interlocking factors play a role, each dynamically impacting the others. In this brief, 'trust context' is defined as the system-level description of how the trust evaluation process unfolds. Networks today are part of almost all human activity, supporting and shaping it. Applications increasingly incorporate new interdependencies and new trust contexts. Social networks connect people and organizations throughout
Model-based control of networked systems
Garcia, Eloy; Montestruque, Luis A
2014-01-01
This monograph introduces a class of networked control systems (NCS) called model-based networked control systems (MB-NCS) and presents various architectures and control strategies designed to improve the performance of NCS. The overall performance of NCS considers the appropriate use of network resources, particularly network bandwidth, in conjunction with the desired response of the system being controlled. The book begins with a detailed description of the basic MB-NCS architecture that provides stability conditions in terms of state feedback updates. It also covers typical problems in NCS such as network delays, network scheduling, and data quantization, as well as more general control problems such as output feedback control, nonlinear systems stabilization, and tracking control. Key features and topics include: time-triggered and event-triggered feedback updates; stabilization of uncertain systems subject to time delays, quantization, and extended absence of feedback; optimal control analysis and ...
Complex networks repair strategies: Dynamic models
Fu, Chaoqi; Wang, Ying; Gao, Yangjun; Wang, Xiaoyang
2017-09-01
Network repair strategies are tactical methods that restore the efficiency of damaged networks; however, unreasonable repair strategies not only waste resources, they are also ineffective for network recovery. Most extant research on network repair focuses on static networks, but results and findings on static networks cannot be applied to evolutionary dynamic networks because, in dynamic models, complex network repair has completely different characteristics. For instance, repaired nodes face more severe challenges, and require strategic repair methods in order to have a significant effect. In this study, we propose the Shell Repair Strategy (SRS) to minimize the risk of secondary node failures due to the cascading effect. Our proposed method includes the identification of a set of vital nodes that have a significant impact on network repair and defense. Our identification of these vital nodes reduces the number of switching nodes that face the risk of secondary failures during the dynamic repair process. This is positively correlated with the size of the average degree 〈 k 〉 and enhances network invulnerability.
Consistent phase-change modeling for CO2-based heat mining operation
DEFF Research Database (Denmark)
Singh, Ashok Kumar; Veje, Christian
2017-01-01
The accuracy of mathematical modeling of phase-change phenomena is limited if a simple, less accurate equation of state closes the governing partial differential equations. However, fluid properties (such as density, dynamic viscosity and compressibility) and saturation state are calculated using ... -gas phase transition with more accuracy and consistency. Calculation of fluid properties and saturation state was based on the volume-translated Peng–Robinson equation of state, and the results were verified. The present model has been applied to a scenario to simulate a CO2-based heat mining process. In this paper ..., using temporal and spatial variations in pressure and fluid-phase temperature, the energy capacity and how it is affected by fluid compression (Joule–Thomson effect) and convection were predicted. Results suggest that super-heated vapor can be produced at a higher rate with elevated heat content...
Impact of in-consistency between the climate model and its initial conditions on climate prediction
Liu, Xueyuan; Köhl, Armin; Stammer, Detlef; Masuda, Shuhei; Ishikawa, Yoichi; Mochizuki, Takashi
2017-08-01
We investigated the influence of dynamical in-consistency of initial conditions on the predictive skill of decadal climate predictions. The investigation builds on the fully coupled global model "Coupled GCM for Earth Simulator" (CFES). In two separate experiments, the ocean component of the coupled model is full-field initialized with two different initial fields, from either the same coupled model CFES or the GECCO2 Ocean Synthesis, while the atmosphere is initialized from CFES in both cases. Differences between the experiments show that higher SST forecast skill is obtained when initializing with coupled data assimilation initial conditions (CIH) instead of those from GECCO2 (GIH), with the most significant difference in skill obtained over the tropical Pacific at lead year one. The high predictive skill of SST over the tropical Pacific seen in CIH reflects the good reproduction of El Niño events at lead year one. In contrast, GIH produces additional erroneous El Niño events. The tropical Pacific skill differences between both runs can be rationalized in terms of the zonal momentum balance between the wind stress and pressure gradient force, which characterizes the upper equatorial Pacific. In GIH, the differences between the oceanic and atmospheric state at the initial time lead to an imbalance between the zonal wind stress and pressure gradient force over the equatorial Pacific, which produces the additional pseudo El Niño events and explains the reduced predictive skill. The balance can be reestablished if an anomaly initialization strategy is applied with GECCO2 initial conditions, and improved predictive skill in the tropical Pacific is then observed at lead year one. However, initializing the coupled model with self-consistent initial conditions leads to the highest skill of climate prediction in the tropical Pacific by preserving the momentum balance between zonal wind stress and pressure gradient force along the equatorial Pacific.
Modeling Network Traffic in Wavelet Domain
Directory of Open Access Journals (Sweden)
Sheng Ma
2004-12-01
Full Text Available This work discovers that although network traffic has complicated short- and long-range temporal dependence, the corresponding wavelet coefficients are no longer long-range dependent. Therefore, a "short-range" dependent process can be used to model network traffic in the wavelet domain. Both independent and Markov models are investigated. Theoretical analysis shows that the independent wavelet model is sufficiently accurate in terms of the buffer overflow probability for Fractional Gaussian Noise traffic. Any model which captures additional correlations in the wavelet domain only improves the performance marginally. The independent wavelet model is then used as a unified approach to model network traffic, including VBR MPEG video and Ethernet data. The computational complexity is O(N) for developing such wavelet models and generating synthesized traffic of length N, which is among the lowest attained.
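The independent wavelet model described above can be illustrated with a minimal sketch (an illustrative toy, not the authors' implementation): decompose a trace with an orthonormal Haar transform, fit one variance per scale, draw fresh independent Gaussian coefficients, and invert the transform. The Haar choice and the zero-mean Gaussian fit are assumptions made here for brevity; both the analysis and synthesis passes are O(N).

```python
import numpy as np

def haar_analyze(x):
    """Orthonormal Haar DWT: per-scale detail coefficients plus final approximation."""
    details = []
    a = np.asarray(x, dtype=float)
    while len(a) > 1:
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        details.append(d)
    return details, a

def haar_synthesize(details, a):
    """Exact inverse of haar_analyze."""
    for d in reversed(details):
        x = np.empty(2 * len(d))
        x[0::2] = (a + d) / np.sqrt(2.0)
        x[1::2] = (a - d) / np.sqrt(2.0)
        a = x
    return a

def synthesize_traffic(trace, rng):
    """Independent wavelet model: fit a variance per scale, draw fresh
    independent Gaussian coefficients, and invert the transform."""
    details, approx = haar_analyze(trace)
    new_details = [rng.normal(0.0, d.std() + 1e-12, size=len(d)) for d in details]
    return haar_synthesize(new_details, approx)

rng = np.random.default_rng(0)
trace = np.cumsum(rng.normal(size=1024))   # correlated toy trace (length must be a power of 2)
synthetic = synthesize_traffic(trace, rng)
```

The synthetic trace reproduces the per-scale second-order statistics of the original while its wavelet coefficients are independent by construction.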
Gene Regulation Networks for Modeling Drosophila Development
Mjolsness, E.
1999-01-01
This chapter will very briefly introduce and review some computational experiments in using trainable gene regulation network models to simulate and understand selected episodes in the development of the fruit fly, Drosophila melanogaster.
Graphical Model Theory for Wireless Sensor Networks
Energy Technology Data Exchange (ETDEWEB)
Davis, William B.
2002-12-08
Information processing in sensor networks, with many small processors, demands a theory of computation that allows the minimization of processing effort, and the distribution of this effort throughout the network. Graphical model theory provides a probabilistic theory of computation that explicitly addresses complexity and decentralization for optimizing network computation. The junction tree algorithm, for decentralized inference on graphical probability models, can be instantiated in a variety of applications useful for wireless sensor networks, including: sensor validation and fusion; data compression and channel coding; expert systems, with decentralized data structures, and efficient local queries; pattern classification, and machine learning. Graphical models for these applications are sketched, and a model of dynamic sensor validation and fusion is presented in more depth, to illustrate the junction tree algorithm.
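As a minimal illustration of decentralized sensor validation and fusion (not the report's code), the naive-Bayes case below is the simplest model the junction tree algorithm reduces to: the posterior over a hidden state follows from multiplying a prior by per-sensor likelihoods and renormalizing. All names and probability values are hypothetical.

```python
def normalize(p):
    """Rescale a finite distribution so its values sum to one."""
    s = sum(p.values())
    return {k: v / s for k, v in p.items()}

def fuse(prior, likelihoods):
    """Multiply the prior by each sensor's likelihood and renormalize:
    exact inference on a star-shaped (naive Bayes) graphical model."""
    post = dict(prior)
    for lik in likelihoods:
        for state in post:
            post[state] *= lik[state]
    return normalize(post)

prior = {"ok": 0.9, "faulty": 0.1}
sensor_a = {"ok": 0.8, "faulty": 0.3}   # P(reading_a | state), hypothetical
sensor_b = {"ok": 0.7, "faulty": 0.6}
posterior = fuse(prior, [sensor_a, sensor_b])
```

In a real sensor network each factor would live on a different node, and the junction tree algorithm organizes exactly these products and marginalizations over a tree of cliques.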
Mitigating risk during strategic supply network modeling
Müssigmann, Nikolaus
2006-01-01
Mitigating risk during strategic supply network modeling. - In: Managing risks in supply chains / ed. by Wolfgang Kersten ... - Berlin : Schmidt, 2006. - S. 213-226. - (Operations and technology management ; 1)
Becker, S.; Losch, M.; Brockmann, J. M.; Freiwald, G.; Schuh, W.-D.
2014-11-01
Geostrophic surface velocities can be derived from the gradients of the mean dynamic topography—the difference between the mean sea surface and the geoid. Therefore, independently observed mean dynamic topography data are valuable input parameters and constraints for ocean circulation models. For a successful fit to observational dynamic topography data, not only the mean dynamic topography on the particular ocean model grid is required, but also information about its inverse covariance matrix. The calculation of the mean dynamic topography from satellite-based gravity field models and altimetric sea surface height measurements, however, is not straightforward. For this purpose, we previously developed an integrated approach to combining these two different observation groups in a consistent way without using the common filter approaches (Becker et al. in J Geodyn 59(60):99-110, 2012; Becker in Konsistente Kombination von Schwerefeld, Altimetrie und hydrographischen Daten zur Modellierung der dynamischen Ozeantopographie 2012). Within this combination method, the full spectral range of the observations is considered. Further, it allows the direct determination of the normal equations (i.e., the inverse of the error covariance matrix) of the mean dynamic topography on arbitrary grids, which is one of the requirements for ocean data assimilation. In this paper, we report progress through selection and improved processing of altimetric data sets. We focus on the preprocessing steps of along-track altimetry data from Jason-1 and Envisat to obtain a mean sea surface profile. During this procedure, a rigorous variance propagation is accomplished, so that, for the first time, the full covariance matrix of the mean sea surface is available. The combination of the mean profile and a combined GRACE/GOCE gravity field model yields a mean dynamic topography model for the North Atlantic Ocean that is characterized by a defined set of assumptions. We show that including the
Thermodynamically Consistent Algorithms for the Solution of Phase-Field Models
Vignal, Philippe
2016-02-11
Phase-field models are emerging as a promising strategy to simulate interfacial phenomena. Rather than tracking interfaces explicitly as done in sharp interface descriptions, these models use a diffuse order parameter to monitor interfaces implicitly. This implicit description, as well as solid physical and mathematical footings, allows phase-field models to overcome problems found by predecessors. Nonetheless, the method has significant drawbacks. The phase-field framework relies on the solution of high-order, nonlinear partial differential equations. Solving these equations entails a considerable computational cost, so finding efficient strategies to handle them is important. Also, standard discretization strategies can often lead to incorrect solutions. This happens because, for numerical solutions to phase-field equations to be valid, physical conditions such as mass conservation and free energy monotonicity need to be guaranteed. In this work, we focus on the development of thermodynamically consistent algorithms for time integration of phase-field models. The first part of this thesis focuses on an energy-stable numerical strategy developed for the phase-field crystal equation. This model was put forward to model microstructure evolution. The algorithm developed conserves mass, guarantees energy stability, and is second-order accurate in time. The second part of the thesis presents two numerical schemes that generalize the literature regarding energy-stable methods for conserved and non-conserved phase-field models. The time discretization strategies can conserve mass if needed, are energy-stable, and second-order accurate in time. We also develop an adaptive time-stepping strategy, which can be applied to any second-order accurate scheme. This time-adaptive strategy relies on a backward approximation to give an accurate error estimator. The spatial discretization, in both parts, relies on a mixed finite element formulation and isogeometric analysis. The codes are
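The flavor of an energy-stable time integrator can be sketched on the spatially homogeneous Allen-Cahn equation du/dt = -(u^3 - u) using Eyre-type convex splitting: the convex part of the free energy is treated implicitly, the concave part explicitly, so the discrete energy decreases even for very large steps. This is a textbook first-order scheme chosen for illustration; the thesis's second-order schemes for conserved and non-conserved dynamics are more elaborate.

```python
def energy(u):
    """Double-well free energy F(u) = (u^2 - 1)^2 / 4."""
    return (u * u - 1.0) ** 2 / 4.0

def convex_split_step(u, dt, newton_iters=50):
    """One Eyre-type convex-splitting step for du/dt = -(u^3 - u):
    u^3 (from the convex part of F) is implicit, -u explicit.
    Solves v + dt*v**3 = u + dt*u by Newton's method."""
    rhs = u + dt * u
    v = u
    for _ in range(newton_iters):
        f = v + dt * v ** 3 - rhs
        v -= f / (1.0 + 3.0 * dt * v * v)
    return v

u, dt = 0.3, 10.0          # deliberately large time step: the scheme stays stable
energies = [energy(u)]
for _ in range(20):
    u = convex_split_step(u, dt)
    energies.append(energy(u))
```

Despite a step size far beyond any explicit stability limit, the energy sequence is monotonically nonincreasing and the solution relaxes to the well at u = 1.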
Road maintenance planning using network flow modelling
Yang, Chao; Remenyte-Prescott, Rasa; Andrews, John
2015-01-01
This paper presents a road maintenance planning model that can be used to balance out maintenance cost and road user cost, since performing road maintenance at night can be convenient for road users but costly for highway agency. Based on the platform of the network traffic flow modelling, the traffic through the worksite and its adjacent road links is evaluated. Thus, maintenance arrangements at a worksite can be optimized considering the overall network performance. In addition, genetic alg...
A self-consistent model for estimating the critical current of superconducting devices
Zermeño, V.; Sirois, F.; Takayasu, M.; Vojenciak, M.; Kario, A.; Grilli, F.
2015-08-01
Nowadays, there is growing interest in using superconducting wires or tapes for the design and manufacture of devices such as cables, coils, rotating machinery, transformers, and fault current limiters, among others. Their high current capacity has made them the candidates of choice for manufacturing compact and light cables and coils that can be used in the large-scale power applications described above. However, the performance of these cables and coils is limited by their critical current, which is determined by several factors, including the conductor's material properties and the geometric layout of the device itself. In this work we present a self-consistent model for estimating the critical current of superconducting devices. This is of great importance when the operating conditions are such that the self-field produced by the current is a significant fraction of the total field. The model is based on the asymptotic limit when time approaches infinity of Faraday's equation written in terms of the magnetic vector potential. It uses a continuous E-J relationship and takes the angular dependence of the critical current density on the magnetic flux density into account. The proposed model is used to estimate the critical current of superconducting devices such as cables, coils, and coils made of transposed cables with very high accuracy. The high computing speed of this model makes it an ideal candidate for design optimization.
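The self-consistency idea can be sketched with a scalar fixed-point iteration (a drastic simplification of the authors' finite-element model): the transport current generates a self-field, the self-field lowers Jc, and the loop repeats until the two agree. The Kim-like Jc(B) law, the wire-like field estimate, and all numbers below are illustrative assumptions.

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, H/m

def jc_kim(b, jc0, b0):
    """Kim-like field dependence of the critical current density."""
    return jc0 / (1.0 + b / b0)

def self_consistent_ic(jc0, b0, area, radius, tol=1e-9, max_iter=200):
    """Fixed-point iteration: current -> self-field -> reduced Jc -> current."""
    ic = jc0 * area                                     # zero-field starting guess
    for _ in range(max_iter):
        b_self = MU0 * ic / (2.0 * math.pi * radius)    # crude wire-like self-field
        ic_new = jc_kim(b_self, jc0, b0) * area
        if abs(ic_new - ic) < tol:
            return ic_new
        ic = ic_new
    return ic

# toy tape: Jc0 = 2.5e10 A/m^2, 4 mm x 1 um cross-section, field sampled 2 mm away
ic = self_consistent_ic(jc0=2.5e10, b0=0.1, area=4e-9, radius=2e-3)
```

The converged value sits below the zero-field estimate Jc0*area, quantifying the self-field reduction the abstract describes.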
Additive Dose Response Models: Explicit Formulation and the Loewe Additivity Consistency Condition
Directory of Open Access Journals (Sweden)
Simone Lederer
2018-02-01
High-throughput techniques allow for massive screening of drug combinations. To find combinations that exhibit an interaction effect, one filters for promising compound combinations by comparing to a response without interaction. A common principle for no interaction is Loewe Additivity, which is based on the assumption that no compound interacts with itself and that two doses from different compounds having the same effect are equivalent. It then should not matter whether a component is replaced by the other or vice versa. We call this assumption the Loewe Additivity Consistency Condition (LACC). We derive explicit and implicit null reference models from the Loewe Additivity principle that are equivalent when the LACC holds. Of these two formulations, the implicit formulation is the known General Isobole Equation (Loewe, 1928), whereas the explicit one is the novel contribution. The LACC is violated in a significant number of cases. In this scenario the models make different predictions. We analyze two data sets of drug screening that are non-interactive (Cokol et al., 2011; Yadav et al., 2015) and show that the LACC is mostly violated and Loewe Additivity not defined. Further, we compare the measurements of the non-interactive cases of both data sets to the theoretical null reference models in terms of bias and mean squared error. We demonstrate that the explicit formulation of the null reference model leads to smaller mean squared errors than the implicit one and is much faster to compute.
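The implicit null reference model, the General Isobole Equation, can be sketched for two Hill dose-response curves: given doses d1 and d2, solve d1/D1(E) + d2/D2(E) = 1 for the predicted effect E, where Di(E) is the inverse dose-response curve of compound i. The Hill parametrization and the bisection solver are illustrative choices, not the paper's implementation.

```python
def hill(d, K, n):
    """Hill dose-response curve; effect lies in [0, 1)."""
    return d ** n / (d ** n + K ** n)

def hill_inverse(e, K, n):
    """Dose producing effect e under a Hill curve."""
    return K * (e / (1.0 - e)) ** (1.0 / n)

def loewe_effect(d1, d2, K1, n1, K2, n2):
    """Solve the General Isobole Equation d1/D1(E) + d2/D2(E) = 1 by bisection.
    The left-hand side decreases in E, so the root is bracketed in (0, 1)."""
    def g(e):
        return d1 / hill_inverse(e, K1, n1) + d2 / hill_inverse(e, K2, n2) - 1.0
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:      # combination still above the isobole: effect too low
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

e_combo = loewe_effect(0.5, 0.5, K1=1.0, n1=1.0, K2=1.0, n2=1.0)
```

When the two dose-response curves are identical, the LACC holds trivially and the predicted combination effect equals the single-compound effect of the summed dose.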
Consistent modelling of wind turbine noise propagation from source to receiver.
Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick
2017-11-01
The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.
Voigt, Reuss, Hill, and self-consistent techniques for modeling ultrasonic scattering
Kube, Christopher M.; Turner, Joseph A.
2015-03-01
An elastic wave propagating in a metal loses a portion of its energy from scattering caused by acoustic impedance differences existing at the boundaries of anisotropic grains. Theoretical scattering models capture this phenomenon by assuming the incoming wave is described by an average elastic moduli tensor C^0_ijkl(x) that is perturbed by a grain with elasticity C_ijkl(x'), where the scattering event occurs when x = x'. Previous models have assumed that C^0_ijkl(x) is the Voigt average of the single-crystal elastic moduli tensor. However, this assumption may be incorrect because the Voigt average overestimates the wave's phase velocity. Thus, the use of alternate definitions of C^0_ijkl(x) to describe the incoming wave is posed. Voigt, Reuss, Hill, and self-consistent definitions of C^0_ijkl(x) are derived in the context of ultrasonic scattering models. The scattering-based models describing ultrasonic backscatter, attenuation, and diffusion are shown to be highly dependent on the definition of C^0_ijkl(x).
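The competing reference-medium definitions can be sketched for a cubic polycrystal, where the Voigt and Reuss shear moduli have closed forms and the Hill value is their mean; the Voigt choice indeed yields the highest longitudinal phase velocity, consistent with the overestimate noted above. The elastic constants below are nominal iron-like values used only for illustration.

```python
def cubic_moduli(c11, c12, c44):
    """Voigt, Reuss, and Hill shear moduli (GPa) for a cubic single crystal.
    The bulk modulus K = (c11 + 2*c12)/3 is identical under both bounds."""
    K = (c11 + 2.0 * c12) / 3.0
    G_V = (c11 - c12 + 3.0 * c44) / 5.0
    G_R = 5.0 * (c11 - c12) * c44 / (4.0 * c44 + 3.0 * (c11 - c12))
    G_H = 0.5 * (G_V + G_R)
    return K, G_V, G_R, G_H

def longitudinal_velocity(K, G, rho):
    """Longitudinal phase velocity (m/s) of the average medium; K, G in GPa, rho in kg/m^3."""
    return ((K + 4.0 * G / 3.0) * 1e9 / rho) ** 0.5

# nominal iron-like constants (GPa) and density (kg/m^3), for illustration only
K, G_V, G_R, G_H = cubic_moduli(230.0, 135.0, 117.0)
```

Since backscatter and attenuation predictions depend on the contrast between the grain and the reference medium, the spread between these three velocities already hints at the model sensitivity the abstract reports.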
A Time-Dependent Λ and G Cosmological Model Consistent with Cosmological Constraints
Directory of Open Access Journals (Sweden)
L. Kantha
2016-01-01
The prevailing constant Λ-G cosmological model agrees with observational evidence including the observed red shift, Big Bang Nucleosynthesis (BBN), and the current rate of acceleration. It assumes that matter contributes 27% to the current density of the universe, with the rest (73%) coming from dark energy represented by the Einstein cosmological parameter Λ in the governing Friedmann-Robertson-Walker equations, derived from Einstein's equations of general relativity. However, the principal problem is the extremely small value of the cosmological parameter (~10^−52 m^−2). Moreover, the dark energy density represented by Λ is presumed to have remained unchanged as the universe expanded by 26 orders of magnitude. Attempts to overcome this deficiency often invoke a variable Λ-G model. Cosmic constraints from action principles require that either both G and Λ remain time-invariant or both vary in time. Here, we propose a variable Λ-G cosmological model consistent with the latest red shift data, the current acceleration rate, and BBN, provided the split between matter and dark energy is 18% and 82%. Λ decreases (Λ ~ τ^−2, where τ is the normalized cosmic time) and G increases (G ~ τ^n) with cosmic time. The model results depend only on the chosen value of Λ at present and in the far future, and not directly on G.
Bayesian network models for error detection in radiotherapy plans
Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.
2015-04-01
The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
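The flagging logic can be sketched with a toy two-node network (hypothetical conditional probability values; the study's networks are learned from 4990 clinical prescription cases): compute the propagated probability of a parameter combination and flag it for review when it falls below a threshold.

```python
# hypothetical conditional probability tables for a two-node plan-checking network
P_SITE = {"lung": 0.6, "brain": 0.4}
P_DOSE_GIVEN_SITE = {
    ("lung", "60Gy"): 0.9, ("lung", "30Gy"): 0.1,
    ("brain", "60Gy"): 0.2, ("brain", "30Gy"): 0.8,
}

def plan_probability(site, dose):
    """Joint probability of the plan parameters under the toy network."""
    return P_SITE[site] * P_DOSE_GIVEN_SITE[(site, dose)]

def flag_plan(site, dose, threshold=0.1):
    """Flag a plan for investigation when its propagated probability is low."""
    return plan_probability(site, dose) < threshold
```

A real deployment would propagate evidence through many more nodes (prescription, technique, fractionation), but the decision rule, low joint probability implies a possible error, is the same.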
Bibliographic Relationships in MARC and Consistent with FRBR Model According to RDA Rules
Directory of Open Access Journals (Sweden)
Mahsa Fardehoseiny
2013-03-01
This study was conducted to investigate the bibliographic relationships in MARC and their consistency with the FRBR model. By establishing the necessary relations between bibliographic records, users can retrieve the information they need faster and more easily; it is therefore important to link existing bibliographic records well. The study's purpose was to define the relationships between bibliographic records in the National Library's OPAC database, and its method was a descriptive content-analysis approach. The online catalog (OPAC) of the National Library of Iran was used to collect information. All records meeting the criteria listed in the final report of the IFLA study on bibliographic relations concerning the first-group entities in the FRBR model and the RDA rules were implemented and analyzed. According to this study, if software were developed in which data transfer was based on the conceptual model and on the MARC data that already exist in the National Library's bibliographic database, these relationships would not be transferable. Moreover, the relationships found consistent between FRBR and MARC were established by an intelligent human reader; the machine is unable to detect them. The results showed that the relations conveyed from MARC to FRBR covered about 47.70 percent of the MARC fields; conversely, from FRBR to MARC, even with all intelligent effort to diagnose MARC relationships, only 31.38 percent of the relations could be covered through MARC. However, based on real data and usable fields in Boostan-e-Saadi records with the MARC pattern in the National Library of Iran, the coverage dropped to 16.95 percent.
Model and simulation of Krause model in dynamic open network
Zhu, Meixia; Xie, Guangqiang
2017-08-01
The construction of the concept of evolution is an effective way to reveal the formation of group consensus. This study is based on the modeling paradigm of the HK (Hegselmann-Krause) model. This paper analyzes the evolution of multi-agent opinion in dynamic open networks with member mobility. The simulation results show that when the number of agents is constant, the interval distribution of the initial opinions affects the number of final views: the wider the distribution of opinions, the more views are eventually formed. The trust threshold has a decisive effect on the number of views, and there is a negative correlation between the trust threshold and the number of opinion clusters. The higher the connectivity of the initial activity group, the more easily subjective opinions converge rapidly during the evolution. A more open network is more conducive to unity of view; increasing or reducing the number of agents does not affect the consistency of the group effect, but it is not conducive to stability.
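A minimal HK (Hegselmann-Krause) bounded-confidence simulation illustrates the reported trust-threshold effect: a large threshold drives consensus, a small one freezes many opinion clusters. This sketch uses a static, fully mixed population rather than the paper's dynamic open network with member mobility.

```python
import numpy as np

def hk_step(opinions, eps):
    """One synchronous HK update: each agent averages all opinions within eps."""
    new = np.empty_like(opinions)
    for i, x in enumerate(opinions):
        neighbors = opinions[np.abs(opinions - x) <= eps]
        new[i] = neighbors.mean()
    return new

def hk_clusters(opinions, eps, steps=100):
    """Iterate to a fixed point, then count opinion clusters separated by more than eps."""
    x = np.array(opinions, dtype=float)
    for _ in range(steps):
        nxt = hk_step(x, eps)
        if np.allclose(nxt, x, atol=1e-12):
            break
        x = nxt
    clusters = []
    for v in np.sort(x):
        if not clusters or v - clusters[-1][-1] > eps:
            clusters.append([v])
        else:
            clusters[-1].append(v)
    return len(clusters), x
```

With eleven agents spread uniformly on [0, 1], a trust threshold of 1.0 collapses everyone to a single consensus, while a threshold of 0.05 (below the 0.1 spacing) leaves every agent isolated, matching the negative correlation between threshold and cluster count described above.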
Batt, Gregory; Page, Michel; Cantone, Irene; Goessler, Gregor; Monteiro, Pedro; de Jong, Hidde
2010-09-15
Investigating the relation between the structure and behavior of complex biological networks often involves posing the question if the hypothesized structure of a regulatory network is consistent with the observed behavior, or if a proposed structure can generate a desired behavior. The above questions can be cast into a parameter search problem for qualitative models of regulatory networks. We develop a method based on symbolic model checking that avoids enumerating all possible parametrizations, and show that this method performs well on real biological problems, using the IRMA synthetic network and benchmark datasets. We test the consistency between IRMA and time-series expression profiles, and search for parameter modifications that would make the external control of the system behavior more robust. GNA and the IRMA model are available at http://ibis.inrialpes.fr/.
Posterior Predictive Model Checking in Bayesian Networks
Crawford, Aaron
2014-01-01
This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…
Energy Technology Data Exchange (ETDEWEB)
Allgood, G.O.; Dress, W.B.; Kercel, S.W.
1999-05-10
A major problem with cavitation in pumps and other hydraulic devices is that there is no effective method for detecting or predicting its inception. The traditional approach is to declare the pump in cavitation when the total head pressure drops by some arbitrary value (typically 3%) in response to a reduction in pump inlet pressure. However, the pump is already cavitating at this point. A method is needed in which cavitation events are captured as they occur and characterized by their process dynamics. The object of this research was to identify specific features of cavitation that could be used as a model-based descriptor in a context-dependent condition-based maintenance (CD-CBM) anticipatory prognostic and health assessment model. This descriptor was based on the physics of the phenomena, capturing the salient features of the process dynamics. An important element of this concept is the development and formulation of the extended process feature vector, or model vector. This model-based descriptor encodes the specific information that describes the phenomena and their dynamics and is formulated as a data structure consisting of several elements. The first is a descriptive model abstracting the phenomena. The second is the parameter list associated with the functional model. The third is a figure of merit, a single number in [0,1] representing a confidence factor that the functional model and parameter list actually describe the observed data. Using this as a basis and applying it to the cavitation problem, any given location in a flow loop will have this data structure, differing in value but not in content.
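The three-element data structure described above can be sketched as a small Python container. All field names below are hypothetical illustrations of the concept, not the report's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class ProcessFeatureVector:
    """Sketch of the extended process feature vector (model vector):
    a descriptive model, its parameter list, and a figure of merit in
    [0, 1] expressing confidence that model + parameters fit the data."""
    model: Callable                               # descriptive/functional model
    parameters: Dict[str, float] = field(default_factory=dict)
    figure_of_merit: float = 0.0                  # confidence factor in [0, 1]

    def __post_init__(self):
        # Enforce the [0, 1] range stated in the description above.
        if not 0.0 <= self.figure_of_merit <= 1.0:
            raise ValueError("figure of merit must lie in [0, 1]")

# One such structure per location in the flow loop, differing in value
# but not in content (illustrative values only).
pfv = ProcessFeatureVector(model=lambda t: 0.0,
                           parameters={"amplitude": 1.0},
                           figure_of_merit=0.9)
```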
A simple model for studying interacting networks
Liu, Wenjia; Jolad, Shivakumar; Schmittmann, Beate; Zia, R. K. P.
2011-03-01
Many specific physical networks (e.g., internet, power grid, interstates), have been characterized in considerable detail, but in isolation from each other. Yet, each of these networks supports the functions of the others, and so far, little is known about how their interactions affect their structure and functionality. To address this issue, we consider two coupled model networks. Each network is relatively simple, with a fixed set of nodes, but dynamically generated set of links which has a preferred degree, κ . In the stationary state, the degree distribution has exponential tails (far from κ), an attribute which we can explain. Next, we consider two such networks with different κ 's, reminiscent of two social groups, e.g., extroverts and introverts. Finally, we let these networks interact by establishing a controllable fraction of cross links. The resulting distribution of links, both within and across the two model networks, is investigated and discussed, along with some potential consequences for real networks. Supported in part by NSF-DMR-0705152 and 1005417.
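The preferred-degree dynamics can be illustrated with a toy single-network simulation. This is a sketch under simplified assumptions (a node below its preferred degree κ always adds a random link, a node at or above κ always cuts one); the paper's exact attachment/cutting rules and the cross-link mechanism are not reproduced here:

```python
import random

def preferred_degree_network(n, kappa, steps, seed=1):
    """Toy dynamic network with a preferred degree: at each update a
    random node adds a link if its degree is below kappa, otherwise it
    cuts one of its links. Returns the adjacency sets."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for _ in range(steps):
        i = rng.randrange(n)
        if len(adj[i]) < kappa:
            # Add a link to a random non-neighbor.
            candidates = [k for k in range(n) if k != i and k not in adj[i]]
            j = rng.choice(candidates)
            adj[i].add(j); adj[j].add(i)
        else:
            # Cut a random existing link.
            j = rng.choice(sorted(adj[i]))
            adj[i].discard(j); adj[j].discard(i)
    return adj

adj = preferred_degree_network(n=60, kappa=5, steps=20000)
mean_deg = sum(len(v) for v in adj.values()) / len(adj)
```

In the stationary state the mean degree hovers near κ, while individual degrees fluctuate around it, consistent with the exponential tails described above.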
Modeling gene regulatory network motifs using Statecharts.
Fioravanti, Fabio; Helmer-Citterich, Manuela; Nardelli, Enrico
2012-03-28
Gene regulatory networks are widely used by biologists to describe the interactions among genes, proteins and other components at the intra-cellular level. Recently, a great effort has been devoted to give gene regulatory networks a formal semantics based on existing computational frameworks. For this purpose, we consider Statecharts, which are a modular, hierarchical and executable formal model widely used to represent software systems. We use Statecharts for modeling small and recurring patterns of interactions in gene regulatory networks, called motifs. We present an improved method for modeling gene regulatory network motifs using Statecharts and we describe the successful modeling of several motifs, including those which could not be modeled or whose models could not be distinguished using the method of a previous proposal. We model motifs in an easy and intuitive way by taking advantage of the visual features of Statecharts. Our modeling approach is able to simulate some interesting temporal properties of gene regulatory network motifs: the delay in the activation and the deactivation of the "output" gene in the coherent type-1 feedforward loop, the pulse in the incoherent type-1 feedforward loop, the bistability nature of double positive and double negative feedback loops, the oscillatory behavior of the negative feedback loop, and the "lock-in" effect of positive autoregulation. We present a Statecharts-based approach for the modeling of gene regulatory network motifs in biological systems. The basic motifs used to build more complex networks (that is, simple regulation, reciprocal regulation, feedback loop, feedforward loop, and autoregulation) can be faithfully described and their temporal dynamics can be analyzed.
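The delayed activation of the "output" gene in the coherent type-1 feedforward loop can be reproduced with a minimal synchronous Boolean simulation; this is an illustrative sketch of the motif's logic, not the Statecharts models themselves:

```python
def simulate_c1_ffl(input_signal):
    """Synchronous Boolean simulation of a coherent type-1 feedforward
    loop: X activates Y, and X AND Y jointly activate the output Z.
    Takes the time series of X (0/1) and returns the time series of Z."""
    y = 0
    z_series = []
    for x in input_signal:
        z_next = x and y   # AND gate: Z needs both X and Y active
        y_next = x         # Y follows X with a one-step delay
        y = y_next
        z_series.append(z_next)
    return z_series

# Step input: X switches ON at t=2 and stays on.
x_series = [0, 0, 1, 1, 1, 1]
z_series = simulate_c1_ffl(x_series)
```

Z turns on one step after X because it must wait for Y to accumulate, reproducing the activation delay described above.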
Neural network approaches for noisy language modeling.
Li, Jun; Ouazzane, Karim; Kazemian, Hassan B; Afzal, Muhammad Sajid
2013-11-01
Text entry from people is not only grammatical and distinct, but also noisy. For example, a user's typing stream contains all the information about the user's interaction with the computer using a QWERTY keyboard, which may include the user's typing mistakes as well as specific vocabulary, typing habits, and typing performance. These features are particularly evident in disabled users' typing streams. This paper proposes a new concept called noisy language modeling by further developing information theory, and applies neural networks to one of its specific applications, the typing stream. This paper experimentally uses a neural network approach to analyze disabled users' typing streams, both in general and in specific ways, to identify their typing behaviors and, subsequently, to make typing predictions and typing corrections. In this paper, a focused time-delay neural network (FTDNN) language model, a time gap model, a prediction model based on time gap, and a probabilistic neural network (PNN) model are developed. A 38% first hitting rate (HR) and a 53% first-three HR in symbol prediction are obtained from the analysis of a user's typing history through FTDNN language modeling, while the results using the time gap prediction model and the PNN model show that the correction rates lie predominantly between 65% and 90% for the current testing samples, with 70% of all test scores above the basic correction rates, respectively. The modeling process demonstrates that a neural network is a suitable and robust language modeling tool for analyzing noisy language streams. The research also paves the way for practical application development in areas such as informational analysis, text prediction, and error correction by providing a theoretical basis of neural network approaches for noisy language modeling.
A quantum-implementable neural network model
Chen, Jialin; Wang, Lingli; Charbon, Edoardo
2017-10-01
A quantum-implementable neural network, namely a quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions and can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. MATLAB experimental results on Iris data classification and MNIST handwriting recognition show that far fewer neuron resources are required in QPNN to obtain a good result than in a classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.
Telestroke network business model strategies.
Fanale, Christopher V; Demaerschalk, Bart M
2012-10-01
Our objective is to summarize the evidence that supports the reliability of telemedicine for diagnosis and efficacy in acute stroke treatment, identify strategies for funding the development of a telestroke network, and to present issues with respect to economic sustainability, cost effectiveness, and the status of reimbursement for telestroke. Copyright © 2012 National Stroke Association. Published by Elsevier Inc. All rights reserved.
Veliz-Cuba, Alan; Aguilar, Boris; Hinkelmann, Franziska; Laubenbacher, Reinhard
2014-06-26
A key problem in the analysis of mathematical models of molecular networks is the determination of their steady states. The present paper addresses this problem for Boolean network models, an increasingly popular modeling paradigm for networks lacking detailed kinetic information. For small models, the problem can be solved by exhaustive enumeration of all state transitions. But for larger models this is not feasible, since the size of the phase space grows exponentially with the dimension of the network. The dimension of published models is growing to over 100, so efficient methods for steady state determination are essential. Several methods have been proposed for large networks, some of them heuristic. While these methods represent a substantial improvement in scalability over exhaustive enumeration, the problem for large networks is still unsolved in general. This paper presents an algorithm that consists of two main parts. The first is a graph-theoretic reduction of the wiring diagram of the network that preserves all information about steady states. The second part formulates the determination of all steady states of a Boolean network as the problem of finding all solutions to a system of polynomial equations over the field with two elements, GF(2). This problem can be solved with existing computer algebra software. The algorithm compares favorably with several existing algorithms for steady state determination. One advantage is that it is not heuristic or reliant on sampling, but rather determines algorithmically and exactly all steady states of a Boolean network. The code for the algorithm, as well as the test suite of benchmark networks, is available upon request from the corresponding author. The algorithm presented in this paper reliably determines all steady states of sparse Boolean networks with up to 1000 nodes, and is effective at analyzing virtually all published models, even those of moderate connectivity. The problem for
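For small networks, the exhaustive enumeration that the paper improves upon looks like the sketch below; the algebraic method replaces the loop over all 2^n states with solving the polynomial system f_i(x) + x_i = 0 over GF(2). The three-node example network here is hypothetical:

```python
from itertools import product

def steady_states(update_fns):
    """Exhaustively find all steady states x with f(x) = x of a Boolean
    network given as a list of per-node update functions. Feasible only
    for small n, since the state space has 2**n elements."""
    n = len(update_fns)
    fixed = []
    for state in product((0, 1), repeat=n):
        if all(f(state) == state[i] for i, f in enumerate(update_fns)):
            fixed.append(state)
    return fixed

# Toy 3-node network: a mutual-activation pair plus a repressed node.
fns = [
    lambda s: s[1],        # x0 <- x1
    lambda s: s[0],        # x1 <- x0
    lambda s: 1 - s[0],    # x2 <- NOT x0
]
states = steady_states(fns)
```

The two steady states of this toy network are the ones where the mutually activating pair is jointly OFF or jointly ON, with the repressed node taking the opposite value.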
A Neuron Model for FPGA Spiking Neuronal Network Implementation
Directory of Open Access Journals (Sweden)
BONTEANU, G.
2011-11-01
Full Text Available We propose a neuron model, able to reproduce the basic elements of neuronal dynamics, optimized for digital implementation of spiking neural networks. Its architecture is structured in two major blocks, a datapath and a control unit. The datapath consists of a membrane potential circuit, which emulates the neuronal dynamics at the soma level, and a synaptic circuit used to update the synaptic weight according to the spike-timing-dependent plasticity (STDP) mechanism. The proposed model is implemented on a Cyclone II Altera FPGA device. Our results indicate that the neuron model can be used to build 1K-neuron spiking neural networks on reconfigurable logic support, to explore various network topologies.
Windridge, David; Kittler, Josef
2010-01-01
As well as having the ability to formulate models of the world capable of experimental falsification, it is evident that human cognitive capability embraces some degree of representational plasticity, having the scope (at least in infancy) to modify the primitives in terms of which the world is delineated. We hence employ the term 'cognitive bootstrapping' to refer to the autonomous updating of an embodied agent's perceptual framework in response to the perceived requirements of the environment in such a way as to retain the ability to refine the environment model in a consistent fashion across perceptual changes. We will thus argue that the concept of cognitive bootstrapping is epistemically ill-founded unless there exists an a priori percept/motor interrelation capable of maintaining an empirical distinction between the various possibilities of perceptual categorization and the inherent uncertainties of environment modeling. As an instantiation of this idea, we shall specify a very general, logically-inductive model of perception-action learning capable of compact re-parameterization of the percept space. In consequence of the a priori percept/action coupling, the novel perceptual state transitions so generated always exist in bijective correlation with a set of novel action states, giving rise to the required empirical validation criterion for perceptual inferences. Environmental description is correspondingly accomplished in terms of progressively higher-level affordance conjectures which are likewise validated by exploratory action. Application of this mechanism within simulated perception-action environments indicates that, as well as significantly reducing the size and specificity of the a priori perceptual parameter-space, the method can significantly reduce the number of iterations required for accurate convergence of the world-model. It does so by virtue of the active learning characteristics implicit in the notion of cognitive bootstrapping.
Directory of Open Access Journals (Sweden)
Damian M Cummings
2010-05-01
Full Text Available Since the identification of the gene responsible for HD (Huntington's disease), many genetic mouse models have been generated. Each employs a unique approach for delivery of the mutated gene and has a different CAG repeat length and background strain. The resultant diversity in the genetic context and phenotypes of these models has led to extensive debate regarding the relevance of each model to the human disorder. Here, we compare and contrast the striatal synaptic phenotypes of two models of HD, namely the YAC128 mouse, which carries the full-length huntingtin gene on a yeast artificial chromosome, and the CAG140 KI (knock-in) mouse, which carries a human/mouse chimaeric gene that is expressed in the context of the mouse genome, with our previously published data obtained from the R6/2 mouse, which is transgenic for exon 1 mutant huntingtin. We show that striatal MSNs (medium-sized spiny neurons) in YAC128 and CAG140 KI mice have electrophysiological phenotypes similar to those of the R6/2 mouse. These include a progressive increase in membrane input resistance, a reduction in membrane capacitance, a lower frequency of spontaneous excitatory postsynaptic currents and a greater frequency of spontaneous inhibitory postsynaptic currents in a subpopulation of striatal neurons. Thus, despite differences in the context of the inserted gene between these three models of HD, the primary electrophysiological changes observed in striatal MSNs are consistent. The outcomes suggest that the changes are due to the expression of mutant huntingtin and such alterations can be extended to the human condition.
Complex networks under dynamic repair model
Chaoqi, Fu; Ying, Wang; Kun, Zhao; Yangjun, Gao
2018-01-01
Invulnerability is not the only factor of importance when considering complex networks' security. It is also critical to have an effective and reasonable repair strategy. Existing research on network repair is confined to the static model. The dynamic model makes better use of the redundant capacity of repaired nodes and repairs the damaged network more efficiently than the static model; however, the dynamic repair model is complex and polytropic. In this paper, we construct a dynamic repair model and systematically describe the energy-transfer relationships between nodes in the repair process of the failure network. Nodes are divided into three types, corresponding to three structures. We find that the strong coupling structure is responsible for secondary failure of the repaired nodes and propose an algorithm that can select the most suitable targets (nodes or links) to repair the failure network with minimal cost. Two types of repair strategies are identified, with different effects under the two energy-transfer rules. The research results enable a more flexible approach to network repair.
Markov State Models of gene regulatory networks.
Chu, Brian K; Tse, Margaret J; Sato, Royce R; Read, Elizabeth L
2017-02-06
Gene regulatory networks with dynamics characterized by multiple stable states underlie cell fate decisions. Quantitative models that can link molecular-level knowledge of gene regulation to a global understanding of network dynamics have the potential to guide cell-reprogramming strategies. Networks are often modeled by the stochastic Chemical Master Equation, but methods for systematic identification of key properties of the global dynamics are currently lacking. Here, the Markov State Model framework, adopted from the field of atomistic Molecular Dynamics, is applied: the method identifies the number, phenotypes, and lifetimes of long-lived states for a set of common gene regulatory network models. Application of transition path theory to the constructed Markov State Model decomposes global dynamics into a set of dominant transition paths and associated relative probabilities for stochastic state-switching. In this proof-of-concept study, we found that the Markov State Model provides a general framework for analyzing and visualizing stochastic multistability and state-transitions in gene networks. Our results suggest that this framework can be a useful tool for quantitative Systems Biology at the network scale.
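The core object of a Markov State Model, a coarse transition matrix whose stationary distribution and slow transitions summarize the long-time dynamics, can be illustrated with a toy two-state example. The switching probabilities below are invented for illustration and are not taken from the paper's gene-network models:

```python
def stationary(P, iters=2000):
    """Stationary distribution of a row-stochastic transition matrix,
    computed by power iteration from the uniform distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Toy 2-state MSM for a bistable circuit: two long-lived phenotypes
# with rare stochastic switching between them (hypothetical rates).
P = [[0.99, 0.01],
     [0.02, 0.98]]
pi = stationary(P)
```

Here detailed balance gives pi[0]/pi[1] = 0.02/0.01 = 2, so the stationary distribution is (2/3, 1/3): the state that is harder to leave accumulates more probability, which is how an MSM exposes the relative lifetimes of long-lived states.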
Hernández-Pajares, Manuel; Roma-Dollase, David; Krankowski, Andrzej; García-Rigo, Alberto; Orús-Pérez, Raül
2017-12-01
A summary of the main concepts on global ionospheric map(s) [hereinafter GIM(s)] of vertical total electron content (VTEC), with special emphasis on their assessment, is presented in this paper. It is based on the experience accumulated during almost two decades of collaborative work in the context of the international global navigation satellite systems (GNSS) service (IGS) ionosphere working group. A representative comparison of the two main assessments of ionospheric electron content models (VTEC-altimeter, and difference of slant TEC based on independent global positioning system (GPS) data, dSTEC-GPS) is performed. It is based on 26 worldwide-distributed GPS receivers, mostly placed on islands, from the last quarter of 2010 to the end of 2016. The consistency between dSTEC-GPS and VTEC-altimeter assessments for one of the most accurate IGS GIMs (the tomographic-kriging GIM `UQRG' computed by UPC) is shown. Typical RMS error values of 2 TECU for the VTEC-altimeter and 0.5 TECU for the dSTEC-GPS assessments are found. As expected from a simple random model, there is a significant correlation between both RMS and, especially, relative errors, mainly evident when a large enough number of observations per pass is considered. The authors expect that this manuscript will be useful for new analysis contributor centres and, in general, for the scientific and technical community interested in simple and truly external ways of validating electron content models of the ionosphere.
Consistency of non-flat $\\Lambda$CDM model with the new result from BOSS
Kumar, Suresh
2015-01-01
Using 137,562 quasars in the redshift range $2.1\leq z\leq3.5$ from Data Release 11 (DR11) of the Baryon Oscillation Spectroscopic Survey (BOSS) of the Sloan Digital Sky Survey (SDSS)-III, the BOSS-SDSS collaboration estimated the expansion rate $H(z=2.34)=222\pm7$ km/s/Mpc of the Universe, and reported that this value is in tension with the predictions of the flat $\Lambda$CDM model at around the 2.5$\sigma$ level. In this letter, we briefly describe some attempts made in the literature to relieve the tension, and show that the tension can naturally be alleviated in a non-flat $\Lambda$CDM model with positive curvature. However, this idea is in conflict with the inflationary paradigm, which predicts an almost spatially flat Universe. Nevertheless, the theoretical consistency of the non-flat $\Lambda$CDM model with the new result from BOSS deserves the attention of the community.
Self-Consistent Atmosphere Models of the Most Extreme Hot Jupiters
Lothringer, Joshua; Barman, Travis
2018-01-01
We present a detailed look at self-consistent PHOENIX atmosphere models of the most highly irradiated hot Jupiters known to exist. These hot Jupiters typically have equilibrium temperatures approaching and sometimes exceeding 3000 K, orbiting A, F, and early-G type stars on orbits less than 0.03 AU (10x closer than Mercury is to the Sun). The most extreme example, KELT-9b, is the hottest known hot Jupiter with a measured dayside temperature of 4600 K. Many of the planets we model have recently attracted attention with high profile discoveries, including temperature inversions in WASP-33b and WASP-121, changing phase curve offsets possibly caused by magnetohydrodynamic effects in HAT-P-7b, and TiO in WASP-19b. Our modeling provides a look at the a priori expectations for these planets and helps us understand these recent discoveries. We show that, in the hottest cases, all molecules are dissociated down to relatively high pressures. These planets may have detectable temperature inversions, more akin to thermospheres than stratospheres in that an optical absorber like TiO or VO is not needed. Instead, the inversions are created by a lack of cooling in the IR combined with heating from atoms and ions at UV and blue optical wavelengths. We also reevaluate some of the assumptions that have been made in retrieval analyses of these planets.
Zimmermann, Eva; Seifert, Udo
2015-02-01
Many single-molecule experiments for molecular motors comprise not only the motor but also large probe particles coupled to it. The theoretical analysis of these assays, however, often takes into account only the degrees of freedom representing the motor. We present a coarse-graining method that maps a model comprising two coupled degrees of freedom which represent motor and probe particle to such an effective one-particle model by eliminating the dynamics of the probe particle in a thermodynamically and dynamically consistent way. The coarse-grained rates obey a local detailed balance condition and reproduce the net currents. Moreover, the average entropy production as well as the thermodynamic efficiency is invariant under this coarse-graining procedure. Our analysis reveals that only by assuming unrealistically fast probe particles, the coarse-grained transition rates coincide with the transition rates of the traditionally used one-particle motor models. Additionally, we find that for multicyclic motors the stall force can depend on the probe size. We apply this coarse-graining method to specific case studies of the F(1)-ATPase and the kinesin motor.
Performance modeling, stochastic networks, and statistical multiplexing
Mazumdar, Ravi R
2013-01-01
This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks.The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the importan
Height-Diameter Models for Mixed-Species Forests Consisting of Spruce, Fir, and Beech
Directory of Open Access Journals (Sweden)
Petráš Rudolf
2014-06-01
Full Text Available Height-diameter models define the general relationship between tree height and diameter at each growth stage of the forest stand. This paper presents generalized height-diameter models for mixed-species forest stands consisting of Norway spruce (Picea abies Karst.), silver fir (Abies alba L.), and European beech (Fagus sylvatica L.) from Slovakia. The models were derived using two growth functions from the exponential family: the two-parameter Michailoff and three-parameter Korf functions. Generalized height-diameter functions must normally be constrained to pass through the mean stand diameter and height, so the final growth model has only one or two parameters to be estimated. These "free" parameters are then expressed over the quadratic mean diameter, height and stand age, and the final mathematical form of the model is obtained. The study material included 50 long-term experimental plots located in the Western Carpathians. The plots were established 40-50 years ago and have been repeatedly measured at 5 to 10-year intervals. The dataset includes 7,950 height measurements of spruce, 21,661 of fir and 5,794 of beech. As many as 9 regression models were derived for each species. Although the goodness of fit of all models showed that they were generally well suited to the data, the best results were obtained for silver fir: the coefficient of determination ranged from 0.946 to 0.948, the RMSE (m) was in the interval 1.94-1.97, and the bias (m) was -0.031 to 0.063. Parameter estimation was slightly less precise for spruce, and the regression parameter estimates obtained for beech were the least precise: the coefficient of determination for beech was 0.854-0.860, the RMSE (m) 2.67-2.72, and the bias (m) ranged from -0.144 to -0.056. The majority of models using Korf's formula produced slightly better estimates than Michailoff's, and it proved immaterial which estimated parameter was fixed and which parameters
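As a sketch, the two-parameter Michailoff function has the form h(d) = 1.3 + a * exp(-b / d), where the 1.3 m offset is breast height. The parameter values below are illustrative only, not the fitted Slovak values, and the generalized (stand-constrained) form is not reproduced here:

```python
import math

def michailoff_height(d, a, b):
    """Two-parameter Michailoff height-diameter function:
    h(d) = 1.3 + a * exp(-b / d), with diameter d in cm and height in m.
    Parameters a, b here are illustrative, not fitted values."""
    return 1.3 + a * math.exp(-b / d)

# Height increases monotonically with diameter toward the asymptote 1.3 + a.
h_small = michailoff_height(10.0, a=35.0, b=12.0)   # thin stem
h_large = michailoff_height(60.0, a=35.0, b=12.0)   # thick stem
```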
The self-consistent field model for Fermi systems with account of three-body interactions
Directory of Open Access Journals (Sweden)
Yu.M. Poluektov
2015-12-01
Full Text Available On the basis of a microscopic model of self-consistent field, the thermodynamics of the many-particle Fermi system at finite temperatures with account of three-body interactions is built and the quasiparticle equations of motion are obtained. It is shown that the delta-like three-body interaction gives no contribution into the self-consistent field, and the description of three-body forces requires their nonlocality to be taken into account. The spatially uniform system is considered in detail, and on the basis of the developed microscopic approach general formulas are derived for the fermion's effective mass and the system's equation of state with account of contribution from three-body forces. The effective mass and pressure are numerically calculated for the potential of "semi-transparent sphere" type at zero temperature. Expansions of the effective mass and pressure in powers of density are obtained. It is shown that, with account of only pair forces, the interaction of repulsive character reduces the quasiparticle effective mass relative to the mass of a free particle, and the attractive interaction raises the effective mass. The question of thermodynamic stability of the Fermi system is considered and the three-body repulsive interaction is shown to extend the region of stability of the system with the interparticle pair attraction. The quasiparticle energy spectrum is calculated with account of three-body forces.
Self-consistent coupling of DSMC method and SOLPS code for modeling tokamak particle exhaust
Bonelli, F.; Varoutis, S.; Coster, D.; Day, C.; Zanino, R.; Contributors, JET
2017-06-01
In this work, an investigation of the neutral gas flow in the JET sub-divertor area is presented, with respect to the interaction between the plasma side and the pumping side. The edge plasma side is simulated with the SOLPS code, while the sub-divertor area is modeled by means of the direct simulation Monte Carlo (DSMC) method, which in the last few years has proved well able to describe rarefied, collisional flows in tokamak sub-divertor structures. Four different plasma scenarios have been selected, and for each of them a user-defined, iterative procedure between SOLPS and DSMC has been established, using the neutral flux as the key communication term between the two codes. The goal is to understand and quantify the mutual influence between the two regions in a self-consistent manner, that is to say, how the particle exhaust pumping system controls the upstream plasma conditions. Parametric studies of the flow conditions in the sub-divertor, including additional flow outlets and variations of the cryopump capture coefficient, have been performed as well, in order to understand their overall impact on the flow field. The DSMC analyses resulted in the calculation of both the macroscopic quantities—i.e. temperature, number density and pressure—and the recirculation fluxes towards the plasma chamber. The consistent values for the recirculation rates were found to be smaller than those according to the initial standard assumption made by SOLPS.
A coevolving model based on preferential triadic closure for social media networks.
Li, Menghui; Zou, Hailin; Guan, Shuguang; Gong, Xiaofeng; Li, Kun; Di, Zengru; Lai, Choy-Heng
2013-01-01
The dynamical origin of complex networks, i.e., the underlying principles governing network evolution, is a crucial issue in network study. In this paper, by analyzing temporal data from Flickr and Epinions, two typical social media networks, we found that the dynamical pattern in neighborhoods, especially the formation of triadic links, plays a dominant role in the evolution of networks. We thus propose a coevolving dynamical model for such networks, in which the evolution is driven only by local dynamics: preferential triadic closure. Numerical experiments verified that the model can reproduce global properties that are qualitatively consistent with the empirical observations.
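A highly simplified growth model driven by triadic closure can be sketched as follows. The attachment rule and closure probability are illustrative assumptions; the paper's coevolving model, which couples closure preference to the node dynamics, is not reproduced here:

```python
import random

def grow_triadic(n, p_triad=0.8, seed=3):
    """Toy growth model: each new node links to one random existing node,
    then with probability p_triad also links to a random neighbor of that
    node, closing a triangle (a crude form of triadic closure)."""
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}
    closed = 0
    for new in range(2, n):
        target = rng.randrange(new)
        adj[new] = {target}
        adj[target].add(new)
        others = sorted(adj[target] - {new})
        if others and rng.random() < p_triad:
            friend = rng.choice(others)
            adj[new].add(friend)
            adj[friend].add(new)
            closed += 1
    return adj, closed

adj, closed = grow_triadic(200)
```

Because most links form between a newcomer and two already-connected nodes, the resulting network is triangle-rich, the qualitative signature of neighborhood-driven link formation described above.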
Thermal states of neutron stars with a consistent model of interior
Fortin, M.; Taranto, G.; Burgio, G. F.; Haensel, P.; Schulze, H.-J.; Zdunik, J. L.
2018-01-01
We model the thermal states of both isolated neutron stars and accreting neutron stars in X-ray transients in quiescence and confront them with observations. We use an equation of state calculated using realistic two-body and three-body nucleon interactions, and superfluid nucleon gaps obtained using the same microscopic approach in the BCS approximation. Consistency with low-luminous accreting neutron stars is obtained, as the direct Urca process is operating in neutron stars with mass larger than 1.1M⊙ for the employed equation of state. In addition, proton superfluidity and sufficiently weak neutron superfluidity, obtained using a scaling factor for the gaps, are necessary to explain the cooling of middle-aged neutron stars and to obtain a realistic distribution of neutron star masses.
A Self-consistency Approach to Multinomial Logit Model with Random Effects.
Wang, Shufang; Tsodikov, Alex
2010-07-01
The computation in the multinomial logit mixed effects model is costly especially when the response variable has a large number of categories, since it involves high-dimensional integration and maximization. Tsodikov and Chefo (2008) developed a stable MLE approach to problems with independent observations, based on generalized self-consistency and quasi-EM algorithm developed in Tsodikov (2003). In this paper, we apply the idea to clustered multinomial response to simplify the maximization step. The method transforms the complex multinomial likelihood to Poisson-type likelihood and hence allows for the estimates to be obtained iteratively solving a set of independent low-dimensional problems. The methodology is applied to real data and studied by simulations. While maximization is simplified, numerical integration remains the dominant challenge to computational efficiency.
Logical Modeling and Dynamical Analysis of Cellular Networks.
Abou-Jaoudé, Wassim; Traynard, Pauline; Monteiro, Pedro T; Saez-Rodriguez, Julio; Helikar, Tomáš; Thieffry, Denis; Chaouiya, Claudine
2016-01-01
The logical (or logic) formalism is increasingly used to model regulatory and signaling networks. Complementing these applications, several groups contributed various methods and tools to support the definition and analysis of logical models. After an introduction to the logical modeling framework and to several of its variants, we review here a number of recent methodological advances to ease the analysis of large and intricate networks. In particular, we survey approaches to determine model attractors and their reachability properties, to assess the dynamical impact of variations of external signals, and to consistently reduce large models. To illustrate these developments, we further consider several published logical models for two important biological processes, namely the differentiation of T helper cells and the control of mammalian cell cycle.
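Determining model attractors, one of the analysis tasks surveyed above, can be done by brute force for small logical models under synchronous updating. The following self-contained sketch (the update function, state encoding, and function name are illustrative assumptions, not any specific tool's API) follows every trajectory until it revisits a state and collects the resulting cycles; it enumerates all 2^n states, so it only suits small n.

```python
from itertools import product

def attractors(update, n):
    """Find all attractors of a synchronous Boolean network with n
    components by following each state's trajectory to its cycle."""
    found = set()
    for state in product((0, 1), repeat=n):
        trail = {}
        while state not in trail:
            trail[state] = len(trail)
            state = update(state)
        # states at or after the first revisit index form the cycle
        cycle = tuple(sorted(s for s, i in trail.items()
                             if i >= trail[state]))
        found.add(cycle)
    return found

# toy mutual-inhibition switch: x' = NOT y, y' = NOT x
toggle = lambda s: (1 - s[1], 1 - s[0])
```

For the toggle switch this yields two fixed points and one two-state cycle, matching the classic bistable-switch analysis.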
Modeling acquaintance networks based on balance theory
Directory of Open Access Journals (Sweden)
Vukašinović Vida
2014-09-01
An acquaintance network is a social structure made up of a set of actors and the ties between them. These ties change dynamically as a consequence of incessant interactions between the actors. In this paper we introduce a social network model called the Interaction-Based (IB) model that incorporates well-known sociological principles. The connections between the actors and the strength of the connections are influenced by the continuous positive and negative interactions between the actors and, vice versa, future interactions are more likely to happen between actors that are connected by stronger ties. The model is also inspired by the social behavior of animal species, particularly that of ants in their colony. A model evaluation showed that the IB model turns out to be sparse. The model has a small diameter and an average path length that grows in proportion to the logarithm of the number of vertices. The clustering coefficient is relatively high, and its value stabilizes in larger networks. The degree distributions are slightly right-skewed. In the mature phase of the IB model, i.e., when the number of edges does not change significantly, most of the network properties do not change significantly either. The IB model was found to be the best of all the compared models in simulating the e-mail URV (University Rovira i Virgili of Tarragona) network because the properties of the IB model more closely matched those of the e-mail URV network than those of the other models.
Buchanan, John J; Dean, Noah
2014-02-01
The experiment undertaken was designed to elucidate the impact of model skill level on observational learning processes. The task was bimanual circle tracing with a 90° relative phase lead of one hand over the other hand. Observer groups watched videos of either an instruction model, a discovery model, or a skilled model. The instruction and skilled model always performed the task with the same movement strategy, the right-arm traced clockwise and the left-arm counterclockwise around circle templates with the right-arm leading. The discovery model used several movement strategies (tracing-direction/hand-lead) during practice. Observation of the instruction and skilled model provided a significant benefit compared to the discovery model when performing the 90° relative phase pattern in a post-observation test. The observers of the discovery model had significant room for improvement and benefited from post-observation practice of the 90° pattern. The benefit of a model is found in the consistency with which that model uses the same movement strategy, and not within the skill level of the model. It is the consistency in strategy modeled that allows observers to develop an abstract perceptual representation of the task that can be implemented into a coordinated action. Theoretically, the results show that movement strategy information (relative motion direction, hand lead) and relative phase information can be detected through visual perception processes and be successfully mapped to outgoing motor commands within an observational learning context. Copyright © 2013 Elsevier B.V. All rights reserved.
A Fully Self-consistent Multi-layered Model of Jupiter
Kong, Dali; Zhang, Keke; Schubert, Gerald
2016-08-01
We construct a three-dimensional, fully self-consistent, multi-layered, non-spheroidal model of Jupiter consisting of an inner core, a metallic electrically conducting dynamo region, and an outer molecular electrically insulating envelope. We assume that the Jovian zonal winds are on cylinders parallel to the rotation axis but, due to the effect of magnetic braking, are confined within the outer molecular envelope. We also assume that the location of the molecular-metallic interface is characterized by its equatorial radius H R_e, where R_e is the equatorial radius of Jupiter at the 1 bar pressure level and H is treated as a parameter of the model. We solve the relevant mathematical problem via a perturbation approach. The leading-order problem determines the density, size, and shape of the inner core, the irregular shape of the 1 bar pressure level, and the internal structure of Jupiter that accounts for the full effect of rotational distortion, but without the influence of the zonal winds; the next-order problem determines the variation of the gravitational field solely caused by the effect of the zonal winds on the rotationally distorted non-spheroidal Jupiter. The leading-order solution produces the known mass, the known equatorial and polar radii, and the known zonal gravitational coefficient J_2 of Jupiter within their error bars; it also yields the coefficients J_4 and J_6 within about 5% accuracy, the core equatorial radius 0.09 R_e, and the core density ρ_c = 2.0 × 10^4 kg m^-3, corresponding to 3.73 Earth masses; the next-order solution yields the wind-induced variation of the zonal gravitational coefficients of Jupiter.
Flood routing modelling with Artificial Neural Networks
Directory of Open Access Journals (Sweden)
R. Peters
2006-01-01
For the modelling of the flood routing in the lower reaches of the Freiberger Mulde river and its tributaries, the one-dimensional hydrodynamic modelling system HEC-RAS has been applied. Furthermore, this model was used to generate a database to train multilayer feedforward networks. To guarantee numerical stability for the hydrodynamic modelling of some 60 km of stream course, an adequate resolution in space requires very small calculation time steps, some two orders of magnitude smaller than the input data resolution. This leads to quite high computation requirements, seriously restricting the application, especially when dealing with real-time operations such as online flood forecasting. In order to solve this problem we tested the application of Artificial Neural Networks (ANN). First studies show the ability of adequately trained multilayer feedforward networks (MLFN) to reproduce the model performance.
Optimal transportation networks models and theory
Bernot, Marc; Morel, Jean-Michel
2009-01-01
The transportation problem can be formalized as the problem of finding the optimal way to transport a given measure into another with the same mass. In contrast to the Monge-Kantorovitch problem, recent approaches model the branched structure of such supply networks as minima of an energy functional whose essential feature is to favour wide roads. Such a branched structure is observable in ground transportation networks, in draining and irrigation systems, in electrical power supply systems and in natural counterparts such as blood vessels or the branches of trees. These lectures provide mathematical proof of several existence, structure and regularity properties empirically observed in transportation networks. The link with previous discrete physical models of irrigation and erosion models in geomorphology and with discrete telecommunication and transportation models is discussed. It will be mathematically proven that the majority fit in the simple model sketched in this volume.
Energy Technology Data Exchange (ETDEWEB)
BRANNON,REBECCA M.
2000-11-01
A theory is developed for the response of moderately porous solids (no more than ~20% void space) to high-strain-rate deformations. The model is consistent because each feature is incorporated in a manner that is mathematically compatible with the other features. Unlike simple p-α models, the onset of pore collapse depends on the amount of shear present. The user-specifiable yield function depends on pressure, effective shear stress, and porosity. The elastic part of the strain rate is linearly related to the stress rate, with nonlinear corrections from changes in the elastic moduli due to pore collapse. Plastically incompressible flow of the matrix material allows pore collapse and an associated macroscopic plastic volume change. The plastic strain rate due to pore collapse/growth is taken normal to the yield surface. If phase transformation and/or pore nucleation are simultaneously occurring, the inelastic strain rate will be non-normal to the yield surface. To permit hardening, the yield stress of the matrix material is treated as an internal state variable. Changes in porosity and matrix yield stress naturally cause the yield surface to evolve. The stress, porosity, and all other state variables vary in a consistent manner so that the stress remains on the yield surface throughout any quasistatic interval of plastic deformation. Dynamic loading allows the stress to exceed the yield surface via an overstress ordinary differential equation that is solved in closed form for better numerical accuracy. The part of the stress rate that causes no plastic work (i.e., the part that has a zero inner product with the stress deviator and the identity tensor) is given by the projection of the elastic stress rate orthogonal to the span of the stress deviator and the identity tensor. The model, which has been numerically implemented in MIG format, has been exercised under a wide array of extremal loading and unloading paths. As will be discussed in a companion
Petrovskaya, Olga V; Petrovskiy, Evgeny D; Lavrik, Inna N; Ivanisenko, Vladimir A
2017-04-01
Gene network modeling is one of the widely used approaches in systems biology. It allows for the study of complex genetic systems function, including so-called mosaic gene networks, which consist of functionally interacting subnetworks. We conducted a study of a mosaic gene networks modeling method based on integration of models of gene subnetworks by linear control functionals. An automatic modeling of 10,000 synthetic mosaic gene regulatory networks was carried out using computer experiments on gene knockdowns/knockouts. Structural analysis of graphs of generated mosaic gene regulatory networks has revealed that the most important factor for building accurate integrated mathematical models, among those analyzed in the study, is data on expression of genes corresponding to the vertices with high properties of centrality.
Study of impurity effects on CFETR steady-state scenario by self-consistent integrated modeling
Shi, Nan; Chan, Vincent S.; Jian, Xiang; Li, Guoqiang; Chen, Jiale; Gao, Xiang; Shi, Shengyu; Kong, Defeng; Liu, Xiaoju; Mao, Shifeng; Xu, Guoliang
2017-12-01
Impurity effects on the fusion performance of the China Fusion Engineering Test Reactor (CFETR) due to extrinsic seeding are investigated. An integrated 1.5D modeling workflow evolves the plasma equilibrium and all transport channels to steady state. The One Modeling Framework for Integrated Tasks (OMFIT) is used to couple the transport solver, MHD equilibrium solver, and source and sink calculations. A self-consistent impurity profile constructed using a steady-state background plasma, which satisfies quasi-neutrality and true steady state, is presented for the first time. Studies are performed based on an optimized fully non-inductive scenario with varying concentrations of argon (Ar) seeding. It is found that fusion performance improves before dropping off with increasing Z_eff, while the confinement remains at a high level. Further analysis of transport for these plasmas shows that low-k ion temperature gradient modes dominate the turbulence. The decrease in linear growth rate and resultant fluxes of all channels with increasing Z_eff can be traced to the change in the impurity profile by transport. The improvement in confinement levels off at higher Z_eff. Over the regime of study there is a competition between the suppressed transport and increasing radiation that leads to a peak in the fusion performance at Z_eff ≈ 2.78 for CFETR. Extrinsic impurity seeding to control the divertor heat load will need to be optimized around this value for best fusion performance.
Self-consistent modeling of CFETR baseline scenarios for steady-state operation
Chen, Jiale; Jian, Xiang; Chan, Vincent S.; Li, Zeyu; Deng, Zhao; Li, Guoqiang; Guo, Wenfeng; Shi, Nan; Chen, Xi; CFETR Physics Team
2017-07-01
Integrated modeling for the core plasma is performed to increase confidence in the baseline scenario proposed in the 0D analysis for the China Fusion Engineering Test Reactor (CFETR). The steady-state scenarios are obtained through consistent iterative calculation of equilibrium, transport, and auxiliary heating and current drives (H&CD). Three combinations of H&CD schemes (NB + EC, NB + EC + LH, and EC + LH) are used to sustain scenarios with q_min > 2 and fusion power of ~70-150 MW. The predicted power is within the target range for CFETR Phase I, although the confinement based on physics models is lower than that assumed in the 0D analysis. Ideal MHD stability analysis shows that the scenarios are stable against n = 1-10 ideal modes, where n is the toroidal mode number. Optimization of the RF current drive for the RF-only scenario is also presented. The simulation workflow for the core plasma in this work provides a solid basis for a more extensive research and development effort for the physics design of CFETR.
Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling.
Pera, H; Kleijn, J M; Leermakers, F A M
2014-02-14
To understand how lipid architecture determines lipid bilayer structure and mechanics, we implement a molecularly detailed model that uses self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli k_c and k̄ and the preferred monolayer curvature J_0^m, and also delivers structural membrane properties such as the core thickness and the head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of k_c and the area compression modulus k_A are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for k̄ and J_0^m can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k̄ and J_0^m change sign with relevant parameter changes. Although k̄ is typically negative, positive values do occur, especially at low ionic strengths. We anticipate that these changes lead to unstable membranes, as these become vulnerable to pore formation or disintegration into lipid disks.
Modelling complex networks by random hierarchical graphs
Directory of Open Access Journals (Sweden)
M.Wróbel
2008-06-01
Numerous complex networks contain special patterns, called network motifs. These are specific subgraphs that occur more often than in randomized networks of the Erdős-Rényi type. We choose one of them, the triangle, and build a family of random hierarchical graphs, namely Sierpiński-gasket-based graphs with random "decorations". We calculate the important characteristics of these graphs: average degree, average shortest path length, and small-world graph family characteristics. They depend on the probability of decorations. We analyze the Ising model on our graphs and describe its critical properties using a renormalization-group technique.
A Network Model of Credit Risk Contagion
Directory of Open Access Journals (Sweden)
Ting-Qiang Chen
2012-01-01
A network model of credit risk contagion is presented, in which the behaviors of credit risk holders and of the financial market regulators, as well as the network structure, are considered. By introducing stochastic dominance theory, we discuss the mechanisms through which the degree of individual relationship, individual attitude to credit risk contagion, the individual ability to resist credit risk contagion, the monitoring strength of the financial market regulators, and the network structure each affect credit risk contagion. Several derived and proved propositions are then verified through numerical simulations.
Directory of Open Access Journals (Sweden)
Elston Timothy C
2004-03-01
Abstract Background Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. Results We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS can also be run as a stand-alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. Conclusions We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
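The discrete-variable simulation described above rests on Gillespie's stochastic simulation algorithm. A minimal, self-contained sketch for a two-state gene-expression model follows; the reaction set, rate constants, and function name are illustrative assumptions, not BioNetS code.

```python
import random

def gillespie_mrna(k_on=0.5, k_off=0.5, k_tx=10.0, k_deg=1.0,
                   t_end=50.0, seed=0):
    """Gillespie SSA: a promoter toggles on/off, mRNA is transcribed while
    the promoter is on and degrades at rate k_deg per molecule.
    Returns the event times and mRNA copy numbers (last event may
    slightly exceed t_end)."""
    rng = random.Random(seed)
    t, gene_on, m = 0.0, 1, 0
    times, counts = [0.0], [0]
    while t < t_end:
        rates = [k_on * (1 - gene_on),   # promoter switches on
                 k_off * gene_on,        # promoter switches off
                 k_tx * gene_on,         # transcription: m -> m + 1
                 k_deg * m]              # degradation:   m -> m - 1
        total = sum(rates)
        t += rng.expovariate(total)      # exponential waiting time
        r = rng.random() * total         # pick which reaction fired
        if r < rates[0]:
            gene_on = 1
        elif r < rates[0] + rates[1]:
            gene_on = 0
        elif r < rates[0] + rates[1] + rates[2]:
            m += 1
        else:
            m -= 1
        times.append(t)
        counts.append(m)
    return times, counts
```

Hybrid simulators such as the one described treat abundant species with Langevin dynamics instead, reserving this exact event-by-event scheme for the low-copy-number species.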
Deep space network software cost estimation model
Tausworthe, R. C.
1981-01-01
A parametric software cost estimation model prepared for Jet Propulsion Laboratory (JPL) Deep Space Network (DSN) Data System implementation tasks is described. The resource estimation model modifies and combines a number of existing models. The model calibrates the task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software life-cycle statistics.
Continuum Modeling of Biological Network Formation
Albi, Giacomo
2017-04-10
We present an overview of recent analytical and numerical results for the elliptic–parabolic system of partial differential equations proposed by Hu and Cai, which models the formation of biological transportation networks. The model describes the pressure field using a Darcy type equation and the dynamics of the conductance network under pressure force effects. Randomness in the material structure is represented by a linear diffusion term and conductance relaxation by an algebraic decay term. We first introduce micro- and mesoscopic models and show how they are connected to the macroscopic PDE system. Then, we provide an overview of analytical results for the PDE model, focusing mainly on the existence of weak and mild solutions and analysis of the steady states. The analytical part is complemented by extensive numerical simulations. We propose a discretization based on finite elements and study the qualitative properties of network structures for various parameter values.
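For reference, a commonly studied form of the Hu-Cai elliptic-parabolic system matching the description above (Darcy-type pressure equation, linear diffusion for material randomness, algebraic relaxation) can be written as follows; the exact coefficients and notation in the paper may differ slightly from this sketch:

```latex
% Darcy-type pressure equation with conductance-augmented permeability
-\nabla\cdot\big[(r\,\mathbb{I} + m\otimes m)\,\nabla p\big] = S,
% conductance dynamics: diffusion, activation by the pressure force,
% and algebraic relaxation
\partial_t m = D^2\,\Delta m + c^2\,(m\cdot\nabla p)\,\nabla p - |m|^{2(\gamma-1)}\,m,
```

where m is the vector-valued conductance field, p the pressure, S the source/sink density, r a background permeability, D the diffusion coefficient representing randomness in the material, c an activation parameter, and γ the metabolic exponent governing the relaxation term.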
Stochastic modeling and analysis of telecoms networks
Decreusefond, Laurent
2012-01-01
This book addresses the stochastic modeling of telecommunication networks, introducing the main mathematical tools for that purpose, such as Markov processes, real and spatial point processes and stochastic recursions, and presenting a wide list of results on stability, performance and comparison of systems. The authors propose a comprehensive mathematical construction of the foundations of stochastic network theory: Markov chains and continuous-time Markov chains are extensively studied using an original martingale-based approach. A complete presentation of stochastic recursions from an
Neural networks as models of psychopathology.
Aakerlund, L; Hemmingsen, R
1998-04-01
Neural network modeling is situated between neurobiology, cognitive science, and neuropsychology. The structural and functional resemblance with biological computation has made artificial neural networks (ANN) useful for exploring the relationship between neurobiology and computational performance, i.e., cognition and behavior. This review provides an introduction to the theory of ANN and how they have linked theories from neurobiology and psychopathology in schizophrenia, affective disorders, and dementia.
Decomposed Implicit Models of Piecewise - Linear Networks
Directory of Open Access Journals (Sweden)
J. Brzobohaty
1992-05-01
The general matrix form of the implicit description of a piecewise-linear (PWL) network and the symbolic block diagram of the corresponding circuit model are proposed. Their decomposed forms enable us to determine quite separately the existence of the individual breakpoints of the resultant PWL characteristic and their coordinates using independent network parameters. For the two-diode and three-diode cases, all the attainable types of the PWL characteristic are introduced.
Strazza, Marianne; Maubert, Monique E.; Pirrone, Vanessa; Wigdahl, Brian; Nonnemacher, Michael R.
2016-01-01
Background Numerous systems exist to model the blood-brain barrier (BBB) with the goal of understanding the regulation of passage into the central nervous system (CNS) and the potential impact of selected insults on BBB function. These models typically focus on the intrinsic cellular properties of the BBB, yet studies of peripheral cell migration are often excluded due to technical restraints. New Method This method allows for the study of in vitro cellular transmigration following exposure to any treatment of interest through optimization of co-culture conditions for the human brain microvascular endothelial cells (BMEC) cell line, hCMEC/D3, and primary human peripheral blood mononuclear cells (PBMCs). Results hCMEC/D3 cells form functionally confluent monolayers on collagen coated polytetrafluoroethylene (PTFE) transwell inserts, as assessed by microscopy and tracer molecule (FITC-dextran (FITC-D)) exclusion. Two components of complete hCMEC/D3 media, EBM-2 base-media and hydrocortisone (HC), were determined to be cytotoxic to PBMCs. By combining the remaining components of complete hCMEC/D3 media with complete PBMC media a resulting co-culture media was established for use in hCMEC/D3 – PBMC co-culture functional assays. Comparison with existing methods Through this method, issues of extensive differences in culture media conditions are resolved allowing for treatments and functional assays to be conducted on the two cell populations co-cultured simultaneously. Conclusion Described here is an in vitro co-culture model of the BBB, consisting of the hCMEC/D3 cell line and primary human PBMCs. The co-culture media will now allow for the study of exposure to potential insults to BBB function over prolonged time courses. PMID:27216631
Hazard-consistent ground motions generated with a stochastic fault-rupture model
Energy Technology Data Exchange (ETDEWEB)
Nishida, Akemi, E-mail: nishida.akemi@jaea.go.jp [Center for Computational Science and e-Systems, Japan Atomic Energy Agency, 178-4-4, Wakashiba, Kashiwa, Chiba 277-0871 (Japan); Igarashi, Sayaka, E-mail: igrsyk00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Sakamoto, Shigehiro, E-mail: shigehiro.sakamoto@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Uchiyama, Yasuo, E-mail: yasuo.uchiyama@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Yamamoto, Yu, E-mail: ymmyu-00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Muramatsu, Ken, E-mail: kmuramat@tcu.ac.jp [Department of Nuclear Safety Engineering, Tokyo City University, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo 158-8557 (Japan); Takada, Tsuyoshi, E-mail: takada@load.arch.t.u-tokyo.ac.jp [Department of Architecture, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)
2015-12-15
Conventional seismic probabilistic risk assessments (PRAs) of nuclear power plants consist of probabilistic seismic hazard and fragility curves. Even when earthquake ground-motion time histories are required, they are generated to fit specified response spectra, such as uniform hazard spectra at a specified exceedance probability. These ground motions, however, are not directly linked with seismic-source characteristics. In this context, the authors propose a method based on Monte Carlo simulations to generate a set of input ground-motion time histories to develop an advanced PRA scheme that can explain exceedance probability and the sequence of safety-functional loss in a nuclear power plant. These generated ground motions are consistent with the seismic hazard at a reference site, and their seismic-source characteristics can be identified in detail. Ground-motion generation is conducted for a reference site, Oarai in Japan, the location of a hypothetical nuclear power plant. A total of 200 ground motions are generated, ranging from 700 to 1100 cm/s^2 peak acceleration, which corresponds to a 10^-4 to 10^-5 annual exceedance frequency. In the ground-motion generation, seismic sources are selected according to their hazard contribution at the site, and Monte Carlo simulations with stochastic parameters for the seismic-source characteristics are then conducted until ground motions with the target peak acceleration are obtained. These ground motions are selected so that they are consistent with the hazard. Approximately 110,000 simulations were required to generate 200 ground motions with these peak accelerations. Deviations of peak ground motion acceleration generated for the 1000-1100 cm/s^2 range from 1.5 to 3.0, where the deviation is evaluated with peak ground motion accelerations generated from the same seismic source. Deviations of 1.0 to 3.0 for stress drops, one of the stochastic parameters of the seismic-source characteristics, are required to
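The core accept-reject idea, drawing stochastic source parameters and keeping only motions whose peak acceleration lands in the target band, can be sketched as below. The toy PGA proxy, parameter values, and function name are illustrative assumptions; the actual study runs full stochastic fault-rupture simulations at each trial.

```python
import random

def sample_ground_motions(n_target=20, pga_lo=700.0, pga_hi=1100.0, seed=0):
    """Illustrative accept-reject loop: sample a stochastic source
    parameter (here a stress-drop scaling factor) and accept the trial
    only if the simulated peak ground acceleration (PGA, cm/s^2) falls
    in the target band.  The PGA 'model' is a toy lognormal proxy."""
    rng = random.Random(seed)
    accepted, trials = [], 0
    while len(accepted) < n_target:
        trials += 1
        stress_scale = rng.lognormvariate(0.0, 0.4)  # stochastic source term
        # toy attenuation: median 400 cm/s^2 scaled by stress drop + scatter
        pga = 400.0 * stress_scale * rng.lognormvariate(0.0, 0.5)
        if pga_lo <= pga <= pga_hi:
            accepted.append((stress_scale, pga))
    return accepted, trials
```

The high rejection rate of such a loop is what drives the large trial count reported in the abstract (roughly 110,000 simulations for 200 accepted motions).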
Green Network Planning Model for Optical Backbones
DEFF Research Database (Denmark)
Gutierrez Lopez, Jose Manuel; Riaz, M. Tahir; Jensen, Michael
2010-01-01
on the environment in general. In network planning there are existing planning models focused on QoS provisioning, investment minimization or combinations of both and other parameters. But there is a lack of a model for designing green optical backbones. This paper presents novel ideas to be able to define...
Empirical generalization assessment of neural network models
DEFF Research Database (Denmark)
Larsen, Jan; Hansen, Lars Kai
1995-01-01
This paper addresses the assessment of generalization performance of neural network models by use of empirical techniques. We suggest to use the cross-validation scheme combined with a resampling technique to obtain an estimate of the generalization performance distribution of a specific model...
Nikelshpur, Dmitry O.
2014-01-01
Similar to mammalian brains, Artificial Neural Networks (ANN) are universal approximators, capable of yielding near-optimal solutions to a wide assortment of problems. ANNs are used in many fields including medicine, internet security, engineering, retail, robotics, warfare, intelligence control, and finance. "ANNs have a tendency to get…
Bhardwaj, Nitin; Yan, Koon-Kiu; Gerstein, Mark B.
2010-01-01
Gene regulatory networks have been shown to share some common aspects with commonplace social governance structures. Thus, we can get some intuition into their organization by arranging them into well-known hierarchical layouts. These hierarchies, in turn, can be placed between the extremes of autocracies, with well-defined levels and clear chains of command, and democracies, without such defined levels and with more co-regulatory partnerships between regulators. In general, the presence of partnerships decreases the variation in information flow amongst nodes within a level, more evenly distributing stress. Here we study various regulatory networks (transcriptional, modification, and phosphorylation) for five diverse species, Escherichia coli to human. We specify three levels of regulators—top, middle, and bottom—which collectively govern the non-regulator targets lying in the lowest fourth level. We define quantities for nodes, levels, and entire networks that measure their degree of collaboration and autocratic vs. democratic character. We show individual regulators have a range of partnership tendencies: Some regulate their targets in combination with other regulators in local instantiations of democratic structure, whereas others regulate mostly in isolation, in more autocratic fashion. Overall, we show that in all networks studied the middle level has the highest collaborative propensity and coregulatory partnerships occur most frequently amongst midlevel regulators, an observation that has parallels in corporate settings where middle managers must interact most to ensure organizational effectiveness. There is, however, one notable difference between networks in different species: The amount of collaborative regulation and democratic character increases markedly with overall genomic complexity. PMID:20351254
Models of network reliability analysis, combinatorics, and Monte Carlo
Gertsbakh, Ilya B
2009-01-01
Unique in its approach, Models of Network Reliability: Analysis, Combinatorics, and Monte Carlo provides a brief introduction to Monte Carlo methods along with a concise exposition of reliability theory ideas. From there, the text investigates a collection of principal network reliability models, such as terminal connectivity for networks with unreliable edges and/or nodes, network lifetime distribution in the process of its destruction, network stationary behavior for renewable components, importance measures of network elements, reliability gradient, and network optimal reliability synthesis
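Terminal connectivity with unreliable edges, one of the principal models named above, lends itself to a crude Monte Carlo estimate: sample edge failures, then check s-t connectivity. A minimal sketch (the toy four-node network and failure probability are illustrative, not taken from the book):

```python
import random

def mc_terminal_connectivity(nodes, edges, p_fail, s, t, trials=20000, seed=1):
    """Monte Carlo estimate of P(s and t remain connected) when each
    edge fails independently with probability p_fail."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        adj = {n: [] for n in nodes}
        for u, v in edges:
            if rng.random() >= p_fail:          # edge survives this trial
                adj[u].append(v)
                adj[v].append(u)
        seen, stack = {s}, [s]                  # DFS from s over survivors
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        hits += t in seen
    return hits / trials

# Square with a brace: two edge-disjoint paths from node 0 to node 3.
est = mc_terminal_connectivity(
    nodes=[0, 1, 2, 3],
    edges=[(0, 1), (1, 3), (0, 2), (2, 3), (1, 2)],
    p_fail=0.1, s=0, t=3)
print(round(est, 3))  # near the exact terminal reliability (> 0.96)
```

The crude estimator is exactly what the book's more refined methods (importance measures, destruction-process spectra) improve upon.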
Delay and Disruption Tolerant Networking MACHETE Model
Segui, John S.; Jennings, Esther H.; Gao, Jay L.
2011-01-01
To verify satisfaction of communication requirements imposed by unique missions, as early as 2000, the Communications Networking Group at the Jet Propulsion Laboratory (JPL) saw the need for an environment to support interplanetary communication protocol design, validation, and characterization. JPL's Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE), described in Simulator of Space Communication Networks (NPO-41373) NASA Tech Briefs, Vol. 29, No. 8 (August 2005), p. 44, combines various commercial, non-commercial, and in-house custom tools for simulation and performance analysis of space networks. The MACHETE environment supports orbital analysis, link budget analysis, communications network simulations, and hardware-in-the-loop testing. As NASA is expanding its Space Communications and Navigation (SCaN) capabilities to support planned and future missions, building infrastructure to maintain services and developing enabling technologies, an important and broader role is seen for MACHETE in design-phase evaluation of future SCaN architectures. To support evaluation of the developing Delay Tolerant Networking (DTN) field and its applicability for space networks, JPL developed MACHETE models for DTN Bundle Protocol (BP) and Licklider/Long-haul Transmission Protocol (LTP). DTN is an Internet Research Task Force (IRTF) architecture providing communication in and/or through highly stressed networking environments such as space exploration and battlefield networks. Stressed networking environments include those with intermittent (predictable and unknown) connectivity, large and/or variable delays, and high bit error rates. To provide its services over existing domain specific protocols, the DTN protocols reside at the application layer of the TCP/IP stack, forming a store-and-forward overlay network. The key capabilities of the Bundle Protocol include custody-based reliability, the ability to cope with intermittent connectivity
A comprehensive Network Security Risk Model for process control networks.
Henry, Matthew H; Haimes, Yacov Y
2009-02-01
The risk of cyber attacks on process control networks (PCN) is receiving significant attention due to the potentially catastrophic extent to which PCN failures can damage the infrastructures and commodity flows that they support. Risk management addresses the coupled problems of (1) reducing the likelihood that cyber attacks would succeed in disrupting PCN operation and (2) reducing the severity of consequences in the event of PCN failure or manipulation. The Network Security Risk Model (NSRM) developed in this article provides a means of evaluating the efficacy of candidate risk management policies by modeling the baseline risk and assessing expectations of risk after the implementation of candidate measures. Where existing risk models fall short of providing adequate insight into the efficacy of candidate risk management policies due to shortcomings in their structure or formulation, the NSRM provides model structure and an associated modeling methodology that captures the relevant dynamics of cyber attacks on PCN for risk analysis. This article develops the NSRM in detail in the context of an illustrative example.
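The core bookkeeping of such a model, comparing baseline risk against expected risk after a candidate measure, reduces to expected-loss sums over attack scenarios. A minimal sketch (the scenario probabilities, costs, and the halving effect of the candidate measure are invented for illustration; this is not the NSRM itself):

```python
# Toy attack scenarios: (probability of success, consequence cost in $).
baseline = [(0.30, 500_000), (0.10, 2_000_000), (0.05, 5_000_000)]

def expected_risk(scenarios):
    """Expected loss = sum of P(success) * consequence over scenarios."""
    return sum(p * c for p, c in scenarios)

# A candidate measure (say, network segmentation) assumed to halve the
# success odds of the first two attack paths only.
mitigated = [(0.15, 500_000), (0.05, 2_000_000), (0.05, 5_000_000)]

print(round(expected_risk(baseline)))   # 600000
print(round(expected_risk(mitigated)))  # 425000
```

Comparing the two sums is the decision criterion: the measure is worth its cost if the risk reduction exceeds the implementation expense.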
Personalized Learning Network Teaching Model
Feng, Zhou
The salient features of adaptive learning systems are expounded: personalized learning is the key to making an adaptive learning system adapt to its learners. From the perspective of design theory, an individualized model for adaptive learning-system design is put forward and, using data mining techniques, a model of a personalized adaptive learning system is initially established.
Directory of Open Access Journals (Sweden)
P.-P. Mathieu
2012-08-01
The terrestrial biosphere is currently a strong sink for anthropogenic CO2 emissions. Through the radiative properties of CO2, the strength of this sink has a direct influence on the radiative budget of the global climate system. The accurate assessment of this sink and its evolution under a changing climate is, hence, paramount for any efficient management strategies of the terrestrial carbon sink to avoid dangerous climate change. Unfortunately, simulations of carbon and water fluxes with terrestrial biosphere models exhibit large uncertainties. A considerable fraction of this uncertainty reflects uncertainty in the parameter values of the process formulations within the models. This paper describes the systematic calibration of the process parameters of a terrestrial biosphere model against two observational data streams: remotely sensed FAPAR (fraction of absorbed photosynthetically active radiation) provided by the MERIS (ESA's Medium Resolution Imaging Spectrometer) sensor, and in situ measurements of atmospheric CO2 provided by the GLOBALVIEW flask sampling network. We use the Carbon Cycle Data Assimilation System (CCDAS) to systematically calibrate some 70 parameters of the terrestrial BETHY (Biosphere Energy Transfer Hydrology) model. The simultaneous assimilation of all observations provides parameter estimates and uncertainty ranges that are consistent with the observational information. In a subsequent step these parameter uncertainties are propagated through the model to uncertainty ranges for predicted carbon fluxes. We demonstrate the consistent assimilation at global scale, where the global MERIS FAPAR product and atmospheric CO2 are used simultaneously. The assimilation improves the match to independent observations. We quantify how MERIS data improve the accuracy of the current and future (net and gross) carbon flux estimates (within and beyond the assimilation period). We further demonstrate the use of an interactive mission benefit
Xiong, Qingrong; Baychev, Todor; Jivkov, Andrey
2016-01-01
Pore network models have been applied widely for simulating a variety of different physical and chemical processes, including phase exchange, non-Newtonian displacement, non-Darcy flow, reactive transport and thermodynamically consistent oil layers. The realism of such modelling, i.e. the credibility of their predictions, depends to a large extent on the quality of the correspondence between the pore space of a given medium and the pore network constructed as its representation. The main expe...
Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling
Pera, H.; Kleijn, J. M.; Leermakers, F. A. M.
2014-02-01
To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses the self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli k_c and \bar{k} and the preferred monolayer curvature J_0^m, and also delivers structural membrane properties like the core thickness, and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic, phosphatidylcholine (PC), and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of k_c and the area compression modulus k_A are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for \bar{k} and J_0^m can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both \bar{k} and J_0^m change sign with relevant parameter changes. This is notably the case when PE is combined with long lipid tails, which hints towards the stability of inverse hexagonal phases at the cost of the bilayer topology. To prevent the destabilisation of bilayers, PG lipids can be mixed into these PC or PE lipid membranes. Progressive loading of bilayers with PG lipids leads to highly charged membranes, resulting in J_0^m ≫ 0, especially at low ionic strengths. We anticipate that these changes lead to unstable membranes as these become vulnerable to pore formation or disintegration into lipid disks.
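Israelachvili's packing parameter invoked above is P = v/(a0*lc); values below 1/3 favour spherical micelles, 1/2 to 1 bilayers, and above 1 inverted phases. A small sketch of the classification (the lipid volumes and areas below are textbook-scale guesses, not the paper's fitted values):

```python
# Packing parameter P = v / (a0 * lc): tail volume v (nm^3), head-group
# area a0 (nm^2), tail length lc (nm). Values are illustrative estimates.
def packing_parameter(v, a0, lc):
    return v / (a0 * lc)

def preferred_aggregate(P):
    if P < 1 / 3:
        return "spherical micelles"
    if P < 1 / 2:
        return "cylindrical micelles"
    if P <= 1:
        return "bilayers / vesicles"
    return "inverted (hexagonal) phases"

results = {}
for name, v, a0, lc in [("PC-like", 1.05, 0.70, 1.75),   # larger head group
                        ("PE-like", 1.05, 0.55, 1.75)]:  # smaller head group
    P = packing_parameter(v, a0, lc)
    results[name] = (round(P, 2), preferred_aggregate(P))
    print(name, *results[name])
```

The smaller PE head group pushes P above 1, echoing the abstract's point that PE compositions hint towards inverse hexagonal phases at the cost of bilayer topology.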
Toward self-consistent tectono-magmatic numerical model of rift-to-ridge transition
Gerya, Taras; Bercovici, David; Liao, Jie
2017-04-01
Natural data from modern and ancient lithospheric extension systems suggest a three-dimensional (3D) character of deformation and a complex relationship between magmatism and tectonics during the entire rift-to-ridge transition. Therefore, self-consistent high-resolution 3D magmatic-thermomechanical numerical approaches stand as a minimum complexity requirement for modeling and understanding of this transition. Here we present results from our new high-resolution 3D finite-difference marker-in-cell rift-to-ridge models, which account for magmatic accretion of the crust and use non-linear strain-weakened visco-plastic rheology of rocks that couples brittle/plastic failure and ductile damage caused by grain size reduction. Numerical experiments suggest that nucleation of rifting and ridge-transform patterns are decoupled in both space and time. At intermediate stages, the two patterns can coexist and interact, which triggers development of detachment faults, failed rift arms, hyper-extended margins and oblique proto-transforms. En echelon rift patterns typically develop in the brittle upper-middle crust, whereas proto-ridge and proto-transform structures nucleate in the lithospheric mantle. These deep proto-structures propagate upward, inter-connect and rotate toward a mature orthogonal ridge-transform pattern on the timescale of millions of years during incipient thermal-magmatic accretion of the new oceanic-like lithosphere. Ductile damage of the extending lithospheric mantle caused by grain size reduction assisted by Zener pinning plays a critical role in the rift-to-ridge transition by stabilizing detachment faults and transform structures. Numerical results compare well with observations from incipient spreading regions and passive continental margins.
Self-consistent model of a solid for the description of lattice and magnetic properties
Energy Technology Data Exchange (ETDEWEB)
Balcerzak, T., E-mail: t_balcerzak@uni.lodz.pl [Department of Solid State Physics, Faculty of Physics and Applied Informatics, University of Łódź, ulica Pomorska 149/153, 90-236 Łódź (Poland); Szałowski, K., E-mail: kszalowski@uni.lodz.pl [Department of Solid State Physics, Faculty of Physics and Applied Informatics, University of Łódź, ulica Pomorska 149/153, 90-236 Łódź (Poland); Jaščur, M. [Department of Theoretical Physics and Astrophysics, Faculty of Science, P. J. Šáfárik University, Park Angelinum 9, 041 54 Košice (Slovakia)
2017-03-15
In the paper a self-consistent theoretical description of the lattice and magnetic properties of a model system with magnetoelastic interaction is presented. The dependence of magnetic exchange integrals on the distance between interacting spins is assumed, which couples the magnetic and the lattice subsystem. The framework is based on summation of the Gibbs free energies for the lattice subsystem and magnetic subsystem. On the basis of minimization principle for the Gibbs energy, a set of equations of state for the system is derived. These equations of state combine the parameters describing the elastic properties (relative volume deformation) and the magnetic properties (magnetization changes). The formalism is extensively illustrated with the numerical calculations performed for a system of ferromagnetically coupled spins S=1/2 localized at the sites of simple cubic lattice. In particular, the significant influence of the magnetic subsystem on the elastic properties is demonstrated. It manifests itself in significant modification of such quantities as the relative volume deformation, thermal expansion coefficient or isothermal compressibility, in particular, in the vicinity of the magnetic phase transition. On the other hand, the influence of lattice subsystem on the magnetic one is also evident. It takes, for example, the form of dependence of the critical (Curie) temperature and magnetization itself on the external pressure, which is thoroughly investigated.
A self-consistent first-principle based approach to model carrier mobility in organic materials
Energy Technology Data Exchange (ETDEWEB)
Meded, Velimir; Friederich, Pascal; Symalla, Franz; Neumann, Tobias; Danilov, Denis; Wenzel, Wolfgang [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)
2015-12-31
Transport through thin organic amorphous films, utilized in OLEDs and OPVs, has been a challenge to model by using ab-initio methods. Charge carrier mobility depends strongly on the disorder strength and reorganization energy, both of which are significantly affected by the details of the environment of each molecule. Here we present a multi-scale approach to describe carrier mobility in which the materials morphology is generated using DEPOSIT, a Monte Carlo based atomistic simulation approach, or, alternatively, by molecular dynamics calculations performed with GROMACS. From this morphology we extract the material-specific hopping rates, as well as the on-site energies, using a fully self-consistent embedding approach to compute the electronic structure parameters, which are then used in an analytic expression for the carrier mobility. We apply this strategy to compute the carrier mobility for a set of widely studied molecules and obtain good agreement between experiment and theory varying over several orders of magnitude in the mobility without any freely adjustable parameters. The work focuses on the quantum mechanical step of the multi-scale workflow, explains the concept along with the recently published workflow optimization, which combines density functional with semi-empirical tight binding approaches. This is followed by discussion of the analytic formula and its agreement with established percolation fits as well as kinetic Monte Carlo numerical approaches. Finally, we sketch a unified multi-disciplinary approach that integrates materials science simulation and high performance computing, developed within the EU project MMM@HPC.
Modelling Users' Trust in Online Social Networks
Directory of Open Access Journals (Sweden)
Iacob Cătoiu
2014-02-01
Previous studies (McKnight, Lankton and Tripp, 2011; Liao, Lui and Chen, 2011) have shown the crucial role of trust when choosing to disclose sensitive information online. This is the case of online social networks users, who must disclose a certain amount of personal data in order to gain access to these online services. Taking into account the privacy calculus model and the risk/benefit ratio, we propose a model of users' trust in online social networks with four variables. We have adapted metrics for the purpose of our study and we have assessed their reliability and validity. We use a Partial Least Squares (PLS) based structural equation modelling analysis, which validated all our initial assumptions, indicating that our three predictors (privacy concerns, perceived benefits and perceived risks) explain 48% of the variation of users' trust in online social networks, the resulting variable of our study. We also discuss the implications and further research opportunities of our study.
Model Microvascular Networks Can Have Many Equilibria.
Karst, Nathaniel J; Geddes, John B; Carr, Russell T
2017-03-01
We show that large microvascular networks with realistic topologies, geometries, boundary conditions, and constitutive laws can exhibit many steady-state flow configurations. This is in direct contrast to most previous studies which have assumed, implicitly or explicitly, that a given network can only possess one equilibrium state. While our techniques are general and can be applied to any network, we focus on two distinct network types that model human tissues: perturbed honeycomb networks and random networks generated from Voronoi diagrams. We demonstrate that the disparity between observed and predicted flow directions reported in previous studies might be attributable to the presence of multiple equilibria. We show that the pathway effect, in which hematocrit is steadily increased along a series of diverging junctions, has important implications for equilibrium discovery, and that our estimates of the number of equilibria supported by these networks are conservative. If a more complete description of the plasma skimming effect that captures red blood cell allocation at junctions with high feed hematocrit were to be obtained empirically, then the number of equilibria found by our approach would at worst remain the same and would in all likelihood increase significantly.
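The basic phenomenon, several steady states reachable from different starting conditions in a nonlinear flow balance, can be illustrated on a one-variable toy fixed-point problem (purely conceptual; the actual microvascular equilibria require a full network solve with hematocrit-dependent constitutive laws):

```python
import math

def iterate(f, x0, n=200):
    """Fixed-point iteration: repeatedly apply f starting from x0."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x

# Toy nonlinear balance x = tanh(2x): three fixed points (0, +x*, -x*),
# and which stable equilibrium is reached depends only on the start.
f = lambda x: math.tanh(2.0 * x)

a = iterate(f, +0.5)
b = iterate(f, -0.5)
print(round(a, 4), round(b, 4))  # 0.9575 -0.9575: two distinct stable equilibria
```

Running the same "solver" from two initial guesses returns two different equilibria, which is exactly why equilibrium discovery (not just equilibrium solving) matters in the paper's setting.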
PREDIKSI FOREX MENGGUNAKAN MODEL NEURAL NETWORK (Forex Prediction Using a Neural Network Model)
Directory of Open Access Journals (Sweden)
R. Hadapiningradja Kusumodestoni
2015-11-01
Prediction is one of the most important techniques in running a forex business. The decision to predict matters greatly, because prediction helps estimate the forex value at some future time and can therefore reduce the risk of loss. The aim of this study is to predict forex using a neural network model on per-minute time-series data, in order to measure prediction accuracy and thus reduce the risk of running a forex business. The research method comprises data collection followed by training, learning, and testing with a neural network. After evaluation, the results show that the neural network algorithm is able to predict forex with a prediction accuracy of 0.431 +/- 0.096, so this prediction can help reduce the risk of running a forex business. Keywords: prediction, forex, neural network.
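As a conceptual sketch of one-step-ahead time-series prediction of this kind, here is a minimal linear predictor trained by gradient descent on a synthetic series (a stand-in only: the paper's actual network architecture and forex data are not specified in the abstract):

```python
import math

# Synthetic per-minute "rate": a level plus a slow oscillation stands in
# for real forex ticks, which this sketch does not fetch.
series = [1.10 + 0.01 * math.sin(0.3 * t) for t in range(300)]

# One-step-ahead linear predictor x_hat[t+1] = w*x[t] + b, trained by
# stochastic gradient descent on squared error.
w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    for t in range(len(series) - 1):
        x, y = series[t], series[t + 1]
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

preds = [w * series[t] + b for t in range(len(series) - 1)]
mae = sum(abs(p - y) for p, y in zip(preds, series[1:])) / len(preds)
print(round(mae, 5))  # small one-step error on the synthetic series
```

The train/test error computed at the end plays the role of the accuracy figure reported in the abstract; a real study would hold out unseen data for that measurement.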
Artificial neural network cardiopulmonary modeling and diagnosis
Kangas, Lars J.; Keller, Paul E.
1997-01-01
The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis.
Game-Theoretic Models of Information Overload in Social Networks
Borgs, Christian; Chayes, Jennifer; Karrer, Brian; Meeder, Brendan; Ravi, R.; Reagans, Ray; Sayedi, Amin
We study the effect of information overload on user engagement in an asymmetric social network like Twitter. We introduce simple game-theoretic models that capture rate competition between celebrities producing updates in such networks where users non-strategically choose a subset of celebrities to follow based on the utility derived from high quality updates as well as disutility derived from having to wade through too many updates. Our two variants model the two behaviors of users dropping some potential connections (followership model) or leaving the network altogether (engagement model). We show that under a simple formulation of celebrity rate competition, there is no pure strategy Nash equilibrium under the first model. We then identify special cases in both models when pure rate equilibria exist for the celebrities: For the followership model, we show existence of a pure rate equilibrium when there is a global ranking of the celebrities in terms of the quality of their updates to users. This result also generalizes to the case when there is a partial order consistent with all the linear orders of the celebrities based on their qualities to the users. Furthermore, these equilibria can be computed in polynomial time. For the engagement model, pure rate equilibria exist when all users are interested in the same number of celebrities, or when they are interested in at most two. Finally, we also give a finite though inefficient procedure to determine if pure equilibria exist in the general case of the followership model.
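Non-existence of a pure-strategy Nash equilibrium, as claimed for the followership model, can be checked by brute force on small games: enumerate all strategy profiles and test mutual best responses. A sketch on a toy two-celebrity rate game with matching-pennies payoffs (illustrative payoffs, not the paper's utility functions):

```python
from itertools import product

def pure_equilibria(strats, u1, u2):
    """Brute-force all pure-strategy Nash equilibria of a 2-player game."""
    eqs = []
    for s1, s2 in product(strats, strats):
        best1 = all(u1(s1, s2) >= u1(a, s2) for a in strats)
        best2 = all(u2(s1, s2) >= u2(s1, b) for b in strats)
        if best1 and best2:
            eqs.append((s1, s2))
    return eqs

# Toy "rate competition" with cycling best responses (matching-pennies
# structure): each celebrity picks a low or a high update rate.
LOW, HIGH = 1, 5
u1 = lambda s1, s2: 1 if s1 == s2 else -1   # celebrity 1 gains by matching
u2 = lambda s1, s2: -u1(s1, s2)             # celebrity 2 gains by differing

print(pure_equilibria([LOW, HIGH], u1, u2))  # [] -> no pure equilibrium
```

The empty list certifies that best responses cycle, the same structural reason the paper gives for non-existence in its first model.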
A Brownian model for multiclass queueing networks with finite buffers
Dai, Wanyang
2002-07-01
This paper is concerned with the heavy traffic behavior of a type of multiclass queueing networks with finite buffers. The network consists of d single server stations and is populated by K classes of customers. Each station has a finite capacity waiting buffer and operates under first-in first-out (FIFO) service discipline. The network is assumed to have a feedforward routing structure under a blocking scheme. A server stops working when the downstream buffer is full. The focus of this paper is on the Brownian model formulation. More specifically, the approximating Brownian model for the networks is proposed via the method of showing a pseudo-heavy-traffic limit theorem which states that the limit process is a reflecting Brownian motion (RBM) if the properly normalized d-dimensional workload process converges in distribution to a continuous process. Numerical algorithm with finite element method has been designed to effectively compute the solution of the Brownian model (W. Dai, Ph.D. thesis (1996); X. Shen et al. The finite element method for computing the stationary distribution of an SRBM in a hypercube with applications to finite buffer queueing networks, under revision for Queueing Systems).
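A reflecting Brownian motion on a bounded interval, the limit object described above, can be approximated by a discretized random walk whose increments are pushed back into the buffer at each step (a one-step Skorokhod-map approximation; the drift, variance, and buffer size below are arbitrary):

```python
import random

def reflected_walk(drift, sigma, cap, steps=10000, dt=0.01, seed=7):
    """Random-walk approximation of Brownian motion with drift, clipped
    back into [0, cap] each step: 0 is the empty buffer, cap the finite
    buffer capacity. (Stepwise clipping approximates true reflection.)"""
    rng = random.Random(seed)
    w, path = 0.0, []
    for _ in range(steps):
        w += drift * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
        w = min(max(w, 0.0), cap)
        path.append(w)
    return path

path = reflected_walk(drift=-0.5, sigma=1.0, cap=2.0)
print(0.0 <= min(path) and max(path) <= 2.0)  # True: workload stays in buffer
```

Histogramming such paths gives a crude stand-in for the stationary distribution that the paper computes properly with its finite element method.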
Ising models of strongly coupled biological networks with multivariate interactions
Merchan, Lina; Nemenman, Ilya
2013-03-01
Biological networks consist of a large number of variables that can be coupled by complex multivariate interactions. However, several neuroscience and cell biology experiments have reported that observed statistics of network states can be approximated surprisingly well by maximum entropy models that constrain correlations only within pairs of variables. We would like to verify if this reduction in complexity results from intricacies of biological organization, or if it is a more general attribute of these networks. We generate random networks with p-spin (p > 2) interactions, with N spins and M interaction terms. The probability distribution of the network states is then calculated and approximated with a maximum entropy model based on constraining pairwise spin correlations. Depending on the M/N ratio and the strength of the interaction terms, we observe a transition where the pairwise approximation is very good to a region where it fails. This resembles the sat-unsat transition in constraint satisfaction problems. We argue that the pairwise model works when the number of highly probable states is small. We argue that many biological systems must operate in a strongly constrained regime, and hence we expect the pairwise approximation to be accurate for a wide class of problems. This research has been partially supported by the James S McDonnell Foundation grant No.220020321.
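For a handful of spins, the p-spin construction can be carried out exactly: enumerate all states, compute Boltzmann weights, and read off the pairwise correlations that a maximum-entropy pairwise model would be constrained to match. A sketch with invented triplet couplings (the fitting of the pairwise model itself is omitted):

```python
import math
from itertools import combinations, product

# Exact Boltzmann distribution of a small p = 3 Ising model:
# H(s) = -sum over chosen triplets (i,j,k) of J_ijk * s_i * s_j * s_k.
N = 4
couplings = {(0, 1, 2): 1.0, (0, 1, 3): -0.7, (1, 2, 3): 0.5}

def energy(s):
    return -sum(J * s[i] * s[j] * s[k] for (i, j, k), J in couplings.items())

states = list(product([-1, 1], repeat=N))
weights = [math.exp(-energy(s)) for s in states]
Z = sum(weights)                             # partition function
probs = {s: w / Z for s, w in zip(states, weights)}

# Pairwise correlations <s_i s_j>: the only statistics a maximum-entropy
# pairwise model would be constrained to reproduce.
corr = {}
for i, j in combinations(range(N), 2):
    corr[(i, j)] = sum(p * s[i] * s[j] for s, p in probs.items())
    print(i, j, round(corr[(i, j)], 4))
```

Comparing this exact distribution to the pairwise maximum-entropy fit (e.g. via Kullback-Leibler divergence) is the quantity whose behavior across the M/N ratio produces the transition described above.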
PROJECT ACTIVITY ANALYSIS WITHOUT THE NETWORK MODEL
Directory of Open Access Journals (Sweden)
S. Munapo
2012-01-01
ENGLISH ABSTRACT: This paper presents a new procedure for analysing and managing activity sequences in projects. The new procedure determines critical activities, critical path, start times, free floats, crash limits, and other useful information without the use of the network model. Even though network models have been successfully used in project management so far, there are weaknesses associated with their use. A network is not easy to generate, and the dummies that are usually associated with it make the network diagram complex, while dummy activities have no meaning in the original project management problem. The network model for projects can be avoided while still obtaining all the useful information that is required for project management. All that is required are the activities, their accurate durations, and their predecessors.
AFRIKAANSE OPSOMMING (translated): The research describes a new method for the analysis and management of the sequential activities of projects. The proposed method determines critical activities, the critical path, start times, float, crash limits, and other quantities without the use of a network model. The method performs satisfactorily in practice, and bypasses the administrative burden of the traditional network models.
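The procedure's central claim, that critical activities and floats follow from the activity list alone, can be sketched with a plain forward/backward recursion over (activity, duration, predecessors) records, with no network diagram or dummy activities (the four-activity project is invented; this is standard critical-path arithmetic, not necessarily the paper's exact algorithm):

```python
# Activity list: name -> (duration, predecessors).
activities = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

earliest = {}
def es(a):                      # forward pass: earliest start
    if a not in earliest:
        earliest[a] = max((es(p) + activities[p][0]
                           for p in activities[a][1]), default=0)
    return earliest[a]

finish = max(es(a) + activities[a][0] for a in activities)

latest = {}
def ls(a):                      # backward pass: latest start
    if a not in latest:
        succ = [ls(s) for s in activities if a in activities[s][1]]
        latest[a] = (min(succ) if succ else finish) - activities[a][0]
    return latest[a]

for a in activities:
    flt = ls(a) - es(a)         # total float; zero float means critical
    print(a, es(a), ls(a), "critical" if flt == 0 else f"float {flt}")
```

Here A, C, and D come out critical (project duration 8) and B carries two units of float, all computed directly from the activity records.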
Quantum-Like Bayesian Networks for Modeling Decision Making
Directory of Open Access Journals (Sweden)
Catarina eMoreira
2016-01-01
In this work, we explore an alternative quantum structure to perform quantum probabilistic inferences to accommodate the paradoxical findings of the Sure Thing Principle. We propose a Quantum-Like Bayesian Network, which consists of replacing classical probabilities by quantum probability amplitudes. However, since this approach suffers from the problem of exponential growth of quantum parameters, we also propose a similarity heuristic that automatically fits quantum parameters through vector similarities. This makes the proposed model general and predictive, in contrast to the current state-of-the-art models, which cannot be generalized for more complex decision scenarios and only provide an explanatory nature for the observed paradoxes. In the end, the model that we propose is a nonparametric method for estimating inference effects from a statistical point of view. It is a statistical model that is simpler than the previous quantum dynamic and quantum-like models proposed in the literature. We tested the proposed network with several empirical data from the literature, mainly from the Prisoner's Dilemma game and the Two Stage Gambling game. The results obtained show that the proposed quantum Bayesian Network is a general method that can accommodate violations of the laws of classical probability theory and make accurate predictions regarding human decision-making in these scenarios.
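The key mechanical difference from a classical Bayesian network is that path probabilities are replaced by complex amplitudes, so an interference term can push the inferred probability away from the law of total probability. A minimal sketch (the branch probabilities and the phase are illustrative, not fitted to the Prisoner's Dilemma data):

```python
import cmath, math

# Two unobserved branches (e.g. opponent cooperates / defects), each
# contributing P(branch) * P(action | branch); numbers are illustrative.
p_paths = [0.5 * 0.7, 0.5 * 0.5]

# Classical law of total probability: path probabilities add.
classical = sum(p_paths)                      # 0.60

# Quantum-like: amplitudes add with a relative phase, then are squared.
theta = 2.5                                   # phase: the fitted parameter
amps = [math.sqrt(p_paths[0]),
        math.sqrt(p_paths[1]) * cmath.exp(1j * theta)]
quantum = abs(sum(amps)) ** 2                 # interference term appears

print(round(classical, 2), round(quantum, 3))  # 0.6 0.126
```

The phase theta is the extra degree of freedom that the paper's similarity heuristic estimates automatically instead of fitting it per experiment.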
DEFF Research Database (Denmark)
Sogachev, Andrey; Kelly, Mark C.; Leclerc, Monique Y.
2012-01-01
A self-consistent two-equation closure treating buoyancy and plant drag effects has been developed, through consideration of the behaviour of the supplementary equation for the length-scale-determining variable in homogeneous turbulent flow. Being consistent with the canonical flow regimes of gri...
Algebraic Statistics for Network Models
2014-02-19
use algebra, combinatorics and Markov bases to give a constructive way of answering this question for ERGMs of interest. Question 2: How do we model...for every function. 06/06/13 Petrović. Manuscripts 8, 10. Invited lecture at the Scientific Session on Commutative Algebra and Combinatorics at the
Network Modeling and Simulation (NEMSE)
2013-07-01
"Prioritized Packet Fragmentation", IEEE Trans. Multimedia, Oct. 2012 [13 SYSENG]. Defense Acquisition Guidebook, Chapter 4 System Engineering, and...2012 IEEE High Performance Extreme Computing Conference (HPEC) poster session [1 Ross]. Motivation: Air Force Research Lab needs o Capability...is virtual. These eight virtualizations were: System-in-the-Loop (SITL) using OPNET Modeler, COPE, Field Programmable Gate Array (FPGA) Physical
Microencapsulation of model oil in wall matrices consisting of SPI and maltodextrins
Directory of Open Access Journals (Sweden)
Moshe Rosenberg
2016-01-01
Microencapsulation can provide means to entrap, protect and deliver nutritional lipids and related compounds that are susceptible to deterioration. The encapsulation of high lipid loads represents a challenge. The research has investigated the encapsulation by spray drying of a model oil, at a core load of 25–60%, in wall systems consisting of 2.5–10% SPI and 17.5–10% maltodextrin. In general, core-in-wall emulsions exhibited unimodal PSD and a mean particle diameter < 0.5 µm. Dry microcapsules ranged in diameter from about 5 to less than 50 µm and exhibited only a limited extent of surface indentation. Core domains, in the form of protein-coated droplets, were embedded throughout the wall matrices and no visible cracks connecting these domains with the environment could be detected. Core retention ranged from 72.2 to 95.9% and was significantly affected (p < 0.05) by a combined influence of wall composition and initial core load. Microencapsulation efficiency, MEE, ranged from 25.4 to 91.6% and from 12.4 to 91.4% after 5 and 30 min of extraction, respectively (p < 0.05). MEE was significantly influenced by wall composition, extraction time, initial core load and DE value of the maltodextrins. Results indicated that wall solutions containing as low as 2.5% SPI and 17.5% maltodextrin were very effective as microencapsulating agents for high oil loads. Results highlighted the functionality of SPI as a microencapsulating agent in food applications and indicated the importance of carefully designing the composition of core-in-wall emulsions.
Ferrier, K.; Mitrovica, J. X.
2015-12-01
In sedimentary deltas and fans, sea-level changes are strongly modulated by the deposition and compaction of marine sediment. The deposition of sediment and incorporation of water into the sedimentary pore space reduces sea level by increasing the elevation of the seafloor, which reduces the thickness of sea water above the bed. In a similar manner, the compaction of sediment and purging of water out of the sedimentary pore space increases sea level by reducing the elevation of the seafloor, which increases the thickness of sea water above the bed. Here we show how one can incorporate the effects of sediment deposition and compaction into the global, gravitationally self-consistent sea-level model of Dalca et al. (2013). Incorporating sediment compaction requires accounting for only one additional quantity that had not been accounted for in Dalca et al. (2013): the mean porosity in the sediment column. We provide a general analytic framework for global sea-level changes including sediment deposition and compaction, and we demonstrate how sea level responds to deposition and compaction under one simple parameterization for compaction. The compaction of sediment generates changes in sea level only by changing the elevation of the seafloor. That is, sediment compaction does not affect the mass load on the crust, and therefore does not generate perturbations in crustal elevation or the gravity field that would further perturb sea level. These results have implications for understanding sedimentary effects on sea-level changes and thus for disentangling the various drivers of sea-level change. References: Dalca A.V., Ferrier K.L., Mitrovica J.X., Perron J.T., Milne G.A., Creveling J.R., 2013. On postglacial sea level - III. Incorporating sediment redistribution. Geophysical Journal International, doi: 10.1093/gji/ggt089.
Security Modeling on the Supply Chain Networks
Directory of Open Access Journals (Sweden)
Marn-Ling Shing
2007-10-01
Full Text Available In order to keep the price down, a purchaser sends out requests for quotation to a group of suppliers in a supply chain network. The purchaser will then choose the supplier with the best combination of price and quality. A potential supplier will try to collect related information about other suppliers so he/she can offer the best bid to the purchaser. Therefore, confidentiality becomes an important consideration in the design of a supply chain network. Chen et al. have proposed the application of the Bell-LaPadula model in the design of a secured supply chain network. In the Bell-LaPadula model, a subject can be in one of several security clearances and an object can be in one of various security classifications. All the possible combinations of (Security Clearance, Classification) pairs in the Bell-LaPadula model can be thought of as different states in a Markov Chain model. This paper extends the work done by Chen et al., provides more details on the Markov Chain model, and illustrates how to use it to monitor security state transitions in the supply chain network.
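The abstract above describes treating (Security Clearance, Classification) pairs as states of a Markov chain. A minimal sketch of that idea, with an entirely illustrative three-state lattice and transition matrix (neither is taken from the cited paper), might look like:

```python
import numpy as np

# Hypothetical sketch: monitoring security-state transitions as a Markov chain.
# The states and the row-stochastic transition matrix P are illustrative only.
states = ["(secret, secret)", "(secret, public)", "(public, public)"]
P = np.array([
    [0.80, 0.15, 0.05],
    [0.10, 0.80, 0.10],
    [0.05, 0.15, 0.80],
])

def stationary_distribution(P, tol=1e-12):
    """Power-iterate a row-stochastic matrix to its stationary distribution."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    while True:
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt

pi = stationary_distribution(P)
print(dict(zip(states, pi.round(3))))
```

The long-run fraction of time spent in each security state (the stationary distribution) is one simple quantity such a monitor could track.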
An evolving model of online bipartite networks
Zhang, Chu-Xu; Zhang, Zi-Ke; Liu, Chuang
2013-12-01
Understanding the structure and evolution of online bipartite networks is a significant task since they play a crucial role in various e-commerce services nowadays. Recently, various models have been proposed, resulting in either power-law or exponential degree distributions. However, many empirical results show that the user degree distribution actually follows a shifted power-law distribution, the so-called Mandelbrot’s law, which cannot be fully described by previous models. In this paper, we propose an evolving model considering two different user behaviors: random and preferential attachment. Extensive empirical results on two real bipartite networks, Delicious and CiteULike, show that the theoretical model can well characterize the structure of real networks for both user and object degree distributions. In addition, we introduce a structural parameter p to demonstrate that the hybrid user behavior leads to the shifted power-law degree distribution, and that the region of the power-law tail increases with p. The proposed model might shed some light on the underlying laws governing the structure of real online bipartite networks.
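The hybrid mechanism described above, random attachment with probability 1 - p and degree-preferential attachment with probability p, can be sketched as a toy growth simulation. All parameter names and values here are illustrative, not the model calibration from the paper:

```python
import random

# Sketch of hybrid attachment: with probability p the endpoint of a new link
# is chosen preferentially by degree (sampling the link-endpoint multiset),
# otherwise uniformly at random among existing users.
def grow_degrees(n_users=2000, p=0.7, seed=1):
    rng = random.Random(seed)
    degrees = [1, 1]           # two initial users, one link each
    endpoints = [0, 1]         # one entry per link endpoint => degree-biased sampling
    for new_user in range(2, n_users):
        if rng.random() < p:   # preferential attachment
            target = rng.choice(endpoints)
        else:                  # uniform random attachment
            target = rng.randrange(new_user)
        degrees[target] += 1
        degrees.append(1)
        endpoints.extend([target, new_user])
    return degrees

degs = grow_degrees()
print(len(degs), max(degs))
```

Sweeping p from 0 to 1 interpolates between an exponential-like and a power-law-like degree distribution, which is the qualitative behavior behind the shifted power law.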
Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling
Energy Technology Data Exchange (ETDEWEB)
Pera, H.; Kleijn, J. M.; Leermakers, F. A. M., E-mail: Frans.leermakers@wur.nl [Laboratory of Physical Chemistry and Colloid Science, Wageningen University, Dreijenplein 6, 6307 HB Wageningen (Netherlands)
2014-02-14
To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses the self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli k{sub c} and k{sup ¯} and the preferred monolayer curvature J{sub 0}{sup m}, and also delivers structural membrane properties like the core thickness, and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of k{sub c} and the area compression modulus k{sub A} are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for k{sup ¯} and J{sub 0}{sup m} can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k{sup ¯} and J{sub 0}{sup m} change sign with relevant parameter changes. Although typically k{sup ¯}<0, membranes can form stable cubic phases when the Gaussian bending modulus becomes positive, which occurs for membranes composed of PC lipids with long tails. Similarly, negative monolayer curvatures appear when a small head group such as PE is combined with long lipid tails, which hints towards the stability of inverse hexagonal phases at the cost of the bilayer topology. To prevent the destabilisation of bilayers, PG lipids can be mixed into these PC or PE lipid membranes. Progressive loading of bilayers with PG lipids leads to highly charged membranes, resulting in J{sub 0}{sup m}≫0, especially at low ionic
Self-consistent Maxwell-Bloch model of quantum-dot photonic-crystal-cavity lasers
Cartar, William; Mørk, Jesper; Hughes, Stephen
2017-08-01
We present a powerful computational approach to simulate the threshold behavior of photonic-crystal quantum-dot (QD) lasers. Using a finite-difference time-domain (FDTD) technique, Maxwell-Bloch equations representing a system of thousands of statistically independent and randomly positioned two-level emitters are solved numerically. Phenomenological pure dephasing and incoherent pumping are added to the optical Bloch equations to allow for a dynamical lasing regime, but the cavity-mediated radiative dynamics and gain coupling of each QD dipole (artificial atom) are contained self-consistently within the model. These Maxwell-Bloch equations are implemented by using Lumerical's flexible material plug-in tool, which allows a user to define additional equations of motion for the nonlinear polarization. We implement the gain ensemble within triangular-lattice photonic-crystal cavities of various length N (where N refers to the number of missing holes), and investigate the cavity mode characteristics and the threshold regime as a function of cavity length. We develop effective two-dimensional model simulations which are derived after studying the full three-dimensional passive material structures by matching the cavity quality factors and resonance properties. We also demonstrate how to obtain the correct point-dipole radiative decay rate from Fermi's golden rule, which is captured naturally by the FDTD method. Our numerical simulations predict that the pump threshold plateaus around cavity lengths greater than N = 9, which we identify as a consequence of the complex spatial dynamics and gain coupling from the inhomogeneous QD ensemble. This behavior is not expected from the simple rate-equation analysis commonly adopted in the literature, but is in qualitative agreement with recent experiments. Single-mode to multimode lasing is also observed, depending on the spectral peak frequency of the QD ensemble. Using a statistical modal analysis of the average decay rates, we also
Lan Liu; Ryan K. L. Ko; Guangming Ren; Xiaoping Xu
2017-01-01
As the adoption of Software Defined Networks (SDNs) grows, the security of SDN still has several unaddressed limitations. A key network security research area is the study of malware propagation across SDN-enabled networks. To analyze the spreading processes of network malware (e.g., viruses) in SDN, we propose a dynamic model with a time-varying community network, inspired by research models on the spread of epidemics in complex networks across communities. We assume subnets of the ne...
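Epidemic-style malware spreading of the kind invoked above is often sketched as a discrete-time SIS process on a graph. The following toy version uses an illustrative directed network and illustrative infection/recovery rates; it is not the time-varying community model of the cited paper:

```python
import random

# Sketch: discrete-time SIS-style spreading. Each infected node recovers with
# probability delta per step and attempts to infect each neighbor with
# probability beta per step. All rates and the topology are illustrative.
def sis_step(infected, adj, beta, delta, rng):
    new = set(infected)
    for node in list(infected):
        if rng.random() < delta:            # recovery
            new.discard(node)
        for nb in adj[node]:                # infection attempts
            if nb not in infected and rng.random() < beta:
                new.add(nb)
    return new

rng = random.Random(3)
n = 200
# ring plus long-range "community" links
adj = {i: [(i - 1) % n, (i + 1) % n, (i + 50) % n] for i in range(n)}
infected = set(range(10))                   # initial outbreak
for _ in range(300):
    infected = sis_step(infected, adj, beta=0.3, delta=0.1, rng=rng)
print(len(infected))
```

With beta * degree well above delta, the infection persists at a high endemic level, the regime where mitigation policies (e.g., SDN-level quarantine of subnets) become interesting to study.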
An autocatalytic network model for stock markets
Caetano, Marco Antonio Leonel; Yoneyama, Takashi
2015-02-01
The stock prices of companies with businesses that are closely related within a specific sector of the economy might exhibit movement patterns and correlations in their dynamics. The idea in this work is to use the concept of an autocatalytic network to model such correlations and patterns in the trends exhibited by the expected returns. The trends are expressed in terms of positive or negative returns within each fixed time interval. The time series derived from these trends is then used to represent the movement patterns by a probabilistic Boolean network with transitions modeled as an autocatalytic network. The proposed method might be of value in short-term forecasting and identification of dependencies. The method is illustrated with a case study based on four stocks of companies in the fields of natural resources and technology.
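The first step the abstract describes, turning price series into boolean trend series and then estimating transition frequencies between joint trend states, can be sketched directly. The toy prices and the two-stock setup are illustrative:

```python
from collections import Counter

# Sketch: positive return in an interval -> 1, negative -> 0, then empirical
# transition probabilities between joint trend states of two stocks.
def trend_series(prices):
    return [1 if b > a else 0 for a, b in zip(prices, prices[1:])]

def joint_transitions(t1, t2):
    states = list(zip(t1, t2))
    counts = Counter(zip(states, states[1:]))
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

a = [10, 11, 10.5, 11.2, 11.8, 11.1]
b = [20, 20.4, 20.1, 20.9, 21.3, 21.0]
print(trend_series(a))  # [1, 0, 1, 1, 0]
print(joint_transitions(trend_series(a), trend_series(b)))
```

The estimated transition table is the empirical ingredient a probabilistic Boolean network would be fitted to.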
Generative models of rich clubs in Hebbian neuronal networks and large-scale human brain networks.
Vértes, Petra E; Alexander-Bloch, Aaron; Bullmore, Edward T
2014-10-05
Rich clubs arise when nodes that are 'rich' in connections also form an elite, densely connected 'club'. In brain networks, rich clubs incur high physical connection costs but also appear to be especially valuable to brain function. However, little is known about the selection pressures that drive their formation. Here, we take two complementary approaches to this question: firstly we show, using generative modelling, that the emergence of rich clubs in large-scale human brain networks can be driven by an economic trade-off between connection costs and a second, competing topological term. Secondly we show, using simulated neural networks, that Hebbian learning rules also drive the emergence of rich clubs at the microscopic level, and that the prominence of these features increases with learning time. These results suggest that Hebbian learning may provide a neuronal mechanism for the selection of complex features such as rich clubs. The neural networks that we investigate are explicitly Hebbian, and we argue that the topological term in our model of large-scale brain connectivity may represent an analogous connection rule. This putative link between learning and rich clubs is also consistent with predictions that integrative aspects of brain network organization are especially important for adaptive behaviour. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
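The rich-club property discussed above has a standard quantitative form: the (unnormalized) rich-club coefficient phi(k) = 2 E_k / (N_k (N_k - 1)), where N_k nodes have degree greater than k and E_k edges run among them. A minimal sketch on a toy graph:

```python
# Sketch: unnormalized rich-club coefficient on an undirected edge list.
# The toy graph (a fully connected hub triangle plus leaves) is illustrative.
def rich_club(edges, k):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    rich = {n for n, d in deg.items() if d > k}
    if len(rich) < 2:
        return 0.0
    e_k = sum(1 for u, v in edges if u in rich and v in rich)
    return 2 * e_k / (len(rich) * (len(rich) - 1))

edges = [(0, 1), (1, 2), (0, 2), (0, 3), (1, 4), (2, 5)]
print(rich_club(edges, 2))  # hubs 0, 1, 2 are fully interconnected -> 1.0
```

In practice phi(k) is normalized against degree-preserving random graphs, which is how generative models like the one above are evaluated.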
Hydrometeorological network for flood monitoring and modeling
Efstratiadis, Andreas; Koussis, Antonis D.; Lykoudis, Spyros; Koukouvinos, Antonis; Christofides, Antonis; Karavokiros, George; Kappos, Nikos; Mamassis, Nikos; Koutsoyiannis, Demetris
2013-08-01
Due to its highly fragmented geomorphology, Greece comprises hundreds of small- to medium-size hydrological basins, in which the terrain is often fairly steep and the streamflow regime ephemeral. These are typically affected by flash floods, occasionally causing severe damage. Yet, the vast majority of them lack flow-gauging infrastructure providing systematic hydrometric data at fine time scales. This has obvious impacts on the quality and reliability of flood studies, which typically use simplistic approaches for ungauged basins that do not consider local peculiarities in sufficient detail. In order to provide a consistent framework for flood design and to ensure realistic predictions of the flood risk (a key issue of the 2007/60/EC Directive), it is essential to improve the monitoring infrastructures by taking advantage of modern technologies for remote control and data management. In this context, and in the research project DEUCALION, we have recently installed and are operating, in four pilot river basins, a telemetry-based hydro-meteorological network that comprises automatic stations and is linked to and supported by relevant software. The hydrometric stations measure stage, using 50-kHz ultrasonic pulses or piezometric sensors, or both stage (piezometric) and velocity via acoustic Doppler radar; all measurements are temperature-corrected. The meteorological stations record air temperature, pressure, relative humidity, wind speed and direction, and precipitation. Data transfer is made via GPRS or mobile-telephony modems. The monitoring network is supported by a web-based application for storage, visualization and management of geographical and hydro-meteorological data (ENHYDRIS), a software tool for data analysis and processing (HYDROGNOMON), as well as an advanced model for flood simulation (HYDROGEIOS). The recorded hydro-meteorological observations are accessible over the Internet through the web application. The system is operational and its
Keystone Business Models for Network Security Processors
Directory of Open Access Journals (Sweden)
Arthur Low
2013-07-01
Full Text Available Network security processors are critical components of high-performance systems built for cybersecurity. Development of a network security processor requires multi-domain experience in semiconductors and complex software security applications, and multiple iterations of both software and hardware implementations. Limited by the business models in use today, such an arduous task can be undertaken only by large incumbent companies and government organizations. Neither the “fabless semiconductor” models nor the silicon intellectual-property licensing (“IP-licensing”) models allow small technology companies to successfully compete. This article describes an alternative approach that produces an ongoing stream of novel network security processors for niche markets through continuous innovation by both large and small companies. This approach, referred to here as the "business ecosystem model for network security processors", includes a flexible and reconfigurable technology platform, a “keystone” business model for the company that maintains the platform architecture, and an extended ecosystem of companies that both contribute to and share in the value created by innovation. New opportunities for business model innovation by participating companies are made possible by the ecosystem model. This ecosystem model builds on: (i) the lessons learned from the experience of the first author as a senior integrated-circuit architect for providers of public-key cryptography solutions and as the owner of a semiconductor startup, and (ii) the latest scholarly research on technology entrepreneurship, business models, platforms, and business ecosystems. This article will be of interest to all technology entrepreneurs, but it will be of particular interest to owners of small companies that provide security solutions and to specialized security professionals seeking to launch their own companies.
DEFF Research Database (Denmark)
Kock, Anders Bredahl
2016-01-01
We show that the adaptive Lasso is oracle efficient in stationary and nonstationary autoregressions. This means that it estimates parameters consistently, selects the correct sparsity pattern, and estimates the coefficients belonging to the relevant variables at the same asymptotic efficiency...
A Model of Mental State Transition Network
Xiang, Hua; Jiang, Peilin; Xiao, Shuang; Ren, Fuji; Kuroiwa, Shingo
Emotion is one of the most essential and basic attributes of human intelligence. Current AI (Artificial Intelligence) research concentrates on the physical components of emotion; rarely is it carried out from the view of psychology directly (1). Study of the model of artificial psychology is the first step in the development of human-computer interaction. As affective computing remains unpredictable, creating a reasonable mental model becomes the primary task for building a hybrid system. A pragmatic mental model is also the foundation of some key topics such as recognition and synthesis of emotions. In this paper a Mental State Transition Network Model (2) is proposed to detect human emotions. Through a series of psychological experiments, we present a new way to predict a person's upcoming emotions from their current emotional states under various stimuli. People of different genders and characters are also taken into consideration in our investigation. From the psychological experiment data derived from 200 questionnaires, a Mental State Transition Network Model describing the transitions in distribution among the emotions and the relationships between internal mental situations and external stimuli is derived. Furthermore, the coefficients of the mental state transition network model were obtained. Across seven comparative evaluation experiments, an average precision rate of 0.843 is achieved using a set of samples for the proposed model.
UAV Trajectory Modeling Using Neural Networks
Xue, Min
2017-01-01
Massive numbers of small unmanned aerial vehicles are envisioned to operate in the near future. While there are many research problems that need to be addressed before dense operations can happen, trajectory modeling remains one of the keys to understanding and developing policies, regulations, and requirements for safe and efficient unmanned aerial vehicle operations. The fidelity requirement of a small unmanned vehicle trajectory model is high because these vehicles are sensitive to winds due to their small size and low operational altitude. Both vehicle control systems and dynamic models are needed for trajectory modeling, which makes the modeling a great challenge, especially considering the fact that manufacturers are not willing to share their control systems. This work proposes a neural network approach for modeling a small unmanned vehicle's trajectory without knowing its control system and bypassing exhaustive efforts for aerodynamic parameter identification. As a proof of concept, instead of collecting data from flight tests, this work used the trajectory data generated by a mathematical vehicle model for training and testing the neural network. The results showed great promise because the trained neural network can predict 4D trajectories accurately, and prediction errors were less than 2.0 meters in both temporal and spatial dimensions.
Propagation models for computing biochemical reaction networks
Henzinger, Thomas A; Mateescu, Maria
2011-01-01
We introduce propagation models, a formalism designed to support general and efficient data structures for the transient analysis of biochemical reaction networks. We give two use cases for propagation abstract data types: the uniformization method and numerical integration. We also sketch an implementation of a propagation abstract data type, which uses abstraction to approximate states.
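Uniformization, the first use case named above, computes transient probabilities of a continuous-time Markov chain by expanding exp(Qt) as Poisson-weighted powers of a subordinated discrete-time chain. A minimal sketch on an illustrative two-state generator (not a reaction network from the paper):

```python
from math import exp

import numpy as np

# Sketch of uniformization: p(t) = sum_k Poisson(lam*t; k) * p0 @ P^k,
# where lam >= max exit rate and P = I + Q/lam is a proper DTMC.
def uniformization(Q, p0, t, tol=1e-10):
    Q = np.asarray(Q, float)
    lam = max(-Q.diagonal())            # uniformization rate
    P = np.eye(len(Q)) + Q / lam        # subordinated DTMC
    weight = exp(-lam * t)              # Poisson pmf at k = 0
    term = p0.copy()
    result = weight * term
    k = 0
    while weight > tol or k < lam * t:  # sum past the Poisson mode
        k += 1
        term = term @ P
        weight *= lam * t / k
        result += weight * term
    return result

Q = [[-1.0, 1.0], [2.0, -2.0]]          # toy two-state switch
p = uniformization(Q, np.array([1.0, 0.0]), t=5.0)
print(p.round(4))
```

By t = 5 this chain has essentially reached its stationary distribution (2/3, 1/3), which gives a quick correctness check.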
Modelling crime linkage with Bayesian networks
de Zoete, J.; Sjerps, M.; Lagnado, D.; Fenton, N.
2015-01-01
When two or more crimes show specific similarities, such as a very distinct modus operandi, the probability that they were committed by the same offender becomes of interest. This probability depends on the degree of similarity and distinctiveness. We show how Bayesian networks can be used to model
Lagrangian modeling of switching electrical networks
Scherpen, Jacquelien M.A.; Jeltsema, Dimitri; Klaassens, J. Ben
2003-01-01
In this paper, a general and systematic method is presented to model topologically complete electrical networks, with or without multiple or single switches, within the Euler–Lagrange framework. Apart from the physical insight that can be obtained in this way, the framework has proven to be useful
Computational Modeling of Complex Protein Activity Networks
Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude
2017-01-01
Because of the numerous entities interacting, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a
Modeling Network Transition Constraints with Hypergraphs
DEFF Research Database (Denmark)
Harrod, Steven
2011-01-01
values. A directed hypergraph formulation is derived to address railway network sequencing constraints, and an experimental problem sample solved to estimate the magnitude of objective inflation when interaction effects are ignored. The model is used to demonstrate the value of advance scheduling...
A neural network model for texture discrimination.
Xing, J; Gerstein, G L
1993-01-01
A model of texture discrimination in visual cortex was built using a feedforward network with lateral interactions among relatively realistic spiking neural elements. The elements have various membrane currents, equilibrium potentials and time constants, with action potentials and synapses. The model is derived from the modified programs of MacGregor (1987). Gabor-like filters are applied to overlapping regions in the original image; the neural network with lateral excitatory and inhibitory interactions then compares and adjusts the Gabor amplitudes in order to produce the actual texture discrimination. Finally, a combination layer selects and groups various representations in the output of the network to form the final transformed image material. We show that both texture segmentation and detection of texture boundaries can be represented in the firing activity of such a network for a wide variety of images, from synthetic to natural. Performance details depend most strongly on the global balance of strengths of the excitatory and inhibitory lateral interconnections. The spatial distribution of lateral connective strengths has relatively little effect. Detailed temporal firing activities of single elements in the laterally connected network were examined under various stimulus conditions. Results show (as in area 17 of cortex) that a single element's response to image features local to its receptive field can be altered by changes in the global context.
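The Gabor-like front end mentioned above can be sketched as a small orientation-tuned filter bank. The kernel form and all parameter values here are illustrative (a simple even-symmetric Gabor), not the exact filters of the cited model:

```python
import numpy as np

# Sketch: a zero-mean, even-symmetric Gabor-like kernel. Orientation theta
# selects the direction of the sinusoidal modulation.
def gabor_kernel(size, wavelength, theta, sigma):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    k = envelope * carrier
    return k - k.mean()                  # zero mean: no DC response

# Vertical stripes (period 4 along x) respond to theta=0, not theta=pi/2.
image = np.tile([1.0, 1.0, 0.0, 0.0], (16, 4))
k0 = gabor_kernel(9, wavelength=4, theta=0.0, sigma=2.0)
k90 = gabor_kernel(9, wavelength=4, theta=np.pi / 2, sigma=2.0)
resp0 = abs(np.sum(image[4:13, 4:13] * k0))
resp90 = abs(np.sum(image[4:13, 4:13] * k90))
print(resp0 > resp90)
```

Applying such kernels over overlapping patches yields the orientation- and frequency-tuned amplitude maps that the lateral network then compares and adjusts.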
HIV lipodystrophy case definition using artificial neural network modelling
DEFF Research Database (Denmark)
Ioannidis, John P A; Trikalinos, Thomas A; Law, Matthew
2003-01-01
OBJECTIVE: A case definition of HIV lipodystrophy has recently been developed from a combination of clinical, metabolic and imaging/body composition variables using logistic regression methods. We aimed to evaluate whether artificial neural networks could improve the diagnostic accuracy. METHODS......: The database of the case-control Lipodystrophy Case Definition Study was split into 504 subjects (265 with and 239 without lipodystrophy) used for training and 284 independent subjects (152 with and 132 without lipodystrophy) used for validation. Back-propagation neural networks with one or two middle layers...... were trained and validated. Results were compared against logistic regression models using the same information. RESULTS: Neural networks using clinical variables only (41 items) achieved consistently superior performance than logistic regression in terms of specificity, overall accuracy and area under...
Propagating semantic information in biochemical network models
Directory of Open Access Journals (Sweden)
Schulz Marvin
2012-01-01
Full Text Available Background: To enable automatic searches, alignments, and model combination, the elements of systems biology models need to be compared and matched across models. Elements can be identified by machine-readable biological annotations, but assigning such annotations and matching non-annotated elements is tedious work and calls for automation. Results: A new method called "semantic propagation" allows the comparison of model elements based not only on their own annotations, but also on annotations of surrounding elements in the network. One may either propagate feature vectors, describing the annotations of individual elements, or quantitative similarities between elements from different models. Based on semantic propagation, we align partially annotated models and find annotations for non-annotated model elements. Conclusions: Semantic propagation and model alignment are included in the open-source library semanticSBML, available on SourceForge. Online services for model alignment and for annotation prediction can be used at http://www.semanticsbml.org.
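The feature-vector variant of semantic propagation described above can be sketched in a few lines: each element's annotation vector is repeatedly mixed with its neighbors', so non-annotated elements accumulate signal from their surroundings. The damping factor, toy network, and update rule are illustrative, not the semanticSBML algorithm:

```python
# Sketch: propagate annotation feature vectors over a network so that a
# non-annotated element (here, reaction1) inherits neighboring annotations.
def propagate(features, adj, alpha=0.5, rounds=3):
    cur = {k: dict(v) for k, v in features.items()}
    for _ in range(rounds):
        nxt = {}
        for node, neighbors in adj.items():
            mixed = {k: (1 - alpha) * v for k, v in cur[node].items()}
            for nb in neighbors:
                for k, v in cur[nb].items():
                    mixed[k] = mixed.get(k, 0.0) + alpha * v / len(neighbors)
            nxt[node] = mixed
        cur = nxt
    return cur

adj = {"ATP": ["reaction1"], "reaction1": ["ATP", "ADP"], "ADP": ["reaction1"]}
feats = {"ATP": {"CHEBI:15422": 1.0}, "ADP": {"CHEBI:16761": 1.0}, "reaction1": {}}
out = propagate(feats, adj)
print(sorted(out["reaction1"]))  # reaction1 now carries both metabolite terms
```

Cosine similarity between such propagated vectors is then a natural score for matching elements across two models.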
Distributed Bayesian Networks for User Modeling
DEFF Research Database (Denmark)
Tedesco, Roberto; Dolog, Peter; Nejdl, Wolfgang
2006-01-01
The World Wide Web is a popular platform for providing eLearning applications to a wide spectrum of users. However – as users differ in their preferences, background, requirements, and goals – applications should provide personalization mechanisms. In the Web context, user models used...... of Web-based eLearning platforms. The scenario we are tackling assumes learners who use several systems over time, which are able to create partial Bayesian Networks for user models based on the local system context. In particular, we focus on how to merge these partial user models. Our merge mechanism...... efficiently combines distributed learner models without the need to exchange internal structure of local Bayesian networks, nor local evidence between the involved platforms....
Network traffic model using GIPP and GIBP
Lee, Yong Duk; Van de Liefvoort, Appie; Wallace, Victor L.
1998-10-01
In telecommunication networks, the correlated nature of teletraffic patterns can have a significant impact on queueing measures such as queue length, blocking and delay. There is, however, not yet a good general analytical description which can easily incorporate the correlation effect of the traffic while at the same time maintaining ease of modeling. The authors have shown elsewhere that the covariance structures of the generalized Interrupted Poisson Process (GIPP) and the generalized Interrupted Bernoulli Process (GIBP) have an invariance property which makes them reasonably general, yet algebraically manageable, models for representing correlated network traffic. The GIPP and GIBP have a surprisingly rich set of parameters, yet these invariance properties enable us to easily incorporate the covariance function as well as the interarrival time distribution into the model to better match observations. In this paper, we show an application of GIPP and GIBP for matching an analytical model to observed or experimental data.
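The plain (non-generalized) Interrupted Poisson Process underlying the GIPP is easy to simulate: arrivals occur at rate lam only while an ON/OFF modulating chain is ON, which produces the bursty, correlated arrival streams the abstract refers to. All rates below are illustrative:

```python
import random

# Sketch of an IPP: exponential ON/OFF dwell times; Poisson arrivals at rate
# lam during ON periods only. With both switching rates equal to 1, the
# process is ON about half the time.
def simulate_ipp(lam=5.0, switch_rate=1.0, t_end=1000.0, seed=7):
    rng = random.Random(seed)
    t, on, arrivals = 0.0, True, 0
    while t < t_end:
        dwell = rng.expovariate(switch_rate)   # time until the state flips
        if on:
            elapsed = 0.0
            while True:                        # Poisson arrivals within dwell
                elapsed += rng.expovariate(lam)
                if elapsed > dwell:
                    break
                arrivals += 1
        t += dwell
        on = not on
    return arrivals

n = simulate_ipp()
print(n)  # roughly lam * t_end * (fraction of time ON) = 5 * 1000 * 0.5
```

Unlike a plain Poisson stream of the same mean rate, the interarrival times here are positively correlated, which is exactly the effect the GIPP/GIBP covariance structure is designed to capture analytically.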
A network landscape model: stability analysis and numerical tests
Bonacini, E.; Groppi, M.; Monaco, R.; Soares, A. J.; Soresina, C.
2017-07-01
A Network Landscape Model (NLM) for the evaluation of the ecological trend of an environmental system is here presented and investigated. The model consists of a network of dynamical systems, where each node represents a single Landscape Unit (LU), endowed with a system of ODEs for two variables relevant to the production of bio-energy and to the percentage of green areas, respectively. The main goal of the paper is to test the relevance of connectivity between the LUs. For this purpose we first consider the Single LU Model (SLM) and investigate its equilibria and their stability in terms of two bifurcation parameters. Then the network dynamics is theoretically investigated by means of a bifurcation analysis of a suitably simplified differential system, which allows one to understand how the coupling between different LUs modifies the asymptotic scenarios of the single LU model. Numerical simulations of the NLM are performed, with reference to an environmental system in Northern Italy, and results are discussed in connection with the SLM.
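The structure described above, a two-variable ODE system per landscape unit plus coupling over the network, can be sketched with a simple explicit-Euler integration. The right-hand sides and all coefficients below are illustrative stand-ins, not the NLM equations of the paper:

```python
# Sketch: three landscape units in a chain, each with state (u, v) where u is
# a bio-energy-like variable and v a green-area fraction; units are coupled
# diffusively in u through the network adjacency. All dynamics illustrative.
def step(state, adj, dt=0.01, a=1.0, b=0.5, c=0.2):
    new = []
    for i, (u, v) in enumerate(state):
        du = u * (a * v - b * u)                 # local growth/saturation
        dv = c * v * (1 - v) - 0.1 * u * v       # logistic green area, u drain
        coupling = sum(state[j][0] - u for j in adj[i])
        new.append((u + dt * (du + 0.05 * coupling), v + dt * dv))
    return new

adj = {0: [1], 1: [0, 2], 2: [1]}                # three LUs in a chain
state = [(0.5, 0.8), (0.2, 0.6), (0.4, 0.7)]
for _ in range(1000):
    state = step(state, adj)
print([(round(u, 3), round(v, 3)) for u, v in state])
```

Varying the coupling strength in such a sketch is the numerical analogue of the paper's question: whether connectivity changes the asymptotic scenario relative to the isolated single-LU model.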
Directory of Open Access Journals (Sweden)
Li Wan
2014-03-01
Full Text Available In this work, we treat the Poisson-Nernst-Planck (PNP) equations as the basis for a consistent framework of the electrokinetic effects. The static limit of the PNP equations is shown to be the charge-conserving Poisson-Boltzmann (CCPB) equation, with guaranteed charge neutrality within the computational domain. We propose a surface potential trap model that attributes an energy cost to the interfacial charge dissociation. In conjunction with the CCPB, the surface potential trap can cause a surface-specific adsorbed charge layer σ. By defining a chemical potential μ that arises from the charge neutrality constraint, a reformulated CCPB can be reduced to the form of the Poisson-Boltzmann equation, whose prediction of the Debye screening layer profile is in excellent agreement with that of the Poisson-Boltzmann equation when the channel width is much larger than the Debye length. However, important differences emerge when the channel width is small, so the Debye screening layers from the opposite sides of the channel overlap with each other. In particular, the theory automatically yields a variation of σ that is generally known as the “charge regulation” behavior, attendant with predictions of force variation as a function of nanoscale separation between two charged surfaces that are in good agreement with the experiments, with no adjustable or additional parameters. We give a generalized definition of the ζ potential that reflects the strength of the electrokinetic effect; its variations with the concentration of surface-specific and surface-nonspecific salt ions are shown to be in good agreement with the experiments. To delineate the behavior of the electro-osmotic (EO) effect, the coupled PNP and Navier-Stokes equations are solved numerically under an applied electric field tangential to the fluid-solid interface. The EO effect is shown to exhibit an intrinsic time dependence that is noninertial in its origin. Under a step-function applied
Wan, Li; Xu, Shixin; Liao, Maijia; Liu, Chun; Sheng, Ping
2014-01-01
In this work, we treat the Poisson-Nernst-Planck (PNP) equations as the basis for a consistent framework of the electrokinetic effects. The static limit of the PNP equations is shown to be the charge-conserving Poisson-Boltzmann (CCPB) equation, with guaranteed charge neutrality within the computational domain. We propose a surface potential trap model that attributes an energy cost to the interfacial charge dissociation. In conjunction with the CCPB, the surface potential trap can cause a surface-specific adsorbed charge layer σ. By defining a chemical potential μ that arises from the charge neutrality constraint, a reformulated CCPB can be reduced to the form of the Poisson-Boltzmann equation, whose prediction of the Debye screening layer profile is in excellent agreement with that of the Poisson-Boltzmann equation when the channel width is much larger than the Debye length. However, important differences emerge when the channel width is small, so the Debye screening layers from the opposite sides of the channel overlap with each other. In particular, the theory automatically yields a variation of σ that is generally known as the "charge regulation" behavior, attendant with predictions of force variation as a function of nanoscale separation between two charged surfaces that are in good agreement with the experiments, with no adjustable or additional parameters. We give a generalized definition of the ζ potential that reflects the strength of the electrokinetic effect; its variations with the concentration of surface-specific and surface-nonspecific salt ions are shown to be in good agreement with the experiments. To delineate the behavior of the electro-osmotic (EO) effect, the coupled PNP and Navier-Stokes equations are solved numerically under an applied electric field tangential to the fluid-solid interface. The EO effect is shown to exhibit an intrinsic time dependence that is noninertial in its origin. Under a step-function applied electric field, a
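The Debye screening length that controls the channel-width comparison above has a closed form for a symmetric 1:1 electrolyte, lambda_D = sqrt(eps_r eps_0 k_B T / (2 N_A e^2 c)). A quick numerical check (temperature and relative permittivity chosen as typical room-temperature water values):

```python
from math import sqrt

# Sketch: Debye length for a 1:1 electrolyte at molar concentration c.
def debye_length_m(molar_conc, T=298.15, eps_r=78.5):
    e = 1.602176634e-19          # elementary charge, C
    kB = 1.380649e-23            # Boltzmann constant, J/K
    NA = 6.02214076e23           # Avogadro constant, 1/mol
    eps0 = 8.8541878128e-12      # vacuum permittivity, F/m
    n = molar_conc * 1000 * NA   # ion number density per species, 1/m^3
    return sqrt(eps_r * eps0 * kB * T / (2 * n * e**2))

# ~0.96 nm for 0.1 M monovalent salt in water at room temperature
print(debye_length_m(0.1) * 1e9)
```

When the channel half-width approaches this length scale, the screening layers from the two walls overlap and the charge-regulation effects described in the abstract set in.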
Tools and Models for Integrating Multiple Cellular Networks
Energy Technology Data Exchange (ETDEWEB)
Gerstein, Mark [Yale Univ., New Haven, CT (United States). Gerstein Lab.
2015-11-06
In this grant, we have systematically investigated the integrated networks which are responsible for the coordination of activity between metabolic pathways in prokaryotes. We have developed several computational tools to analyze the topology of the integrated networks consisting of metabolic, regulatory, and physical interaction networks. The tools are all open-source; they are available to download from GitHub and can be incorporated in the Knowledgebase. Here, we summarize our work as follows. Understanding the topology of the integrated networks is the first step toward understanding their dynamics and evolution. For Aim 1 of this grant, we have developed a novel algorithm to determine and measure the hierarchical structure of transcriptional regulatory networks [1]. The hierarchy captures the direction of information flow in the network. The algorithm is generally applicable to regulatory networks in prokaryotes, yeast and higher organisms. Integrated datasets are extremely beneficial in understanding the biology of a system in a compact manner due to the conflation of multiple layers of information. Therefore, for Aim 2 of this grant, we have developed several tools and carried out analyses for integrating system-wide genomic information. To make use of the structural data, we have developed DynaSIN for protein-protein interaction networks with various dynamical interfaces [2]. We then examined the association between network topology and phenotypic effects such as gene essentiality. In particular, we have organized E. coli and S. cerevisiae transcriptional regulatory networks into hierarchies. We then correlated gene phenotypic effects by tinkering with different layers to elucidate which layers were more tolerant to perturbations [3]. In the context of evolution, we also developed a workflow to guide the comparison between different types of biological networks across various species using the concept of rewiring [4]. Furthermore, we have developed
Qi, Jian; Xin, Jun; Wang, Hai-Long; Jing, Jie-Tai
2017-06-01
Not Available Project supported by the National Natural Science Foundation of China (Grants Nos. 91436211, 11374104, and 10974057), the Natural Science Foundation of Shanghai, China (Grant No. 17ZR1442900), the Specialized Research Fund for the Doctoral Program of Higher Education, China (Grant No. 20130076110011), the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning, the Program for New Century Excellent Talents in University, China (Grant No. NCET-10-0383), the Shu Guang Project supported by Shanghai Municipal Education Commission and Shanghai Education Development Foundation, China (Grant No. 11SG26), the Shanghai Pujiang Program, China (Grant No. 09PJ1404400), the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry, National Basic Research Program of China (Grant No. 2016YFA0302103), and the Program of State Key Laboratory of Advanced Optical Communication Systems and Networks, China (Grant No. 2016GZKF0JT003).
Model Predictive Control of Sewer Networks
DEFF Research Database (Denmark)
Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik
2016-01-01
The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world’s population and changing climate conditions. How a sewer network is structured, monitored and cont...... benchmark model. Due to the inherent constraints the applied approach is based on Model Predictive Control....
Modelling dendritic ecological networks in space: an integrated network perspective
Peterson, Erin E.; Ver Hoef, Jay M.; Isaak, Dan J.; Falke, Jeffrey A.; Fortin, Marie-Josée; Jordon, Chris E.; McNyset, Kristina; Monestiez, Pascal; Ruesch, Aaron S.; Sengupta, Aritra; Som, Nicholas; Steel, E. Ashley; Theobald, David M.; Torgersen, Christian E.; Wenger, Seth J.
2013-01-01
Dendritic ecological networks (DENs) are a unique form of ecological networks that exhibit a dendritic network topology (e.g. stream and cave networks or plant architecture). DENs have a dual spatial representation: as points within the network and as points in geographical space. Consequently, some analytical methods used to quantify relationships in other types of ecological networks, or in 2-D space, may be inadequate for studying the influence of structure and connectivity on ecological processes within DENs. We propose a conceptual taxonomy of network analysis methods that account for DEN characteristics to varying degrees and provide a synthesis of the different approaches within
Spatial Models and Networks of Living Systems
DEFF Research Database (Denmark)
Juul, Jeppe Søgaard
When studying the dynamics of living systems, insight can often be gained by developing a mathematical model that can predict future behaviour of the system or help classify system characteristics. However, in living cells, organisms, and especially groups of interacting individuals, a large number....... Such systems are known to be stabilized by spatial structure. Finally, I analyse data from a large mobile phone network and show that people who are topologically close in the network have similar communication patterns. This main part of the thesis is based on six different articles, which I have co...
On traffic modelling in GPRS networks
DEFF Research Database (Denmark)
Madsen, Tatiana Kozlova; Schwefel, Hans-Peter; Prasad, Ramjee
2005-01-01
Optimal design and dimensioning of wireless data networks, such as GPRS, requires knowledge of the traffic characteristics of different data services. This paper presents a detailed analysis of IP-level traffic measurements taken in an operational GPRS network. The data measurements reported...... here are done at the Gi interface. The aim of this paper is to reveal some key statistics of GPRS data applications and to validate whether the existing traffic models can adequately describe traffic volume and inter-arrival time distribution for different services. Additionally, we present a method of user...
An Improved Network Security Situation Awareness Model
Directory of Open Access Journals (Sweden)
Li Fangwei
2015-08-01
In order to reflect the performance of network security assessment fully and accurately, a new network security situation awareness model based on information fusion was proposed. The network security situation is the result of fusing evaluations from three aspects. In terms of attack, to improve the accuracy of evaluation, a situation assessment method for DDoS attacks based on data-packet information was proposed. In terms of vulnerability, an improved Common Vulnerability Scoring System (CVSS) was proposed, making the assessment more comprehensive. In terms of node weights, a method of calculating combined weights and optimizing the result with the Sequential Quadratic Programming (SQP) algorithm was proposed, reducing the uncertainty of fusion. To verify the validity and necessity of the method, a testing platform was built and used to evaluate the DARPA 2000 data sets. Experiments show that the method can improve the accuracy of evaluation results.
Using open sidewalls for modelling self-consistent lithosphere subduction dynamics
Chertova, M.V.; Geenen, T.; van den Berg, A.; Spakman, W.
2012-01-01
Subduction modelling in regional model domains, in 2-D or 3-D, is commonly performed using closed (impermeable) vertical boundaries. Here we investigate the merits of using open boundaries for 2-D modelling of lithosphere subduction. Our experiments are focused on using open and closed (free
Requirements for UML and OWL Integration Tool for User Data Consistency Modeling and Testing
DEFF Research Database (Denmark)
Nytun, J. P.; Jensen, Christian Søndergaard; Oleshchuk, V. A.
2003-01-01
In this paper we analyze requirements for a tool that supports integration of UML models and ontologies written in languages like the W3C Web Ontology Language (OWL). The tool can be used in the following way: after loading two legacy models into the tool, the tool user connects them by inserting modeling...
Studying the Consistency between and within the Student Mental Models for Atomic Structure
Zarkadis, Nikolaos; Papageorgiou, George; Stamovlasis, Dimitrios
2017-01-01
Science education research has revealed a number of student mental models for atomic structure, among which the one based on Bohr's model seems to be the most dominant. The aim of the current study is to investigate the coherence of these models when students apply them for the explanation of a variety of situations. For this purpose, a set of…
A joint model of regulatory and metabolic networks
Directory of Open Access Journals (Sweden)
Vingron Martin
2006-07-01
Background Gene regulation and metabolic reactions are two primary activities of life. Although many works have been dedicated to studying each system, the coupling between them is less well understood. To bridge this gap, we propose a joint model of gene regulation and metabolic reactions. Results We integrate regulatory and metabolic networks by adding links specifying the feedback control from the substrates of metabolic reactions to enzyme gene expression. We adopt two alternative approaches to build those links: inferring the links between metabolites and transcription factors to fit the data, or explicitly encoding the general hypotheses of feedback control as links between metabolites and enzyme expression. Perturbation data are explained by paths in the joint network if the predicted response along the paths is consistent with the observed response. The consistency requirement for explaining the perturbation data imposes constraints on the attributes in the network, such as the functions of links and the activities of paths. We build a probabilistic graphical model over the attributes to specify these constraints, and apply an inference algorithm to identify the attribute values which optimally explain the data. The inferred models allow us to (1) identify the feedback links between metabolites and regulators and their functions, (2) identify the active paths responsible for relaying perturbation effects, (3) computationally test the general hypotheses pertaining to the feedback control of enzyme expression, and (4) evaluate the advantage of an integrated model over separate systems. Conclusion The modeling results provide insight into the mechanisms of the coupling between the two systems and possible "design rules" pertaining to enzyme gene regulation. The model can be used to investigate less well-probed systems and generate consistent hypotheses and predictions for further validation.
Self-consistent tight-binding model of B and N doping in graphene
DEFF Research Database (Denmark)
Pedersen, Thomas Garm; Pedersen, Jesper Goor
2013-01-01
Boron and nitrogen substitutional impurities in graphene are analyzed using a self-consistent tight-binding approach. An analytical result for the impurity Green's function is derived taking broken electron-hole symmetry into account and validated by comparison to numerical diagonalization...
Smart, John C.; Ethington, Corinna A.; Umbach, Paul D.
2009-01-01
This study examines the extent to which faculty members in the disparate academic environments of Holland's theory devote different amounts of time in their classes to alternative pedagogical approaches and whether such differences are comparable for those in "consistent" and "inconsistent" environments. The findings show wide variations in the…
Consistent and Clear Reporting of Results from Diverse Modeling Techniques: The A3 Method
Directory of Open Access Journals (Sweden)
Scott Fortmann-Roe
2015-08-01
The measurement and reporting of model error is of basic importance when constructing models. Here, a general method and an R package, A3, are presented to support the assessment and communication of the quality of a model fit, along with metrics of variable importance. The presented method is accurate, robust, and adaptable to a wide range of predictive modeling algorithms. The method is described along with case studies and a usage guide. It is shown how the method can be used to obtain more accurate models for prediction and how this may simultaneously lead to altered inferences and conclusions about the impact of potential drivers within a system.
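The core idea behind A3-style reporting, scoring a model by cross-validated accuracy rather than in-sample fit, can be sketched in a few lines. This is a language-neutral illustration in Python rather than the actual R implementation; the function names are invented.

```python
import random
import statistics

def linear_fit(xs, ys):
    """Ordinary least squares for a single predictor; returns a predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def cv_r2(fit, xs, ys, folds=5, seed=0):
    """Cross-validated R^2: train `fit` on k-1 folds, score on the
    held-out fold, and pool the squared errors."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    chunks = [idx[i::folds] for i in range(folds)]
    sse = 0.0
    for hold in chunks:
        train = [i for i in idx if i not in hold]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        sse += sum((ys[i] - model(xs[i])) ** 2 for i in hold)
    sst = sum((y - statistics.mean(ys)) ** 2 for y in ys)
    return 1.0 - sse / sst
```

On noiseless linear data the cross-validated R² is 1; on noisy or mis-specified models it drops below the in-sample value, which is the honest error A3 reports.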
Directory of Open Access Journals (Sweden)
Roy E Barnewall
2012-06-01
Repeated low-level exposures to Bacillus anthracis could occur before or after the remediation of an environmental release. This is especially true for persistent agents such as Bacillus anthracis spores, the causative agent of anthrax. Studies were conducted to examine the aerosol methods needed for consistent daily low aerosol concentrations to deliver a low dose (less than 10^6 colony forming units (CFU)) of B. anthracis spores, and included a pilot feasibility characterization study, an acute exposure study, and a multiple fifteen-day exposure study. This manuscript focuses on the state-of-the-science aerosol methodologies used to generate and aerosolize consistent daily low aerosol concentrations and resultant low inhalation doses. The pilot feasibility characterization study determined that the aerosol system was consistent and capable of producing very low aerosol concentrations. In the acute, single-day exposure experiment, targeted inhaled doses of 1 x 10^2, 1 x 10^3, 1 x 10^4, and 1 x 10^5 CFU were used. In the multiple daily exposure experiment, rabbits were exposed on multiple days to targeted inhaled doses of 1 x 10^2, 1 x 10^3, and 1 x 10^4 CFU. In all studies, targeted inhaled doses remained fairly consistent from rabbit to rabbit and day to day. The aerosol system produced aerosolized spores within the optimal mass median aerodynamic diameter particle size range to reach deep lung alveoli. Consistency of the inhaled dose was aided by monitoring and recording respiratory parameters during the exposure with real-time plethysmography. Overall, the presented results show that the animal aerosol system was stable and highly reproducible between different studies and multiple exposure days.
Building footprint extraction from digital surface models using neural networks
Davydova, Ksenia; Cui, Shiyong; Reinartz, Peter
2016-10-01
Two-dimensional building footprints are a basis for many applications: from cartography to three-dimensional building model generation. Although many methodologies have been proposed for building footprint extraction, this topic remains an open research area. Neural networks are able to model the complex relationships between the multivariate input vector and the target vector. Based on these abilities, we propose a methodology using neural networks and Markov Random Fields (MRF) for automatic building footprint extraction from normalized Digital Surface Models (nDSM) and satellite images within urban areas. The proposed approach has mainly two steps. In the first step, the unary terms are learned for the MRF energy function by a four-layer neural network. The neural network is trained on a large set of patches consisting of both nDSM and Normalized Difference Vegetation Index (NDVI). Then prediction is performed to calculate the unary terms that are used in the MRF. In the second step, the energy function is minimized using a maxflow algorithm, which leads to a binary building mask. The building extraction results are compared with available ground truth. The comparison illustrates the efficiency of the proposed algorithm, which can extract approximately 80% of buildings from nDSM with high accuracy.
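The second step minimizes an MRF energy of unary terms (from the network) plus a pairwise smoothness term. The paper uses maxflow for an exact minimum; the same energy can be approximately minimized with iterated conditional modes (ICM), which is easier to sketch. This is illustrative only, not the authors' implementation.

```python
def icm_segment(unary, beta=1.0, iters=5):
    """Approximate MRF minimization by iterated conditional modes.
    unary[i][j] = (cost of background, cost of building); the pairwise
    term adds beta for every 4-neighbour with a different label."""
    h, w = len(unary), len(unary[0])
    # initialize each pixel from its unary term alone
    lab = [[0 if unary[i][j][0] <= unary[i][j][1] else 1
            for j in range(w)] for i in range(h)]
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                nb = [(i2, j2)
                      for i2, j2 in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                      if 0 <= i2 < h and 0 <= j2 < w]
                # pick the label with lowest unary + disagreement cost
                costs = [unary[i][j][l]
                         + beta * sum(lab[i2][j2] != l for i2, j2 in nb)
                         for l in (0, 1)]
                lab[i][j] = 0 if costs[0] <= costs[1] else 1
    return lab
```

A lone pixel with a weak "building" preference surrounded by confident background is flipped by the smoothness prior, which is exactly the noise-suppression role the MRF plays on top of the per-pixel network scores.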
Performance modeling, loss networks, and statistical multiplexing
Mazumdar, Ravi
2009-01-01
This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of understanding the phenomenon of statistical multiplexing. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in performance measures. Also presented are recent ideas of large-buffer and many-sources asymptotics that play an important role in understanding statistical multiplexing.
Directory of Open Access Journals (Sweden)
Jelena Grujić
The presence of costly cooperation between otherwise selfish actors is not trivial. A prominent mechanism that promotes cooperation is spatial population structure. However, recent experiments with human subjects report a substantially lower level of cooperation than predicted by theoretical models. We analyze the data of such an experiment in which a total of 400 players play a Prisoner's Dilemma on a 4×4 square lattice in two treatments, either interacting via a fixed square lattice (15 independent groups) or with a population structure changing after each interaction (10 independent groups). We analyze the statistics of individual decisions and infer in which way they can be matched with the typical models of evolutionary game theorists. We find no difference in the strategy updating between the two treatments. However, the strategy updates are distinct from the most popular models which lead to the promotion of cooperation, as shown by computer simulations of the strategy updating. This suggests that the promotion of cooperation by population structure is not as straightforward in humans as often envisioned in theoretical models.
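For reference, the kind of lattice model the authors compare the human data against can be written down directly. The imitate-the-best update below is one of the popular rules that promote cooperation in theory; the payoff values are illustrative, and the point of the paper is that human updates deviate from rules like this one.

```python
def play_round(strat, payoff, size=4):
    """One round of a spatial Prisoner's Dilemma on a size x size
    lattice with von Neumann neighbours and periodic boundaries.
    strat maps (i, j) -> 'C' or 'D'; payoff[my][their] is my gain."""
    def neigh(i, j):
        return [((i + 1) % size, j), ((i - 1) % size, j),
                (i, (j + 1) % size), (i, (j - 1) % size)]
    # accumulate each player's payoff against its four neighbours
    gains = {cell: sum(payoff[s][strat[n]] for n in neigh(*cell))
             for cell, s in strat.items()}
    # imitate-the-best update: copy the most successful neighbour (or self)
    new = {}
    for cell in strat:
        best = max(neigh(*cell) + [cell], key=lambda c: gains[c])
        new[cell] = strat[best]
    return new
```

With the standard ordering T > R > P > S (e.g. T=5, R=3, P=1, S=0), all-defection is an absorbing state of this dynamic, while compact cooperator clusters can survive, which is the spatial-promotion effect the experiment tests.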
Dynamic queuing transmission model for dynamic network loading
DEFF Research Database (Denmark)
Raovic, Nevena; Nielsen, Otto Anker; Prato, Carlo Giacomo
2017-01-01
This paper presents a new macroscopic multi-class dynamic network loading model called the Dynamic Queuing Transmission Model (DQTM). The model utilizes ‘good’ properties of the Dynamic Queuing Model (DQM) and the Link Transmission Model (LTM) by offering a DQM consistent with the kinematic wave theory...... and allowing for the representation of multiple vehicle classes, queue spillbacks and shock waves. The model assumes that a link is split into a moving part plus a queuing part, and that traffic dynamics are given by a triangular fundamental diagram. A case-study is investigated and the DQTM is compared...... for two vehicle classes. Moreover, the results show that the travel time will be underestimated without considering the shock wave property...
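The triangular fundamental diagram that closes the model maps density to flow with a free-flow branch and a congested branch meeting at the critical density. The parameter values below are illustrative, not taken from the paper.

```python
def triangular_flow(k, vf=100.0, w=20.0, kj=150.0):
    """Flow (veh/h) from density k (veh/km) on a triangular fundamental
    diagram: free-flow speed vf (km/h), backward wave speed w (km/h),
    jam density kj (veh/km)."""
    kc = w * kj / (vf + w)      # critical density where the branches meet
    if k <= kc:
        return vf * k           # free-flow branch: q = vf * k
    return w * (kj - k)         # congested branch: q = w * (kj - k)
```

The congested branch's slope -w is the speed at which queue spillbacks and shock waves propagate upstream, which is why a model without it underestimates travel time.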
Using structural equation modeling for network meta-analysis.
Tu, Yu-Kang; Wu, Yun-Chun
2017-07-14
Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within Bayesian hierarchical linear models or frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As the random effect is explicitly modeled as a latent variable in SEM, it is very flexible for analysts to specify complex random effect structures and to place linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. It contains results of 26 studies that directly compared three treatment groups A, B and C for prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the technique of the unrestricted weighted least squares (UWLS) method can also be undertaken using SEM. For both the fixed and random effect network meta-analysis, SEM yielded similar coefficients and confidence intervals to those reported in the previous literature. The point estimates of the two UWLS models were identical to those in the fixed effect model, but the confidence intervals were greater. This is consistent with results from the traditional pairwise meta-analyses. Compared to the UWLS model with a common variance adjusted factor, the UWLS model with a unique variance adjusted factor has greater confidence intervals when the heterogeneity was larger in the pairwise comparison. The UWLS model with a unique variance adjusted factor reflects the difference in heterogeneity within each comparison.
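For a three-treatment network, the fixed effect model reduces to a small weighted least squares problem: each study contributes a contrast row in a design matrix over the basic parameters d_B and d_C (effects of B and C relative to A), weighted by its inverse variance. A sketch with made-up inputs, not the 26-study cirrhosis data:

```python
def fixed_effect_nma(studies):
    """Fixed-effect network meta-analysis for treatments A, B, C via
    weighted least squares. Each study is (pair, estimate, variance),
    e.g. ('AB', 0.5, 0.1) = effect of B relative to A with variance 0.1.
    Returns (d_B, d_C), the basic parameters relative to A."""
    rows = {'AB': (1.0, 0.0), 'AC': (0.0, 1.0), 'BC': (-1.0, 1.0)}
    # accumulate the normal equations  X'WX beta = X'Wy  (2x2 system)
    a11 = a12 = a22 = b1 = b2 = 0.0
    for pair, y, var in studies:
        x1, x2 = rows[pair]
        w = 1.0 / var
        a11 += w * x1 * x1
        a12 += w * x1 * x2
        a22 += w * x2 * x2
        b1 += w * x1 * y
        b2 += w * x2 * y
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det,
            (a11 * b2 - a12 * b1) / det)
```

When direct and indirect evidence agree (a consistent network), the pooled estimates reproduce the direct contrasts, e.g. AB = 1 and AC = 2 imply BC = 1.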
Artificial Neural Network Model for Predicting Compressive
Directory of Open Access Journals (Sweden)
Salim T. Yousif
2013-05-01
Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural-network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error for the model is about 20%, and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
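The back-propagation network itself is too large to reproduce here, but the gradient-descent training loop it relies on can be illustrated with a single linear neuron fitted to a toy strength-versus-w/c relation. The numbers and the model form are invented for illustration; they are not the paper's data or architecture.

```python
def train_toy_model(data, lr=0.5, epochs=2000):
    """Fit strength = w0 + w1 * (w/c - 0.5) by stochastic gradient
    descent on squared error -- a one-neuron stand-in for the paper's
    back-propagation network. data is a list of (w/c, strength)."""
    w0 = w1 = 0.0
    for _ in range(epochs):
        for wc, y in data:
            x = wc - 0.5                  # centre the w/c ratio
            err = (w0 + w1 * x) - y       # prediction error
            w0 -= lr * err                # gradient step on the bias
            w1 -= lr * err * x            # gradient step on the weight
    return lambda wc: w0 + w1 * (wc - 0.5)
```

The fitted weight on w/c dominates the prediction, mirroring the paper's parametric finding that w/c is the most significant input.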
UAV Trajectory Modeling Using Neural Networks
Xue, Min
2017-01-01
Large numbers of small Unmanned Aerial Vehicles (sUAVs) are projected to operate in the near future. Potential sUAV applications include, but are not limited to, search and rescue, inspection and surveillance, aerial photography and video, precision agriculture, and parcel delivery. sUAVs are expected to operate in the uncontrolled Class G airspace, which is at or below 500 feet above ground level (AGL), where many static and dynamic constraints exist, such as ground properties and terrains, restricted areas, various winds, manned helicopters, and conflict avoidance among sUAVs. How to enable safe, efficient, and massive sUAV operations in low-altitude airspace remains a great challenge. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative works on establishing infrastructure and developing policies, requirements, and rules to enable safe and efficient sUAV operations. To achieve this goal, it is important to gain insights into future UTM traffic operations through simulations, where an accurate trajectory model plays an extremely important role. On the other hand, as in current aviation development, trajectory modeling should also serve as the foundation for any advanced concepts and tools in UTM. Accurate models of sUAV dynamics and control systems are very important considering the requirement of meter-level precision in UTM operations. The vehicle dynamics are relatively easy to derive and model; however, vehicle control systems remain unknown as they are usually kept by manufacturers as part of their intellectual property. That brings challenges to trajectory modeling for sUAVs. How can a vehicle's trajectories be modeled with an unknown control system? This work proposes to use a neural network to model a vehicle's trajectory. The neural network is first trained to learn the vehicle's responses at numerous conditions. Once fully trained, given current vehicle states, winds, and desired future trajectory, the neural ...
Physically-consistent wall boundary conditions for the k-ω turbulence model
DEFF Research Database (Denmark)
Fuhrman, David R.; Dixen, Martin; Jacobsen, Niels Gjøl
2010-01-01
A model solving the Reynolds-averaged Navier–Stokes equations, coupled with k-ω turbulence closure, is used to simulate steady channel flow on both hydraulically smooth and rough beds. Novel experimental data are used as model validation, with k measured directly from all three components...
Badenes, C.; Hughes, J.P.; Bravo, E.; Langer, N.
2007-01-01
We explore the relationship between the models for progenitor systems of Type Ia supernovae and the properties of the supernova remnants that evolve after the explosion. Most models for Type Ia progenitors in the single-degenerate scenario predict substantial outflows during the presupernova
CONSISTENT USE OF THE KALMAN FILTER IN CHEMICAL TRANSPORT MODELS (CTMS) FOR DEDUCING EMISSIONS
Past research has shown that emissions can be deduced using observed concentrations of a chemical, a Chemical Transport Model (CTM), and the Kalman filter in an inverse modeling application. An expression was derived for the relationship between the "observable" (i.e., the con...
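In the simplest scalar case, the inversion described above treats the unknown emission rate as the Kalman filter state and the CTM as a linear observation operator mapping emission to concentration. The sketch below uses that reduction with invented values; a real CTM application replaces the scalar sensitivity h with model-derived sensitivities.

```python
def estimate_emission(obs, h, p=1e6, q=0.0, r=0.25, e=0.0):
    """Scalar Kalman filter deducing an emission rate E from observed
    concentrations, with the CTM reduced to c = h * E + noise.
    p: state variance, q: process noise, r: observation noise."""
    for c in obs:
        p = p + q                        # predict (emission assumed static)
        k = p * h / (h * h * p + r)      # Kalman gain
        e = e + k * (c - h * e)          # update with observed concentration
        p = (1 - k * h) * p              # variance shrinks with each update
    return e
```

Starting from a large prior variance (an uninformative prior on the emission), a handful of observations pins the estimate to the emission rate implied by the data.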
A model of yeast glycolysis based on a consistent kinetic characterisation of all its enzymes
Smallbone, K.; Messiha, H.L.; Carroll, K.M.; Winder, C.L.; Malys, N.; Dunn, W.B.; Murabito, E.; Swainston, N.; Dada, J.O.; Khan, F.; Pir, P.; Simeonidis, E.; Spasić, I.; Wishart, J.; Weichart, D.; Hayes, N.W.; Jameson, D.; Broomhead, D.S.; Oliver, S.G.; Gaskell, S.J.; McCarthy, J.E.G.; Paton, N.W.; Westerhoff, H.V.; Kell, D.B.; Mendes, P.
2013-01-01
We present an experimental and computational pipeline for the generation of kinetic models of metabolism, and demonstrate its application to glycolysis in Saccharomyces cerevisiae. Starting from an approximate mathematical model, we employ a "cycle of knowledge" strategy, identifying the steps with
DEFF Research Database (Denmark)
Keck, Rolf-Erik; Veldkamp, Dick; Wedel-Heinen, Jens Jakob
evolution 4. atmospheric stability effects on wake deficit evolution and meandering The conducted research is to a large extent based on detailed wake investigations and reference data generated through computational fluid dynamics simulations, where the wind turbine rotor has been represented...... as a standalone flow-solver for the velocity and turbulence distribution, and power production in a wind farm. The performance of the standalone implementation is validated against field data, higher-order computational fluid dynamics models, as well as the most common engineering wake models in the wind industry....... 2. The EllipSys3D actuator line model, including the synthetic methods used to model atmospheric boundary layer shear and turbulence, is verified for modelling the evolution of wind turbine wake turbulence by comparison to field data and wind tunnel experiments. 3. A two-dimensional eddy viscosity...
Kinematic Structural Modelling in Bayesian Networks
Schaaf, Alexander; de la Varga, Miguel; Florian Wellmann, J.
2017-04-01
We commonly capture our knowledge about the spatial distribution of distinct geological lithologies in the form of 3-D geological models. Several methods exist to create these models, each with its own strengths and limitations. We present here an approach to combine the functionalities of two modeling approaches - implicit interpolation and kinematic modelling methods - into one framework, while explicitly considering parameter uncertainties and thus model uncertainty. In recent work, we proposed an approach to implement implicit modelling algorithms into Bayesian networks. This was done to address the issues of input data uncertainty and the integration of geological information from varying sources in the form of geological likelihood functions. However, one general shortcoming of implicit methods is that they usually do not take any physical constraints into consideration, which can result in unrealistic model outcomes and artifacts. On the other hand, kinematic structural modelling intends to reconstruct the history of a geological system based on physically driven kinematic events. This type of modelling incorporates simplified physical laws into the model, at the cost of a substantial increase in the number of uncertain parameters. In the work presented here, we show an integration of these two different modelling methodologies, taking advantage of the strengths of both of them. First, we treat the two types of models separately, capturing the information contained in the kinematic models and their specific parameters in the form of likelihood functions, in order to use them in the implicit modelling scheme. We then go further and combine the two modelling approaches into one single Bayesian network. This enables the direct flow of information between the parameters of the kinematic modelling step and the implicit modelling step, and links the exclusive input data and likelihoods of the two different modelling algorithms into one probabilistic inference framework.
Systems biology of plant molecular networks: from networks to models
Valentim, F.L.
2015-01-01
Developmental processes are controlled by gene regulatory networks (GRNs), which are tightly coordinated networks of transcription factors (TFs) that activate and repress gene expression within a spatial and temporal context. In Arabidopsis thaliana, the key components and network structures of the GRNs
Silvennoinen, Annestiina; Terasvirta, Timo
2017-01-01
A new multivariate volatility model that belongs to the family of conditional correlation GARCH models is introduced. The GARCH equations of this model contain a multiplicative deterministic component to describe long-run movements in volatility and, in addition, the correlations are deterministically time-varying. Parameters of the model are estimated jointly using maximum likelihood. Consistency and asymptotic normality of the maximum likelihood estimators are proved. Numerical aspects of the es...
Application of Waterman-Truell and the Dynamic Generalized Self-consistent Models on Concrete
Villarreal, A.; Solis-Najera, S.; Medina-Gómez, L.
Acoustic wave propagation in heterogeneous and dispersive media is a very complex phenomenon, where the phase velocity and attenuation are functions of the material microstructure and become frequency-dependent parameters. In this paper, the interaction between ultrasound waves and cement-paste specimens has been analyzed using two multiple scattering models based on a homogenization approach, together with experimental data. The experimental phase velocity, attenuation, and dispersion results show good agreement with the theoretical models; the experimental phase velocity and attenuation decrease for higher w/c ratios, as predicted by the theoretical models.
The standard lateral gene transfer model is statistically consistent for pectinate four-taxon trees
DEFF Research Database (Denmark)
Sand, Andreas; Steel, Mike
2013-01-01
Evolutionary events such as incomplete lineage sorting and lateral gene transfers constitute major problems for inferring species trees from gene trees, as they can sometimes lead to gene trees which conflict with the underlying species tree. One particularly simple and efficient way to infer...... species trees from gene trees under such conditions is to combine three-taxon analyses for several genes using a majority vote approach. For incomplete lineage sorting this method is known to be statistically consistent; however, for lateral gene transfers it was recently shown that a zone...
Advances in dynamic network modeling in complex transportation systems
Ukkusuri, Satish V
2013-01-01
This book focuses on the latest in dynamic network modeling, including route guidance and traffic control in transportation systems and other complex infrastructure networks. Covers dynamic traffic assignment, flow modeling, mobile sensor deployment and more.
A NEURAL OSCILLATOR-NETWORK MODEL OF TEMPORAL PATTERN GENERATION
Schomaker, Lambert
Most contemporary neural network models deal with essentially static, perceptual problems of classification and transformation. Models such as multi-layer feedforward perceptrons generally do not incorporate time as an essential dimension, whereas biological neural networks are inherently temporal
Macroscopic Properties of Nuclei within Self-Consistent and Liquid Drop Models
Nerlo-Pomorska, B.; Sykut, J.
2004-03-01
A set of parameters of the relativistic-mean-field theory (RMFT) is obtained by adjusting the macroscopic part of the RMFT binding energies of 142 spherical even-even nuclei to the phenomenological Lublin-Strasbourg-Drop (LSD) model.
A Neural Model of Bilateral Negotiation Consisting of One and Two Issues
1991-09-01
The level of activity of the no offer node is not displayed. (The remainder of this record is screen-capture residue from the buyer-model simulator interface: repeated command menus and node labels such as "disagree" and "recommended id price".)
Gamayunov, K. V.; Khazanov, G. V.; Liemohn, M. W.; Fok, M.-C.; Ridley, A. J.
2009-01-01
Further development of our self-consistent model of interacting ring current (RC) ions and electromagnetic ion cyclotron (EMIC) waves is presented. This model incorporates large scale magnetosphere-ionosphere coupling and treats self-consistently not only EMIC waves and RC ions, but also the magnetospheric electric field, RC, and plasmasphere. Initial simulations indicate that the region beyond geostationary orbit should be included in the simulation of the magnetosphere-ionosphere coupling. Additionally, a self-consistent description, based on first principles, of the ionospheric conductance is required. These initial simulations further show that in order to model the EMIC wave distribution and wave spectral properties accurately, the plasmasphere should also be simulated self-consistently, since its fine structure requires as much care as that of the RC. Finally, an effect of the finite time needed to reestablish a new potential pattern throughout the ionosphere and to communicate between the ionosphere and the equatorial magnetosphere cannot be ignored.
Model of Opinion Spreading in Social Networks
Kanovsky, Igor
2011-01-01
We propose a new model that captures the main difference between information spreading and opinion spreading. In information spreading, additional exposure to a given piece of information has only a small effect. In contrast, when an actor is exposed to two opinioned actors, the probability of adopting the opinion is significantly higher than in the case of contact with one such actor (called by J. Kleinberg "the 0-1-2 effect"). In each time step, if an actor does not yet have an opinion, we randomly choose two of his network neighbors. If one of them has an opinion, the actor adopts the opinion with some low probability; if two have it, with a higher probability. Opinion spreading was simulated on different real-world social networks and on similar random scale-free networks. The results show that small-world structure has a crucial impact on the tipping-point time. The "0-1-2" effect causes a significant difference in the ability of actors to start opinion spreading: an actor is an influencer according to his topological position in the network.
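The "0-1-2" update rule described in this abstract can be sketched as a small simulation. The adoption probabilities, network size, and the ring-with-shortcut topology below are illustrative assumptions, not values from the paper:

```python
import random

def simulate_opinion(neighbors, seeds, p1=0.05, p2=0.5, steps=200, rng=None):
    """Simulate the '0-1-2 effect': an actor without an opinion samples two
    neighbors; seeing one opinioned neighbor gives a low adoption
    probability p1, seeing two gives a higher probability p2.
    p1 and p2 here are illustrative assumptions."""
    rng = rng or random.Random(42)
    opinioned = set(seeds)
    for _ in range(steps):
        updates = set()
        for node, nbrs in neighbors.items():
            if node in opinioned or len(nbrs) < 2:
                continue
            a, b = rng.sample(nbrs, 2)          # two random neighbors
            k = (a in opinioned) + (b in opinioned)
            if (k == 1 and rng.random() < p1) or (k == 2 and rng.random() < p2):
                updates.add(node)
        opinioned |= updates                     # synchronous update
    return opinioned

# Toy ring network with one shortcut (a small-world flavour)
n = 20
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
ring[0].append(10)
ring[10].append(0)
final = simulate_opinion(ring, seeds={0, 1})
```

Varying `p1` relative to `p2` on such a network is one way to probe how strongly the 0-1-2 effect shifts the tipping-point time.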
Genome scale models of yeast: towards standardized evaluation and consistent omic integration
DEFF Research Database (Denmark)
Sanchez, Benjamin J.; Nielsen, Jens
2015-01-01
Genome scale models (GEMs) have enabled remarkable advances in systems biology, acting as functional databases of metabolism, and as scaffolds for the contextualization of high-throughput data. In the case of Saccharomyces cerevisiae (budding yeast), several GEMs have been published and are currently used for metabolic engineering and elucidating biological interactions. Here we review the history of yeast's GEMs, focusing on recent developments. We study how these models are typically evaluated, using both descriptive and predictive metrics. Additionally, we analyze the different ways...
Analyzing, Modeling, and Simulation for Human Dynamics in Social Network
Directory of Open Access Journals (Sweden)
Yunpeng Xiao
2012-01-01
This paper studies human behavior in the top-one social network system in China (the Sina Microblog system. By analyzing real-life data at a large scale, we find that the message releasing interval (intermessage time obeys a power-law distribution both at the individual level and at the group level. Statistical analysis also reveals that human behavior in social networks is mainly driven by four basic elements: social pressure, social identity, social participation, and social relation between individuals. Empirical results present the four elements' impact on human behavior and the relations between these elements. To further understand the mechanism of such dynamic phenomena, a hybrid human dynamic model which combines the "interest" of the individual and the "interaction" among people is introduced, incorporating the four elements simultaneously. To provide a solid evaluation, we simulate both two-agent and multiagent interactions with real-life social network topology. We achieve consistent results between the empirical studies and the simulations. The model can provide a good understanding of human dynamics in social networks.
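Power-law distributed inter-message times of the kind reported above can be generated by inverse-transform sampling. The exponent and cutoff below are illustrative assumptions, not the values fitted in the paper:

```python
import random

def powerlaw_intervals(alpha, tmin, size, rng=None):
    """Draw inter-message times from p(t) ~ t^(-alpha) for t >= tmin
    via inverse-transform sampling (requires alpha > 1).
    alpha and tmin here are illustrative, not fitted values."""
    rng = rng or random.Random(0)
    # CDF inversion: t = tmin * (1 - u)^(-1/(alpha - 1)), u uniform in [0, 1)
    return [tmin * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
            for _ in range(size)]

gaps = powerlaw_intervals(alpha=2.5, tmin=1.0, size=10000)
```

The heavy tail of such samples (occasional very long gaps between messages) is the signature of bursty human activity that interest/interaction models aim to reproduce.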
A Network of SCOP Hidden Markov Models and Its Analysis
Directory of Open Access Journals (Sweden)
Watson Layne T
2011-05-01
Background: The Structural Classification of Proteins (SCOP) database uses a large number of hidden Markov models (HMMs) to represent families and superfamilies composed of proteins that presumably share the same evolutionary origin. However, how the HMMs are related to one another has not been examined before. Results: In this work, taking into account the processes used to build the HMMs, we propose a working hypothesis to examine the relationships between HMMs and the families and superfamilies that they represent. Specifically, we perform an all-against-all HMM comparison using the HHsearch program (similar to BLAST) and construct a network where the nodes are HMMs and the edges connect similar HMMs. We hypothesize that the HMMs in a connected component belong to the same family or superfamily more often than expected under a random network connection model. Results show a pattern consistent with this working hypothesis. Moreover, the HMM network possesses features distinctly different from previously documented biological networks, exemplified by the exceptionally high clustering coefficient and the large number of connected components. Conclusions: The current finding may provide guidance in devising computational methods to reduce the degree of overlap between the HMMs representing the same superfamilies, which may in turn enable more efficient large-scale sequence searches against the database of HMMs.
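The two network statistics emphasized above (connected components and the clustering coefficient) are straightforward to compute once the all-against-all hits are in hand. A minimal pure-Python sketch, with hypothetical HMM names standing in for real SCOP models:

```python
from collections import defaultdict

def components(edges):
    """Connected components of an undirected graph given as (u, v) pairs."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps, adj

def clustering(adj, node):
    """Local clustering coefficient: fraction of neighbour pairs that
    are themselves connected."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# Hypothetical HMM-vs-HMM similarity hits: one tight family, one pair
hits = [("fam1a", "fam1b"), ("fam1b", "fam1c"), ("fam1a", "fam1c"),
        ("fam2a", "fam2b")]
comps, adj = components(hits)
```

A fully interlinked family like `fam1*` yields a clustering coefficient of 1.0, which is the kind of "exceptionally high" value the abstract reports for the real HMM network.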
Alexandrov, Natalia (Technical Monitor); Kuby, Michael; Tierney, Sean; Roberts, Tyler; Upchurch, Christopher
2005-01-01
This report reviews six classes of models that are used for studying transportation network topologies. The report is motivated by two main questions. First, what can the "new science" of complex networks (scale-free, small-world networks) contribute to our understanding of transport network structure, compared to more traditional methods? Second, how can geographic information systems (GIS) contribute to studying transport networks? The report defines terms that can be used to classify different kinds of models by their function, composition, mechanism, spatial and temporal dimensions, certainty, linearity, and resolution. Six broad classes of models for analyzing transport network topologies are then explored: GIS; static graph theory; complex networks; mathematical programming; simulation; and agent-based modeling. Each class of models is defined and classified according to the attributes introduced earlier. The paper identifies some typical types of research questions about network structure that have been addressed by each class of model in the literature.
Consistent modelling of wind turbine noise propagation from source to receiver
DEFF Research Database (Denmark)
Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong
2017-01-01
The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine...... generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise...
Postmus, B.R.; Leermakers, F.A.M.; Cohen Stuart, M.A.
2008-01-01
In technological applications, it is increasingly important to understand and predict interfacial phenomena. Using a self-consistent field model within the Scheutjens-Fleer discretization scheme, we have developed a molecularly realistic model of the adsorption of poly(ethylene oxide) (PEO) onto
Modelling of multi-wall CNT devices by self-consistent analysis of multichannel transport
Energy Technology Data Exchange (ETDEWEB)
Mencarelli, D; Rozzi, T; Camilloni, C; Maccari, L; Donato, A di; Pierantoni, L [Universita Politecnica delle Marche, Via Brecce Bianche 12, 60100, Ancona (Italy)], E-mail: d.mencarelli@univpm.it
2008-04-23
We present a generalization of the self-consistent analysis of carbon nanotube (CNT) field effect transistors (FETs) to the case of multi-wall/multi-band coherent carrier transport. The contribution to charge diffusion, due to different walls and sub-bands of a multi-wall (mw) CNT is shown to be non-negligible, especially for high applied external voltages and 'large' diameters. The transmission line formalism is used in order to solve the Schroedinger equation for carrier propagation, coupled to the Poisson equation describing the spatial voltage distribution throughout the device. We provide detailed numerical results for semiconducting mw-nanotubes of different diameters and lengths, such as current-voltage characteristics and frequency responses.
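The self-consistent coupling of carrier density and electrostatic potential described above is, structurally, a damped fixed-point iteration between a transport equation and the Poisson equation. The sketch below shows only that fixed-point structure, using a toy Boltzmann-like charge model in place of the paper's multi-wall Schroedinger transport; the geometry, boundary voltages, and charge law are all illustrative assumptions:

```python
import numpy as np

# Damped self-consistent (charge <-> potential) loop, Schroedinger-Poisson
# style.  The Boltzmann-like charge model and all constants are assumed,
# not taken from the multi-wall CNT device model of the paper.
N, L = 101, 1.0
x = np.linspace(0.0, L, N)
h = x[1] - x[0]
v_left, v_right = 0.0, 0.5             # applied boundary voltages (assumed)

# 1-D Laplacian with Dirichlet boundary conditions
A = (np.diag(-2.0 * np.ones(N - 2))
     + np.diag(np.ones(N - 3), 1)
     + np.diag(np.ones(N - 3), -1)) / h ** 2

def charge(phi):
    return np.exp(-phi)                # toy Boltzmann-like charge density

phi = np.linspace(v_left, v_right, N)  # initial potential guess
for _ in range(500):
    rhs = -charge(phi)[1:-1]
    rhs[0] -= v_left / h ** 2          # fold boundary values into the RHS
    rhs[-1] -= v_right / h ** 2
    phi_new = phi.copy()
    phi_new[1:-1] = np.linalg.solve(A, rhs)   # Poisson: phi'' = -rho
    if np.max(np.abs(phi_new - phi)) < 1e-10:
        phi = phi_new
        break
    phi = 0.7 * phi + 0.3 * phi_new    # damping keeps the iteration stable
```

Replacing `charge` with a quantum transport calculation (and the Laplacian with the device's actual electrostatics) recovers the structure of the solver the authors describe.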
Artificial Neural Network Modeling of an Inverse Fluidized Bed ...
African Journals Online (AJOL)
The application of neural networks to model a laboratory scale inverse fluidized bed reactor has been studied. A Radial Basis Function neural network has been successfully employed for the modeling of the inverse fluidized bed reactor. In the proposed model, the trained neural network represents the kinetics of biological ...
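A radial basis function network of the kind used above amounts to Gaussian features plus a linear least-squares readout. The sketch below fits a toy 1-D curve standing in for reactor kinetics data; the data, centers, and width are illustrative assumptions, not the study's measurements:

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF features: phi_ij = exp(-||x_i - c_j||^2 / (2 width^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def rbf_fit(X, y, centers, width, ridge=1e-8):
    """Output weights by (ridge-regularised) linear least squares."""
    P = rbf_design(X, centers, width)
    return np.linalg.solve(P.T @ P + ridge * np.eye(P.shape[1]), P.T @ y)

def rbf_predict(X, centers, width, w):
    return rbf_design(X, centers, width) @ w

# Toy 1-D curve standing in for reactor data (assumed, not measured)
X = np.linspace(0.0, 1.0, 40).reshape(-1, 1)
y = np.sin(2.0 * np.pi * X[:, 0])
centers = np.linspace(0.0, 1.0, 10).reshape(-1, 1)
w = rbf_fit(X, y, centers, width=0.15)
pred = rbf_predict(X, centers, width=0.15, w=w)
```

In a reactor application the inputs would be operating variables and the targets measured biological rates; the fitting machinery is unchanged.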
Modeling social influence through network autocorrelation : constructing the weight matrix
Leenders, Roger Th. A. J.
Many physical and social phenomena are embedded within networks of interdependencies, the so-called 'context' of these phenomena. In network analysis, this type of process is typically modeled as a network autocorrelation model. Parameter estimates and inferences based on autocorrelation models,
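The network autocorrelation (spatial lag) model referred to above is usually written y = rho*W*y + X*beta + e, where W is the weight matrix whose construction the paper examines. A minimal sketch with a row-normalized W built from a hypothetical 4-actor friendship network:

```python
import numpy as np

def row_normalize(adjacency):
    """Row-normalised weight matrix W: each actor's incoming influences
    sum to one (one common construction; others are the paper's topic)."""
    W = np.asarray(adjacency, dtype=float)
    rows = W.sum(axis=1, keepdims=True)
    rows[rows == 0.0] = 1.0            # isolates keep an all-zero row
    return W / rows

def simulate_autocorrelation(W, X, beta, rho, noise):
    """Solve y = rho*W*y + X*beta + e as y = (I - rho*W)^{-1}(X*beta + e)."""
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - rho * W, X @ beta + noise)

# Hypothetical 4-actor friendship network
adj = [[0, 1, 1, 0],
       [1, 0, 0, 1],
       [1, 0, 0, 1],
       [0, 1, 1, 0]]
W = row_normalize(adj)
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(4), rng.normal(size=4)])
y = simulate_autocorrelation(W, X, beta=np.array([1.0, 0.5]),
                             rho=0.3, noise=np.zeros(4))
```

Because estimates of rho and beta depend on how W is specified, swapping `row_normalize` for another weighting scheme directly illustrates the sensitivity the paper is concerned with.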
A self-consistent model for the Galactic cosmic ray, antiproton and positron spectra
CERN. Geneva
2015-01-01
In this talk I will present the escape model of Galactic cosmic rays. This model explains the measured cosmic ray spectra of individual groups of nuclei from TeV to EeV energies. It predicts an early transition to extragalactic cosmic rays, in agreement with recent Auger data. The escape model also explains the soft neutrino spectrum 1/E^2.5 found by IceCube in concordance with Fermi gamma-ray data. I will show that within the same model one can explain the excess of positrons and antiprotons above 20 GeV found by PAMELA and AMS-02, the discrepancy in the slopes of the spectra of cosmic ray protons and heavier nuclei in the TeV-PeV energy range and the plateau in cosmic ray dipole anisotropy in the 2-50 TeV energy range by adding the effects of a 2 million year old nearby supernova.
Self-consistent semi-analytic models of the first stars
Visbal, Eli; Haiman, Zoltán; Bryan, Greg L.
2018-01-01
We have developed a semi-analytic framework to model the large-scale evolution of the first Population III (Pop III) stars and the transition to metal-enriched star formation. Our model follows dark matter halos from cosmological N-body simulations, utilizing their individual merger histories and three-dimensional positions, and applies physically motivated prescriptions for star formation and feedback from Lyman-Werner (LW) radiation, hydrogen ionizing radiation, and external metal enrichment due to supernovae winds. This method is intended to complement analytic studies, which do not include clustering or individual merger histories, and hydrodynamical cosmological simulations, which include detailed physics, but are computationally expensive and have limited dynamic range. Utilizing this technique, we compute the cumulative Pop III and metal-enriched star formation rate density (SFRD) as a function of redshift at z ≥ 20. We find that varying the model parameters leads to significant qualitative changes in the global star formation history. The Pop III star formation efficiency and the delay time between Pop III and subsequent metal-enriched star formation are found to have the largest impact. The effect of clustering (i.e. including the three-dimensional positions of individual halos) on various feedback mechanisms is also investigated. The impact of clustering on LW and ionization feedback is found to be relatively mild in our fiducial model, but can be larger if external metal enrichment can promote metal-enriched star formation over large distances.
Consistency Properties for Growth Model Parameters Under an Infill Asymptotics Domain
2010-09-01
Fields, B D; Olive, Keith A; Thomas, D; Fields, Brian D; Kainulainen, Kimmo; Olive, Keith A; Thomas, David
1996-01-01
We examine in detail how BBN theory is constrained, and what predictions it can make, when using only the most model-independent observational constraints. We avoid the uncertainties and model-dependencies that necessarily arise when solar neighborhood D and ³He abundances are used to infer primordial D and ³He via chemical and stellar evolution models. Instead, we use ⁴He and ⁷Li, thoroughly examining the effects of possible systematic errors in each. Via a likelihood analysis, we find near perfect agreement between BBN theory and the most model-independent data. Given this agreement, we then assume the correctness of BBN to set limits on the single parameter of standard BBN, the baryon-to-photon ratio, and to predict the primordial D and ³He abundances. We also repeat our analysis including recent measurements of D/H from quasar absorption systems and find that the near perfect agreement between theory and observation of the three isotopes, D, ⁴He and ⁷Li, is maintained. These results have stron...
Numerical modeling is the dominant method for quantifying water flow and the transport of dissolved constituents in surface soils as well as the deeper vadose zone. While the fundamental laws that govern the mechanics of the flow processes in terms of Richards' and convection-dispersion equations a...
Kou, Jisheng
2017-12-09
A general diffuse interface model with a realistic equation of state (e.g. the Peng-Robinson equation of state) is proposed to describe multi-component two-phase fluid flow based on the principles of the NVT-based framework, which has recently emerged as an attractive alternative to the NPT-based framework for modeling realistic fluids. The proposed model uses the Helmholtz free energy rather than the Gibbs free energy used in the NPT-based framework. Departing from the classical routines, we combine the first law of thermodynamics and related thermodynamical relations to derive the entropy balance equation, and then we derive a transport equation for the Helmholtz free energy density. Furthermore, by using the second law of thermodynamics, we derive a set of unified equations for both interfaces and bulk phases that can describe the partial miscibility of multiple fluids. A relation between the pressure gradient and chemical potential gradients is established, and this relation leads to a new formulation of the momentum balance equation, which demonstrates that chemical potential gradients become the primary driving force of fluid motion. Moreover, we prove that the proposed model satisfies total (free) energy dissipation with time. For numerical simulation of the proposed model, the key difficulties result from the strong nonlinearity of the Helmholtz free energy density and the tight coupling between molar densities and velocity. To resolve these problems, we propose a novel convex-concave splitting of the Helmholtz free energy density and handle the coupling between molar densities and velocity through careful physical observations combined with mathematical rigor. We prove that the proposed numerical scheme preserves the discrete (free) energy dissipation. Numerical tests are carried out to verify the effectiveness of the proposed method.
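The convex-concave splitting idea invoked above can be demonstrated on the simplest possible case: a pointwise gradient flow for a double-well energy f(c) = (c² - 1)²/4, which is a stand-in for the Helmholtz free energy density, not the paper's multi-component model. The convex part (c³) is treated implicitly and the concave part (-c) explicitly, which makes the discrete energy nonincreasing for any step size:

```python
import numpy as np

def F(c):
    """Double-well free energy density f(c) = (c^2 - 1)^2 / 4
    (an illustrative stand-in for the Helmholtz free energy)."""
    return 0.25 * (c ** 2 - 1.0) ** 2

def step(c, dt, newton_iters=50):
    """One convex-concave split step of dc/dt = -f'(c) = -(c^3 - c):
    c^3 implicit, -c explicit, i.e. solve dt*x^3 + x = (1 + dt)*c
    for x by Newton's method (the cubic is strictly increasing)."""
    rhs = (1.0 + dt) * c
    x = np.array(c, dtype=float)
    for _ in range(newton_iters):
        g = dt * x ** 3 + x - rhs
        x = x - g / (3.0 * dt * x ** 2 + 1.0)
    return x

c = np.array([0.1, -0.2, 0.6])
energies = [F(c).sum()]
for _ in range(40):
    c = step(c, dt=0.5)                # large step, still energy-stable
    energies.append(F(c).sum())
```

The same splitting strategy, applied to the full Helmholtz free energy with its spatial coupling terms, is what gives the authors' scheme its discrete energy-dissipation guarantee.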
Hsieh, Chih-Sheng; Lee, Lung fei
2017-01-01
In this paper, we model network formation and network interactions under a unified framework. The key feature of our model is to allow individuals to respond to incentives stemming from interaction benefits on certain activities when they choose friends (network links), while capturing homophily in terms of unobserved characteristic variables in network formation and activities. There are two advantages of this modeling approach: first, one can evaluate whether incentives from certain interac...
Formal derivation of qualitative dynamical models from biochemical networks.
Abou-Jaoudé, Wassim; Thieffry, Denis; Feret, Jérôme
2016-11-01
As technological advances allow a better identification of cellular networks, large-scale molecular data are swiftly produced, allowing the construction of large and detailed molecular interaction maps. One approach to unravel the dynamical properties of such complex systems consists in deriving coarse-grained dynamical models from these maps, which would make the salient properties emerge. We present here a method to automatically derive such models, relying on the abstract interpretation framework to formally relate model behaviour at different levels of description. We illustrate our approach on two relevant case studies: the formation of a complex involving a protein adaptor, and a race between two competing biochemical reactions. States and traces of reaction networks are first abstracted by sampling the number of instances of chemical species within a finite set of intervals. We show that the qualitative models induced by this abstraction are too coarse to reproduce properties of interest. We then refine our approach by taking into account additional constraints, the mass invariants and the limiting resources for interval crossing, and by introducing information on the reaction kinetics. The resulting qualitative models are able to capture sophisticated properties of interest, such as a sequestration effect, which arise in the case studies and, more generally, participate in shaping the dynamics of cell signaling and regulatory networks. Our methodology offers new trade-offs between complexity and accuracy, and clarifies the implicit assumptions made in the process of qualitative modelling of biological networks. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.
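The first abstraction step described above, mapping concrete species counts onto a finite set of intervals, can be sketched in a few lines. The species names and cut points are hypothetical:

```python
def abstract_state(counts, bounds):
    """Map concrete species counts to interval indices: `bounds` is a
    sorted list of cut points, and a count falls in the interval whose
    upper cut point first exceeds it (last interval is unbounded)."""
    def interval(n):
        for i, b in enumerate(bounds):
            if n < b:
                return i
        return len(bounds)
    return {species: interval(n) for species, n in counts.items()}

# Hypothetical counts abstracted over cut points 1, 10, 100:
# interval 0 = zero copies, 1 = [1,10), 2 = [10,100), 3 = [100, inf)
state = abstract_state({"A": 0, "B": 7, "C": 120}, bounds=[1, 10, 100])
```

A qualitative transition system is then built over such abstract states; the refinements the authors add (mass invariants, limiting resources, kinetic information) constrain which interval crossings are actually reachable.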
Signalling network construction for modelling plant defence response.
Directory of Open Access Journals (Sweden)
Dragana Miljkovic
Plant defence signalling response against various pathogens, including viruses, is a complex phenomenon. In a resistant interaction, a plant cell perceives the pathogen signal, transduces it within the cell and performs a reprogramming of the cell metabolism leading to arrest of pathogen replication. This work focuses on signalling pathways crucial for the plant defence response, i.e., the salicylic acid, jasmonic acid and ethylene signal transduction pathways, in the Arabidopsis thaliana model plant. The initial signalling network topology was constructed manually by defining the representation formalism, encoding the information from public databases and literature, and composing a pathway diagram. The manually constructed network structure consists of 175 components and 387 reactions. In order to complement the network topology with possibly missing relations, a new approach to automated information extraction from biological literature was developed. This approach, named Bio3graph, allows for automated extraction of biological relations from the literature, resulting in a set of (component1, reaction, component2) triplets and composing a graph structure which can be visualised, compared to the manually constructed topology and examined by the experts. Using a plant defence response vocabulary of components and reaction types, Bio3graph was applied to a set of 9,586 relevant full text articles, resulting in 137 newly detected reactions between the components. Finally, the manually constructed topology and the new reactions were merged to form a network structure consisting of 175 components and 524 reactions. The resulting pathway diagram of plant defence signalling represents a valuable source for further computational modelling and interpretation of omics data. The developed Bio3graph approach, implemented as an executable language processing and graph visualisation workflow, is publicly available at http://ropot.ijs.si/bio3graph/ and can be
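The merge of extracted (component1, reaction, component2) triplets into a manually curated topology, as described above, can be sketched with a simple edge-labelled graph. The component and reaction names below are hypothetical placeholders, not entries from the actual Arabidopsis network:

```python
from collections import defaultdict

def build_network(triplets):
    """Store (component1, reaction, component2) triplets as a directed,
    edge-labelled graph: source -> list of (reaction, target)."""
    graph = defaultdict(list)
    for c1, reaction, c2 in triplets:
        if (reaction, c2) not in graph[c1]:
            graph[c1].append((reaction, c2))
    return graph

def merge(manual, extracted):
    """Add automatically extracted triplets to a manually curated
    topology, counting only genuinely new reactions."""
    merged = {k: list(v) for k, v in manual.items()}
    new = 0
    for c1, edges in extracted.items():
        for edge in edges:
            if edge not in merged.setdefault(c1, []):
                merged[c1].append(edge)
                new += 1
    return merged, new

# Hypothetical triplets in the (component1, reaction, component2) form
manual = build_network([("SA", "activates", "NPR1"),
                        ("NPR1", "activates", "PR1")])
extracted = build_network([("SA", "activates", "NPR1"),   # already known
                           ("JA", "inhibits", "SA")])     # newly detected
merged, n_new = merge(manual, extracted)
```

At the paper's scale, the same deduplicating merge takes the 387 manual reactions plus 137 newly detected ones to the final 524-reaction network.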
Metamaterial Perfect Absorber Analyzed by a Meta-cavity Model Consisting of Multilayer Metasurfaces.
Bhattarai, Khagendra; Silva, Sinhara; Song, Kun; Urbas, Augustine; Lee, Sang Jun; Ku, Zahyun; Zhou, Jiangfeng
2017-09-05
We demonstrate that the metamaterial perfect absorber behaves as a meta-cavity bounded between a resonant metasurface and a metallic thin-film reflector. The perfect absorption is achieved by the Fabry-Perot cavity resonance via multiple reflections between the "quasi-open" boundary of resonator and the "close" boundary of reflector. The characteristic features including angle independence, ultra-thin thickness and strong field localization can be well explained by this meta-cavity model. With this model, metamaterial perfect absorber can be redefined as a meta-cavity exhibiting high Q-factor, strong field enhancement and extremely high photonic density of states, thereby promising novel applications for high performance sensor, infrared photodetector and cavity quantum electrodynamics devices.
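The multiple-reflection picture above can be made quantitative with the standard Airy sum for a two-mirror cavity. The coefficients below are illustrative assumptions (a lossy front metasurface, a slightly lossy metal back reflector, and the lossless Stokes relation r' = -r for the inside-face reflection), not values fitted to the paper's structure:

```python
import numpy as np

def cavity_absorption(r1, t1, r2, phase):
    """Absorption of a metasurface + metal-reflector cavity from the
    multiple-reflection (Airy) sum.  r1/t1: metasurface reflection and
    transmission; r2: back reflector; phase: one-way propagation phase
    across the spacer.  Assumes the lossless Stokes relation r1' = -r1
    for reflection off the metasurface's inner face (an approximation)."""
    rt = np.exp(2j * phase)            # round-trip propagation factor
    r_tot = r1 + (t1 * t1 * r2 * rt) / (1.0 - (-r1) * r2 * rt)
    return 1.0 - np.abs(r_tot) ** 2    # metal back mirror: no transmission

phases = np.linspace(0.0, np.pi, 721)
# |r1|^2 + |t1|^2 < 1 encodes loss in the metasurface; r2 = -0.98 is a
# near-perfect metal mirror with a pi reflection phase (all assumed).
A = cavity_absorption(r1=0.6, t1=0.7, r2=-0.98, phase=phases)
```

Absorption peaks at the cavity resonance and drops between resonances, which is the Fabry-Perot behaviour the meta-cavity model attributes to the perfect absorber; pushing the peak to unity is then a matter of tuning the mirror coefficients (critical coupling).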
Ma, Qiang; Cheng, Huanyu; Jang, Kyung-In; Luan, Haiwen; Hwang, Keh-Chih; Rogers, John A; Huang, Yonggang; Zhang, Yihui
2016-05-01
Development of advanced synthetic materials that can mimic the mechanical properties of non-mineralized soft biological materials has important implications in a wide range of technologies. Hierarchical lattice materials constructed with horseshoe microstructures belong to this class of bio-inspired synthetic materials, where the mechanical responses can be tailored to match the nonlinear J-shaped stress-strain curves of human skins. The underlying relations between the J-shaped stress-strain curves and their microstructure geometry are essential in designing such systems for targeted applications. Here, a theoretical model of this type of hierarchical lattice material is developed by combining a finite deformation constitutive relation of the building block (i.e., horseshoe microstructure), with the analyses of equilibrium and deformation compatibility in the periodical lattices. The nonlinear J-shaped stress-strain curves and Poisson ratios predicted by this model agree very well with results of finite element analyses (FEA) and experiment. Based on this model, analytic solutions were obtained for some key mechanical quantities, e.g., elastic modulus, Poisson ratio, peak modulus, and critical strain around which the tangent modulus increases rapidly. A negative Poisson effect is revealed in the hierarchical lattice with triangular topology, as opposed to a positive Poisson effect in hierarchical lattices with Kagome and honeycomb topologies. The lattice topology is also found to have a strong influence on the stress-strain curve. For the three isotropic lattice topologies (triangular, Kagome and honeycomb), the hierarchical triangular lattice material renders the sharpest transition in the stress-strain curve and relative high stretchability, given the same porosity and arc angle of horseshoe microstructure. Furthermore, a demonstrative example illustrates the utility of the developed model in the rapid optimization of hierarchical lattice materials for
Flood damage: a model for consistent, complete and multipurpose scenarios
Directory of Open Access Journals (Sweden)
S. Menoni
2016-12-01
implemented in ex post damage assessments, also with the objective of better programming financial resources that will be needed for these types of events in the future. On the other hand, integrated interpretations of flood events are fundamental to adapting and optimizing flood mitigation strategies on the basis of thorough forensic investigation of each event, as corroborated by the implementation of the model in a case study.
A Mind/Brain/Matter Model Consistent with Quantum Physics and UFO phenomena
1979-01-01
realities of a second type (E.P. Wigner, "Two Kinds of Reality," The Monist, Vol. 48, No. 2, April 1964). Note that the model being advanced by the...biological organism, including egos of "dead" biosystems. Note also that the wave-packet reduction (collapse of the wave function) is not a relativistically...new fourth law of logic, which is briefly described and summarized. A new photon interaction model of quantized observable change is also presented
Do we really use rainfall observations consistent with reality in hydrological modelling?
Ciampalini, Rossano; Follain, Stéphane; Raclot, Damien; Crabit, Armand; Pastor, Amandine; Moussa, Roger; Le Bissonnais, Yves
2017-04-01
Spatial and temporal patterns in rainfall control how water reaches the soil surface and interacts with soil properties (i.e., soil wetting, infiltration, saturation). Once a hydrological event is defined by a rainfall with its spatiotemporal variability and by environmental parameters such as soil properties (including land use, topographic and anthropic features), the evidence shows that each parameter variation produces different, specific outputs (e.g., runoff, flooding, etc.). In this study, we focus on the effect of rainfall patterns because, owing to the difficulty of obtaining detailed data, their influence in modelling is frequently underestimated or neglected. A rainfall event affects a catchment non-uniformly: it is spatially localized and its pattern moves in space and time. How and when the water reaches the soil and saturates it, relative to the geometry of the catchment, deeply influences soil saturation, runoff, and hence sediment delivery. This research, approaching a hypothetical, simple case, aims to stimulate the debate on the reliability of the rainfall data used in hydrological / soil erosion modelling. On a small catchment in the south of France (Roujan, Languedoc-Roussillon), we test the influence of rainfall variability using a high-definition hybrid hydrological - soil erosion model, combining a kinematic wave with the Saint-Venant equation and a simplified "bucket" conceptual model for groundwater, able to quantify the effect of different spatiotemporal patterns of a very-high-definition synthetic rainfall. Results indicate that rainfall spatiotemporal patterns are crucial in simulating an erosive event: differences between spatially uniform rainfall, as frequently adopted in simulations, and some hypothetical rainfall patterns applied here reveal that the outcome of a simulated event can be highly underestimated.
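The sensitivity to rainfall temporal pattern discussed above shows up even in the simplest "bucket" conceptual model: the same rainfall depth delivered uniformly or as a short burst produces very different runoff peaks. All parameters below are illustrative, not calibrated to the Roujan catchment:

```python
def bucket_runoff(rain, capacity=50.0, k=0.1, infilt=5.0):
    """Toy 'bucket' runoff model: rainfall infiltrates up to `infilt`
    per step into a store of size `capacity`; the store drains linearly
    at rate k; rainfall beyond the infiltration or storage limit becomes
    direct runoff.  All parameters are illustrative assumptions."""
    store, runoff = 0.0, []
    for r in rain:
        into = min(r, infilt, capacity - store)
        store += into
        q = store * k                   # slow drainage contributes to flow
        store -= q
        runoff.append((r - into) + q)   # excess rainfall + drainage
    return runoff

steps = 40
total = 80.0
uniform = [total / steps] * steps               # 2 mm every step
burst = [20.0] * 4 + [0.0] * (steps - 4)        # same depth, short burst

q_uniform = bucket_runoff(uniform)
q_burst = bucket_runoff(burst)
```

Under the uniform pattern all rainfall infiltrates and the hydrograph stays low, while the burst exceeds the infiltration capacity and generates a sharp runoff peak: exactly the kind of difference that a spatially and temporally uniform rainfall assumption hides.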
A consistent model for leptogenesis, dark matter and the IceCube signal
Energy Technology Data Exchange (ETDEWEB)
Fiorentin, M. Re [School of Physics and Astronomy, University of Southampton,SO17 1BJ Southampton (United Kingdom); Niro, V. [Departamento de Física Teórica, Universidad Autónoma de Madrid,Cantoblanco, E-28049 Madrid (Spain); Instituto de Física Teórica UAM/CSIC,Calle Nicolás Cabrera 13-15, Cantoblanco, E-28049 Madrid (Spain); Fornengo, N. [Dipartimento di Fisica, Università di Torino,via P. Giuria, 1, 10125 Torino (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Torino,via P. Giuria, 1, 10125 Torino (Italy)
2016-11-04
We discuss a left-right symmetric extension of the Standard Model in which the three additional right-handed neutrinos play a central role in explaining the baryon asymmetry of the Universe, the dark matter abundance and the ultra energetic signal detected by the IceCube experiment. The energy spectrum and neutrino flux measured by IceCube are ascribed to the decays of the lightest right-handed neutrino N₁, thus fixing its mass and lifetime, while the production of N₁ in the primordial thermal bath occurs via a freeze-in mechanism driven by the additional SU(2)_R interactions. The constraints imposed by IceCube and the dark matter abundance allow nonetheless the heavier right-handed neutrinos to realize a standard type-I seesaw leptogenesis, with the B−L asymmetry dominantly produced by the next-to-lightest neutrino N₂. Further consequences and predictions of the model are that: the N₁ production implies a specific power-law relation between the reheating temperature of the Universe and the vacuum expectation value of the SU(2)_R triplet; leptogenesis imposes a lower bound on the reheating temperature of the Universe at 7×10⁹ GeV. Additionally, the model requires a vanishing absolute neutrino mass scale m₁ ≃ 0.
Jha, Sanjeev Kumar
2013-01-01
A downscaling approach based on multiple-point geostatistics (MPS) is presented. The key concept underlying MPS is to sample spatial patterns from within training images, which can then be used in characterizing the relationship between different variables across multiple scales. The approach is used here to downscale climate variables including skin surface temperature (TSK), soil moisture (SMOIS), and latent heat flux (LH). The performance of the approach is assessed by applying it to data derived from a regional climate model of the Murray-Darling basin in southeast Australia, using model outputs at two spatial resolutions of 50 and 10 km. The data used in this study cover the period from 1985 to 2006, with 1985 to 2005 used for generating the training images that define the relationships of the variables across the different spatial scales. Subsequently, the spatial distributions for the variables in the year 2006 are determined at 10 km resolution using the 50 km resolution data as input. The MPS geostatistical downscaling approach reproduces the spatial distribution of TSK, SMOIS, and LH at 10 km resolution with the correct spatial patterns over different seasons, while providing uncertainty estimates through the use of multiple realizations. The technique has the potential to not only bridge issues of spatial resolution in regional and global climate model simulations but also in feature sharpening in remote sensing applications through image fusion, filling gaps in spatial data, evaluating downscaled variables with available remote sensing images, and aggregating/disaggregating hydrological and groundwater variables for catchment studies.
Consistent negative response of US crops to high temperatures in observations and crop models
Schauberger, Bernhard; Archontoulis, Sotirios; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Elliott, Joshua; Folberth, Christian; Khabarov, Nikolay; Müller, Christoph; Pugh, Thomas A. M.; Rolinski, Susanne; Schaphoff, Sibyll; Schmid, Erwin; Wang, Xuhui; Schlenker, Wolfram; Frieler, Katja
2017-04-01
High temperatures are detrimental to crop yields and could lead to global warming-driven reductions in agricultural productivity. To assess future threats, the majority of studies used process-based crop models, but their ability to represent effects of high temperature has been questioned. Here we show that an ensemble of nine crop models reproduces the observed average temperature responses of US maize, soybean and wheat yields. Each day above 30°C diminishes maize and soybean yields by up to 6% under rainfed conditions. Declines observed in irrigated areas, or simulated assuming full irrigation, are weak. This supports the hypothesis that water stress induced by high temperatures causes the decline. For wheat a negative response to high temperature is neither observed nor simulated under historical conditions, since critical temperatures are rarely exceeded during the growing season. In the future, yields are modelled to decline for all three crops at temperatures above 30°C. Elevated CO2 can only weakly reduce these yield losses, in contrast to irrigation.
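The headline temperature response above (each day above 30°C costing up to 6% of rainfed maize/soybean yield) can be turned into a simple degree-day calculation. The multiplicative loss form and the sample temperature series are illustrative assumptions; the abstract states the per-day loss but not how losses compound:

```python
def yield_loss(daily_tmax, threshold=30.0, loss_per_day=0.06):
    """Relative yield after applying a fixed fractional loss for each day
    with maximum temperature above `threshold` (the abstract's 'up to 6%
    per day above 30 degC' for rainfed maize/soybean; the multiplicative
    compounding used here is an illustrative assumption)."""
    hot_days = sum(1 for t in daily_tmax if t > threshold)
    return (1.0 - loss_per_day) ** hot_days

# Hypothetical daily maximum temperatures (degC) for two short windows
cool_season = [24.0, 27.0, 29.5, 28.0, 26.0]
hot_season = [31.0, 33.5, 29.0, 32.0, 34.0]

relative_cool = yield_loss(cool_season)   # no days above threshold
relative_hot = yield_loss(hot_season)     # four days above threshold
```

The same bookkeeping applied to a full growing season is how statistical panel studies quantify the observed response that the nine-model crop ensemble reproduces.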
Challenges on Probabilistic Modeling for Evolving Networks
Ding, Jianguo; Bouvry, Pascal
2013-01-01
With the emergence of new networks, such as wireless sensor networks, vehicle networks, P2P networks, cloud computing, mobile Internet, and social networks, network dynamics and complexity expand from system design, hardware, software, protocols, structures, integration, evolution, and application to business goals. Dynamics and uncertainty are thus unavoidable characteristics, arising from regular network evolution as well as unexpected hardware defects, unavoidable software errors,...
Georgiou, George K; Aro, Mikko; Liao, Chen-Huei; Parrila, Rauno
2016-03-01
The purpose of this study was twofold: (a) to contrast the prominent theoretical explanations of the rapid automatized naming (RAN)-reading relationship across languages varying in orthographic consistency (Chinese, English, and Finnish) and (b) to examine whether the same accounts can explain the RAN-spelling relationship. In total, 304 Grade 4 children (102 Chinese-speaking Taiwanese children, 117 English-speaking Canadian children, and 85 Finnish-speaking children) were assessed on measures of RAN, speed of processing, phonological processing, orthographic processing, reading fluency, and spelling. The results of path analysis indicated that RAN had a strong direct effect on reading fluency that was of the same size across languages and that only in English was a small proportion of its predictive variance mediated by orthographic processing. In contrast, RAN did not exert a significant direct effect on spelling, and a substantial proportion of its predictive variance was mediated by phonological processing (in Chinese and Finnish) and orthographic processing (in English). Given that RAN predicted reading fluency equally well across languages and that phonological/orthographic processing had very little to do with this relationship, we argue that the reason why RAN is related to reading fluency should be sought in domain-general factors such as serial processing and articulation.
Cross-language consistency of the Comprehensive Assessment of Psychopathic Personality (CAPP) model.
Hoff, Helge Andreas; Rypdal, Knut; Hystad, Sigurd W; Hart, Stephen D; Mykletun, Arnstein; Kreis, Mette K F; Cooke, David J
2014-10-01
This study is the first to our knowledge to examine the cross-language consistency across the original version of the Comprehensive Assessment of Psychopathy (CAPP) and a translated version. The CAPP is a lexically based construct map of psychopathy comprising 33 symptoms from 6 broad domains of personality functioning. English-language CAPP prototypicality ratings from 124 mental health workers were compared with ratings from 211 Norwegian mental health workers using the Norwegian translation. High agreement was found across languages in regard to which symptoms were perceived as central to psychopathy or not. Multigroup confirmatory factor analyses (MGCFA) indicated that, overall, the symptoms had similar associations with the 6 proposed underlying dimensions across the 2 language versions. Finally, in general, the probability for a given prototypicality rating on an individual symptom was similar across language version samples at the same level of the underlying trait, as analyzed with Item Response Theory (IRT). Together these findings lend support to the validity of the construct of psychopathy, the validity of the CAPP as a concept map of psychopathy, and the validity of the Norwegian translation of the CAPP.
Dannberg, J.; Sobolev, S. V.
2013-12-01
According to widely accepted models, plumes ascend from the core-mantle boundary and cause massive melting when they reach the base of the lithosphere. Most of these models consider plumes as purely thermal and predict flattening of the plume head to a disk-like structure, thin plume tails with a radius on the scale of 100 km and kilometer-scale topographic uplift before and during the eruption of flood basalts. However, several paleogeographic and paleotectonic field studies indicate significantly smaller surface uplift during the development of many LIPs, and seismic imaging reveals thicker plume tails as well as a more complex plume structure in the upper mantle including broad low-velocity anomalies up to 400 km depth and elongated low-velocity fingers. Moreover, geochemical data indicate a plume composition that differs from that of the average mantle and recent geodynamic models of plumes in the upper mantle show that plumes containing a large fraction of eclogite and therefore having very low buoyancy can explain the observations much better. Nevertheless, the question remains how such a low-buoyancy plume can rise through the whole mantle and how this ascent affects its dynamics. We perform numerical experiments in 2D axisymmetric geometry to systematically study the dynamics of the plume ascent as well as 2D and 3D models with prescribed velocity at the upper boundary to investigate the interaction between plume- and plate-driven flow. For that purpose, we use modified versions of the finite-element codes Citcom and Aspect. Our models employ complex material properties incorporating phase transitions with the accompanying density changes, Clapeyron slopes and latent heat effects for the peridotite and eclogite phase, mantle compressibility and a highly temperature- and depth-dependent viscosity. We study under which conditions (excess temperature, plume volume and eclogite content) thermo-chemical plumes can ascend through the whole mantle and what
DEFF Research Database (Denmark)
Staunstrup, Jørgen
1998-01-01
This paper proposes that Interface Consistency is an important issue for the development of modular designs. By providing a precise specification of component interfaces it becomes possible to check that separately developed components use a common interface in a coherent manner, thus avoiding a very...
Aeronautical telecommunications network advances, challenges, and modeling
Musa, Sarhan M
2015-01-01
Addresses the Challenges of Modern-Day Air Traffic Air traffic control (ATC) directs aircraft in the sky and on the ground to safety, while the Aeronautical Telecommunications Network (ATN) comprises all systems and phases that assist in aircraft departure and landing. The Aeronautical Telecommunications Network: Advances, Challenges, and Modeling focuses on the development of ATN and examines the role of the various systems that link aircraft with the ground. The book places special emphasis on ATC, introducing the modern ATC system from the perspective of the user and the developer, and provides a thorough understanding of the operating mechanism of the ATC system. It discusses the evolution of ATC, explaining its structure and how it works; includes design examples; and describes all subsystems of the ATC system. In addition, the book covers relevant tools, techniques, protocols, and architectures in ATN, including MIPv6, air traffic control (ATC), security of air traffic management (ATM), very-high-frequenc...
Neural Network Program Package for Prosody Modeling
Directory of Open Access Journals (Sweden)
J. Santarius
2004-04-01
This contribution describes the program for one part of automatic Text-to-Speech (TTS) synthesis. Some experiments (for example [14]) documented a considerable improvement in the naturalness of synthetic speech, but this approach requires completing the input feature values by hand, which takes a lot of time for big files. We need to improve the prosody by other approaches which use only automatically classified features (input parameters). The artificial neural network (ANN) approach is used for the modeling of prosody parameters. The program package contains all modules necessary for text and speech signal pre-processing, neural network training, sensitivity analysis, result processing, and a module for the creation of the input data protocol for the Czech speech synthesizer ARTIC [1].
Khan, A.; Belluzzi, L.; Landi Degl'Innocenti, E.; Fineschi, S.; Romoli, M.
2011-05-01
Context. The presence and importance of the coronal magnetic field is illustrated by a wide range of phenomena, such as the abnormally high temperatures of the coronal plasma, the existence of a slow and fast solar wind, and the triggering of explosive events such as flares and CMEs. Aims: We investigate the possibility of using the Hanle effect to diagnose the coronal magnetic field by analysing its influence on the linear polarisation, i.e. the rotation of the plane of polarisation and depolarisation. Methods: We analyse the polarisation characteristics of the first three lines of the hydrogen Lyman-series using an axisymmetric, self-consistent, minimum-corona MHD model with relatively low values of the magnetic field (a few Gauss). Results: We find that the Hanle effect in the above-mentioned lines indeed seems to be a valuable tool for analysing the coronal magnetic field. However, great care must be taken when analysing the spectropolarimetry of the Lα line, given that a non-radial solar wind and active regions on the solar disk can mimic the effects of the magnetic field, and, in some cases, even mask them. Similar drawbacks are not found for the Lβ and Lγ lines because they are more sensitive to the magnetic field. We also briefly consider the instrumental requirements needed to perform polarimetric observations for diagnosing the coronal magnetic fields. Conclusions: The combined analysis of the three aforementioned lines could provide an important step towards better constraining the value of solar coronal magnetic fields.
A self-consistent 3D model of fluctuations in the helium-ionizing background
Davies, Frederick B.; Furlanetto, Steven R.; Dixon, Keri L.
2017-03-01
Large variations in the effective optical depth of the He II Lyα forest have been observed at z ≳ 2.7, but the physical nature of these variations is uncertain: either the Universe is still undergoing the process of He II reionization, or the Universe is highly ionized but the He II-ionizing background fluctuates significantly on large scales. In an effort to build upon our understanding of the latter scenario, we present a novel model for the evolution of ionizing background fluctuations. Previous models have assumed the mean free path of ionizing photons to be spatially uniform, ignoring the dependence of that scale on the local ionization state of the intergalactic medium (IGM). This assumption is reasonable when the mean free path is large compared to the average distance between the primary sources of He II-ionizing photons, ≳ L⋆ quasars. However, when this is no longer the case, the background fluctuations become more severe, and an accurate description of the average propagation of ionizing photons through the IGM requires additionally accounting for the fluctuations in opacity. We demonstrate the importance of this effect by constructing 3D semi-analytic models of the helium-ionizing background from z = 2.5-3.5 that explicitly include a spatially varying mean free path of ionizing photons. The resulting distribution of effective optical depths at large scales in the He II Lyα forest is very similar to the latest observations with HST/COS at 2.5 ≲ z ≲ 3.5.
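The core feedback described above (the mean free path of ionizing photons depends on the local ionizing background, which in turn depends on the mean free path) can be sketched as a fixed-point iteration. The power-law scaling of the mean free path with the photoionization rate and all parameter values are illustrative assumptions, not the paper's calibration.

```python
def self_consistent_gamma(emissivity, gamma0=1.0, exponent=2.0 / 3.0,
                          tol=1e-10, max_iter=1000):
    """Iterate Gamma = emissivity * mfp(Gamma), with the assumed scaling
    mfp proportional to Gamma**exponent, until self-consistent.

    A spatially uniform mean free path corresponds to skipping this
    iteration; coupling the two quantities amplifies fluctuations, as
    the abstract describes.
    """
    gamma = gamma0
    for _ in range(max_iter):
        new = emissivity * gamma ** exponent
        if abs(new - gamma) < tol:
            return new
        gamma = new
    return gamma
```

With the assumed exponent 2/3, the fixed point is emissivity cubed, so modest regional differences in source density translate into much larger differences in the local background.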
Towards an evolutionary model of transcription networks.
Directory of Open Access Journals (Sweden)
Dan Xie
2011-06-01
DNA evolution models made invaluable contributions to comparative genomics, although it seemed formidable to include non-genomic features in these models. In order to build an evolutionary model of transcription networks (TNs), we had to forfeit the substitution model used in DNA evolution and start from modeling the evolution of the regulatory relationships. We present a quantitative evolutionary model of TNs, subjecting the phylogenetic distance and the evolutionary changes of cis-regulatory sequence, gene expression and network structure to one probabilistic framework. Using the genome sequences and gene expression data from multiple species, this model can predict regulatory relationships between a transcription factor (TF) and its target genes in all species, and thus identify TN re-wiring events. Applying this model to analyze the pre-implantation development of three mammalian species, we identified the conserved and re-wired components of the TNs downstream of a set of TFs including Oct4, Gata3/4/6, cMyc and nMyc. Evolutionary events on the DNA sequence that led to turnover of TF binding sites were identified, including the birth of an Oct4 binding site by a 2nt deletion. In contrast to recent reports of large interspecies differences in TF binding sites and gene expression patterns, the interspecies difference in TF-target relationships is much smaller. The data showed increasing conservation levels from genomic sequences to TF-DNA interaction, gene expression, TN, and finally to morphology, suggesting that evolutionary changes are larger at molecular levels and smaller at functional levels. The data also showed that evolutionarily older TFs are more likely to have conserved target genes, whereas younger TFs tend to have larger re-wiring rates.
Contributions and challenges for network models in cognitive neuroscience.
Sporns, Olaf
2014-05-01
The confluence of new approaches in recording patterns of brain connectivity and quantitative analytic tools from network science has opened new avenues toward understanding the organization and function of brain networks. Descriptive network models of brain structural and functional connectivity have made several important contributions; for example, in the mapping of putative network hubs and network communities. Building on the importance of anatomical and functional interactions, network models have provided insight into the basic structures and mechanisms that enable integrative neural processes. Network models have also been instrumental in understanding the role of structural brain networks in generating spatially and temporally organized brain activity. Despite these contributions, network models are subject to limitations in methodology and interpretation, and they face many challenges as brain connectivity data sets continue to increase in detail and complexity.
Modeling of regional warehouse network generation
Directory of Open Access Journals (Sweden)
Popov Pavel Vladimirovich
2016-08-01
One of the factors that has a significant impact on the socio-economic development of the Russian Federation's regions is the logistics infrastructure. It provides integrated transportation and distribution service of material flows. One of the main elements of logistics infrastructure is the storage infrastructure, which includes distribution centers, distribution-and-sort-out and sort-out warehouses. It is most expedient to place a distribution center in the vicinity of the regional center. One of the tasks in creating a distribution network within the regions of the Russian Federation is to determine the location, capacity and number of warehouses. When determining the regional network location of general-purpose warehouses, methodological approaches to facility-location problems for production and non-production facilities, which depend on various economic factors, can be used. The mathematical models for solving the relevant problems are deployment models. However, the existing models treat warehouse capacity as dimensionless. The purpose of the given work is to develop a model to determine the optimal location of general-purpose warehouses across the territory of the Russian Federation. At the first stage of the work, the authors assess the main economic indicators influencing the choice of the location of general-purpose warehouses. An algorithm for solving the first stage, based on ABC, discriminant and cluster analysis, was proposed by the authors in earlier papers. At the second stage, the specific locations of general-purpose warehouses and their capacities are chosen to minimize the cost of the construction and subsequent maintenance of warehouses and the transportation of heterogeneous products. In order to solve this problem the authors developed a mathematical model that takes into account the possibility of delivering heterogeneous goods from suppliers and manufacturers to distribution and sort-out warehouses with a specified set of capacities. The model allows
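The second-stage problem described above (choosing warehouse sites to minimize construction-plus-maintenance cost and transportation cost) resembles a classical facility-location problem. A brute-force sketch follows; the cost structure, the rule that each customer is served entirely by its cheapest open site, and all names are illustrative assumptions, not the authors' model.

```python
from itertools import combinations

def best_warehouses(fixed_cost, transport, demand, max_open=None):
    """Pick the subset of candidate sites minimizing fixed + transport cost.

    fixed_cost: {site: construction/maintenance cost}
    transport:  {(site, customer): unit shipping cost}
    demand:     {customer: demand volume}
    Each customer is served entirely by its cheapest open site.
    """
    sites = list(fixed_cost)
    max_open = max_open or len(sites)
    best = (float("inf"), ())
    for r in range(1, max_open + 1):
        for subset in combinations(sites, r):
            cost = sum(fixed_cost[s] for s in subset)
            cost += sum(min(transport[s, c] for s in subset) * d
                        for c, d in demand.items())
            best = min(best, (cost, subset))
    return best
```

Brute force is only viable for a handful of candidate sites; the paper's model would be solved with integer programming at realistic scale.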
A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling
Shapiro, B.; Jin, Q.
2015-12-01
Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires a prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration, and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri, and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted well the observations of previous experiments. In comparison, traditional methods of dynamic-FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
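The revised Monod equation mentioned above couples a kinetic saturation term with a thermodynamic factor. The functional form below follows published thermodynamically consistent rate laws of this general type (a Monod term times a factor that vanishes as the reaction energy barely covers ATP synthesis); all parameter values and names are illustrative, not the authors' calibration for Methanosarcina barkeri.

```python
import math

R = 8.314  # gas constant, J / (mol K)

def revised_monod_rate(S, k_max=1.0, K_S=0.1, dG_rxn=-60e3,
                       dG_ATP=45e3, m=0.5, chi=2, T=298.15):
    """Respiration rate combining Monod kinetics with thermodynamics.

    S: substrate concentration; dG_rxn: catabolic reaction energy (J/mol);
    m * dG_ATP: energy conserved per reaction; chi: average stoichiometry.
    The factor F_T -> 1 far from equilibrium and -> 0 as the available
    energy approaches the conserved energy, shutting the reaction off.
    """
    kinetic = S / (K_S + S)
    F_T = 1.0 - math.exp((dG_rxn + m * dG_ATP) / (chi * R * T))
    return k_max * kinetic * max(F_T, 0.0)
```

In an FBA coupling like the one proposed, this rate would set the substrate uptake bound handed to the genome-scale model.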
Directory of Open Access Journals (Sweden)
Laura Louise Scott
2017-12-01
Full Text Available Although cyanobacterial β-N-methylamino-l-alanine (BMAA has been implicated in the development of Alzheimer’s Disease (AD, Parkinson’s Disease (PD and Amyotrophic Lateral Sclerosis (ALS, no BMAA animal model has reproduced all the neuropathology typically associated with these neurodegenerative diseases. We present here a neonatal BMAA model that causes β-amyloid deposition, neurofibrillary tangles of hyper-phosphorylated tau, TDP-43 inclusions, Lewy bodies, microbleeds and microgliosis as well as severe neuronal loss in the hippocampus, striatum, substantia nigra pars compacta, and ventral horn of the spinal cord in rats following a single BMAA exposure. We also report here that BMAA exposure on particularly PND3, but also PND4 and 5, the critical period of neurogenesis in the rodent brain, is substantially more toxic than exposure to BMAA on G14, PND6, 7 and 10 which suggests that BMAA could potentially interfere with neonatal neurogenesis in rats. The observed selective toxicity of BMAA during neurogenesis and, in particular, the observed pattern of neuronal loss observed in BMAA-exposed rats suggest that BMAA elicits its effect by altering dopamine and/or serotonin signaling in rats.
Bayesian Recurrent Neural Network for Language Modeling.
Chien, Jen-Tzung; Ku, Yuan-Chu
2016-02-01
A language model (LM) assigns a probability to a word sequence, providing the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and apply it for continuous speech recognition. We aim to penalize the too complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to a Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
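The regularized cross-entropy objective described above is maximum a posteriori (MAP) estimation under a zero-mean Gaussian prior, i.e. cross-entropy plus an L2 penalty on the parameters. A minimal sketch of the objective (the flat parameter layout and names are illustrative, not the paper's RNN):

```python
import math

def regularized_cross_entropy(target_probs, params, lam=0.1):
    """Cross-entropy of the correct-word probabilities plus a Gaussian
    (L2) prior penalty, as in MAP estimation of an RNN-LM.

    target_probs: model probability assigned to each observed word
    params: flat list of model parameters
    lam: precision of the zero-mean Gaussian prior
    """
    xent = -sum(math.log(p) for p in target_probs)
    prior = 0.5 * lam * sum(w * w for w in params)
    return xent + prior
```

Maximizing the marginal likelihood over `lam`, as the abstract describes, tunes how strongly large weights are penalized.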
Optimizing neural network models: motivation and case studies
Harp, S A; T. Samad
2012-01-01
Practical successes have been achieved with neural network models in a variety of domains, including energy-related industry. The large, complex design space presented by neural networks is only minimally explored in current practice. The satisfactory results that nevertheless have been obtained testify that neural networks are a robust modeling technology; at the same time, however, the lack of a systematic design approach implies that the best neural network models generally rem...
Comprehensive Weighted Clique Degree Ranking Algorithms and Evolutionary Model of Complex Network
Directory of Open Access Journals (Sweden)
Xu Jie
2016-01-01
This paper analyses the degree ranking (DR) algorithm and proposes a new comprehensive weighted clique degree ranking (CWCDR) algorithm for ranking the importance of nodes in complex networks. Simulation results show that the CWCDR algorithm not only overcomes the limitations of the degree ranking algorithm, but also finds important nodes in complex networks more precisely and effectively. To address the shortcomings of the small-world model and the BA model, this paper proposes an evolutionary model of complex networks based on the CWCDR algorithm, named the CWCDR model. Simulation results show that the CWCDR model accords with a power-law distribution. Compared with the BA model, it has a better average shortest path length and clustering coefficient. Therefore, the CWCDR model is more consistent with real networks.
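The abstract does not spell out the CWCDR weighting, so the following is an illustrative stand-in rather than the paper's algorithm: score each node by its degree plus a weighted count of the 3-cliques (triangles) it participates in, so that nodes embedded in dense clique structure outrank plain high-degree hubs.

```python
from itertools import combinations

def clique_degree_rank(adj, w_deg=1.0, w_tri=1.0):
    """Rank nodes by degree plus a weighted count of the triangles
    (3-cliques) they participate in. adj: {node: set(neighbours)}.

    The exact CWCDR weighting is not given in the abstract; this
    degree-plus-clique score is an illustrative stand-in.
    """
    score = {}
    for v, nbrs in adj.items():
        tri = sum(1 for a, b in combinations(sorted(nbrs), 2)
                  if b in adj[a])
        score[v] = w_deg * len(nbrs) + w_tri * tri
    return sorted(score, key=score.get, reverse=True)
```

On a triangle a-b-c with a pendant node d attached to a, node a ranks first (highest degree plus a triangle) and d last.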
Requirements for data integration platforms in biomedical research networks: a reference model.
Ganzinger, Matthias; Knaup, Petra
2015-01-01
Biomedical research networks need to integrate research data among their members and with external partners. To support such data sharing activities, an adequate information technology infrastructure is necessary. To facilitate the establishment of such an infrastructure, we developed a reference model for the requirements. The reference model consists of five reference goals and 15 reference requirements. Using the Unified Modeling Language, the goals and requirements are related to one another. In addition, all goals and requirements are described textually in tables. This reference model can be used by research networks as a basis for a resource-efficient acquisition of their project-specific requirements. Furthermore, a concrete instance of the reference model is described for a research network on liver cancer. The reference model is transferred into a requirements model of the specific network. Based on this concrete requirements model, a service-oriented information technology architecture is derived and also described in this paper.
Inferring gene regression networks with model trees
Directory of Open Access Journals (Sweden)
Aguilar-Ruiz Jesus S
2010-10-01
Background: Novel strategies are required in order to handle the huge amount of data produced by microarray technologies. To infer gene regulatory networks, the first step is to find direct regulatory relationships between genes by building so-called gene co-expression networks. They are typically generated using correlation statistics as pairwise similarity measures. Correlation-based methods are very useful for determining whether two genes have a strong global similarity, but they do not detect local similarities. Results: We propose model trees as a method to identify gene interaction networks. While correlation-based methods analyze each pair of genes, in our approach we generate a single regression tree for each gene from the remaining genes. Finally, a graph of all the relationships among output and input genes is built, taking into account whether the pair of genes is statistically significant. For this reason we apply a statistical procedure to control the false discovery rate. The performance of our approach, named REGNET, is experimentally tested on two well-known data sets: a Saccharomyces cerevisiae and an E. coli data set. First, the biological coherence of the results is tested. Second, the E. coli transcriptional network (in the Regulon database) is used as a control to compare the results to those of a correlation-based method. This experiment shows that REGNET performs more accurately at detecting true gene associations than the Pearson and Spearman zeroth- and first-order correlation-based methods. Conclusions: REGNET generates gene association networks from gene expression data, and differs from correlation-based methods in that the relationship between one gene and others is calculated simultaneously. Model trees are very useful techniques to estimate the numerical values for the target genes by linear regression functions. They are very often more precise than linear regression models because they can add just different linear
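As a minimal stand-in for REGNET's per-gene model trees, one can fit a simple least-squares line from each candidate regulator to each target gene and keep edges whose fit explains enough variance. The R² threshold below stands in for the paper's FDR-controlled significance test, and the pairwise linear fit stands in for a full model tree; both substitutions are simplifications for illustration.

```python
def ols_r2(x, y):
    # Coefficient of determination of a simple least-squares fit y ~ x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    if sxx == 0 or syy == 0:
        return 0.0
    return (sxy * sxy) / (sxx * syy)

def gene_network(expr, r2_min=0.8):
    """expr: {gene: [expression values across samples]}.

    Returns directed edges (regulator, target) whose linear fit has
    R^2 >= r2_min -- a toy stand-in for REGNET's significance-tested
    model-tree relationships.
    """
    edges = []
    for target, y in expr.items():
        for reg, x in expr.items():
            if reg != target and ols_r2(x, y) >= r2_min:
                edges.append((reg, target))
    return edges
```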
Two stage neural network modelling for robust model predictive control.
Patan, Krzysztof
2017-11-02
The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used twofold: to design the so-called fundamental model of a plant and to catch uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control, an instantaneous linearization is applied, which renders it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that a cost function is monotonically decreasing with respect to time. The derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes.
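The instantaneous-linearization idea can be sketched for a one-step horizon without constraints: around the current operating point, the (neural) plant model reduces to y ≈ a·y_prev + b·u, and minimizing (y − r)² + ρu² has a closed-form solution u = b(r − a·y_prev)/(b² + ρ). The paper's actual scheme solves a constrained QP over a multi-step horizon with a neural plant model; everything below, including the function names, is an illustrative simplification.

```python
def linearize(f, y0, u0, eps=1e-6):
    # Numerical instantaneous linearization of a plant model f(y, u)
    # around the current operating point: returns local gains a, b.
    a = (f(y0 + eps, u0) - f(y0 - eps, u0)) / (2 * eps)
    b = (f(y0, u0 + eps) - f(y0, u0 - eps)) / (2 * eps)
    return a, b

def one_step_mpc(f, y, r, rho=0.01):
    # Minimize (a*y + b*u - r)^2 + rho*u^2 over u: the unconstrained,
    # one-step special case of the QP has a closed-form solution.
    a, b = linearize(f, y, 0.0)
    return b * (r - a * y) / (b * b + rho)
```

With ρ → 0 the controller simply inverts the local linear model; ρ > 0 trades tracking accuracy for smaller control effort.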
Gustafsson, Leif; Sternad, Mikael
2007-10-01
Population models concern collections of discrete entities such as atoms, cells, humans, animals, etc., where the focus is on the number of entities in a population. Because of the complexity of such models, simulation is usually needed to reproduce their complete dynamic and stochastic behaviour. Two main types of simulation models are used for different purposes, namely micro-simulation models, where each individual is described with its particular attributes and behaviour, and macro-simulation models based on stochastic differential equations, where the population is described in aggregated terms by the number of individuals in different states. Consistency between micro- and macro-models is a crucial but often neglected aspect. This paper demonstrates how the Poisson Simulation technique can be used to produce a population macro-model consistent with the corresponding micro-model. This is accomplished by defining Poisson Simulation in strictly mathematical terms as a series of Poisson processes that generate sequences of Poisson distributions with dynamically varying parameters. The method can be applied to any population model. It provides the unique stochastic and dynamic macro-model consistent with a correct micro-model. The paper also presents a general macro form for stochastic and dynamic population models. In an appendix Poisson Simulation is compared with Markov Simulation showing a number of advantages. Especially aggregation into state variables and aggregation of many events per time-step makes Poisson Simulation orders of magnitude faster than Markov Simulation. Furthermore, you can build and execute much larger and more complicated models with Poisson Simulation than is possible with the Markov approach.
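The core of the Poisson Simulation technique described above is that, at each time step, the number of events of each kind is drawn from a Poisson distribution with parameter rate·Δt, and the aggregate state variables are updated with those counts. A minimal birth-death sketch follows; the model and rate values are illustrative, and since Python's standard library has no Poisson sampler, Knuth's multiplication method is included.

```python
import math
import random

def poisson_sample(lam, rng=random):
    # Knuth's method: multiply uniforms until the product drops below
    # exp(-lam); the number of multiplications before that is the sample.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate(x0, birth, death, dt, steps, rng=random):
    """Macro-level birth-death model: each step draws Poisson event
    counts with parameters rate * dt (Poisson Simulation)."""
    x = x0
    path = [x]
    for _ in range(steps):
        births = poisson_sample(birth * x * dt, rng)
        deaths = poisson_sample(death * x * dt, rng)
        x = max(x + births - deaths, 0)
        path.append(x)
    return path
```

Because event counts are integers, the macro-model keeps the population discrete and non-negative, consistent with the corresponding micro-model, which is exactly the consistency property the paper emphasizes.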
Marsman, M; Borsboom, D; Kruis, J; Epskamp, S; van Bork, R; Waldorp, L J; Maas, H L J van der; Maris, G
2017-11-07
In recent years, network models have been proposed as an alternative representation of psychometric constructs such as depression. In such models, the covariance between observables (e.g., symptoms like depressed mood, feelings of worthlessness, and guilt) is explained in terms of a pattern of causal interactions between these observables, which contrasts with classical interpretations in which the observables are conceptualized as the effects of a reflective latent variable. However, few investigations have been directed at the question of how these different models relate to each other. To shed light on this issue, the current paper explores the relation between one of the most important network models, the Ising model from physics, and one of the most important latent variable models, the Item Response Theory (IRT) model from psychometrics. The Ising model describes the interaction between states of particles that are connected in a network, whereas the IRT model describes the probability distribution associated with item responses in a psychometric test as a function of a latent variable. Despite the divergent backgrounds of the models, we show a broad equivalence between them and also illustrate several opportunities that arise from this connection.
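The Ising side of the equivalence can be made concrete by enumerating a tiny symptom network: the model assigns each vector of ±1 states a Boltzmann probability determined by fields b_i (symptom thresholds) and couplings J_ij (symptom interactions). A sketch, with field and coupling values purely illustrative:

```python
import math
from itertools import product

def ising_distribution(b, J):
    """Joint distribution of an Ising network with fields b[i] and
    couplings J[(i, j)]; states are tuples of +1/-1 spins."""
    n = len(b)
    weights = {}
    for s in product([-1, 1], repeat=n):
        e = sum(b[i] * s[i] for i in range(n))
        e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
        weights[s] = math.exp(e)
    Z = sum(weights.values())  # partition function
    return {s: w / Z for s, w in weights.items()}
```

With a positive coupling, aligned states (both symptoms present or both absent) are more probable than mixed ones, which is the network-model analogue of the positive symptom correlations an IRT model attributes to a latent variable.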
Complementarity of DM searches in a consistent simplified model: the case of Z′
Energy Technology Data Exchange (ETDEWEB)
Jacques, Thomas [SISSA and INFN,via Bonomea 265, 34136 Trieste (Italy); Katz, Andrey [Theory Division, CERN,CH-1211 Geneva 23 (Switzerland); Département de Physique Théorique and Center for Astroparticle Physics (CAP),Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Morgante, Enrico; Racco, Davide [Département de Physique Théorique and Center for Astroparticle Physics (CAP),Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Rameez, Mohamed [Département de Physique Nucléaire et Corpusculaire,Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Riotto, Antonio [Département de Physique Théorique and Center for Astroparticle Physics (CAP),Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland)
2016-10-14
We analyze the constraints from direct and indirect detection on fermionic Majorana Dark Matter (DM). Because the interaction with the Standard Model (SM) particles is spin-dependent, a priori the constraints that one gets from neutrino telescopes, the LHC, and direct and indirect detection experiments are comparable. We study the complementarity of these searches in a particular example, in which a heavy Z{sup ′} mediates the interactions between the SM and the DM. We find that for heavy dark matter indirect detection provides the strongest bounds on this scenario, while IceCube bounds are typically stronger than those from direct detection. The LHC constraints are dominant for smaller dark matter masses. These light masses are less motivated by thermal relic abundance considerations. We show that the dominant annihilation channels of the light DM in the Sun and the Galactic Center are either bb̄ or tt̄, while the heavy DM annihilation is completely dominated by the Zh channel. The latter produces a hard neutrino spectrum which has not been previously analyzed. We study the neutrino spectrum yielded by DM and recast IceCube constraints to allow proper comparison with constraints from direct and indirect detection experiments and LHC exclusions.
Existence of a Consistent Quantum Gravity Model from Minimum Microscopic Information
Mandrin, P. A.
2014-12-01
It is shown that a quantum gravity formulation exists on the basis of quantum number conservation, the laws of thermodynamics, unspecific interactions, and locally maximizing the ratio of resulting degrees of freedom per imposed degree of freedom of the theory. The First Law of thermodynamics is evaluated by imposing boundary conditions to the theory. These boundary conditions determine the details of the complex world structure. No explicit microscopic quantum structure is required, and thus no ambiguity arises on how to construct the model. Although no dynamical computations of quantum systems are possible on this basis, all well established physics may be recovered, and all measurable quantities may be computed. The recovery of physical laws is shown by extremizing the entropy, which means varying the action on the bulk and boundary of small volumes of curved space-time. It is sketched how Quantum Field Theory (QFT) and General Relativity (GR) are recovered with no further assumptions except for imposing the dimension of a second derivative of the metric on the gravitational field equations. The new concepts are 1. the abstract organization of statistical quantum states, allowing for the possibility of absent quantum microstructure, 2. the optimization of the locally resulting degrees of freedom per imposed degree of freedom of the theory, allowing for the reconstruction of the spacetime dimensions, 3. the reconstruction of physical and geometric quantities by means of stringent mathematical or physical justifications, 4. the fully general recovery of GR by quasi-local variation methods applied on small portions of spacetime.
Complementarity of DM Searches in a Consistent Simplified Model: the Case of Z'
Jacques, Thomas; Morgante, Enrico; Racco, Davide; Rameez, Mohamed; Riotto, Antonio
2016-01-01
We analyze the constraints from direct and indirect detection on fermionic Majorana Dark Matter (DM). Because the interaction with the Standard Model (SM) particles is spin-dependent, a priori the constraints that one gets from neutrino telescopes, the LHC and direct detection experiments are comparable. We study the complementarity of these searches in a particular example, in which a heavy $Z'$ mediates the interactions between the SM and the DM. We find that in most cases IceCube provides the strongest bounds on this scenario, while the LHC constraints are only meaningful for smaller dark matter masses. These light masses are less motivated by thermal relic abundance considerations. We show that the dominant annihilation channels of the light DM in the Sun are either $b\bar b$ or $t\bar t$, while the heavy DM annihilation is completely dominated by the $Zh$ channel. The latter produces a hard neutrino spectrum which has not been previously analyzed. We study the neutrino spectrum yielded by DM and recast IceCube constraints to allow comparison with direct detection and LHC exclusions.
Neural Networks For Electrohydrodynamic Effect Modelling
Directory of Open Access Journals (Sweden)
Wiesław Wajs
2004-01-01
This paper presents currently achieved results on modelling the electrohydrodynamic effect used in geophysics, simulated with feedforward networks trained with the backpropagation algorithm, radial basis function networks, and generalized regression networks.
A network-oriented business modeling environment
Bisconti, Cristian; Storelli, Davide; Totaro, Salvatore; Arigliano, Francesco; Savarino, Vincenzo; Vicari, Claudia
The development of formal models related to the organizational aspects of an enterprise is fundamental when these aspects must be re-engineered and digitalized, especially when the enterprise is involved in the dynamics and value flows of a business network. Business modeling provides an opportunity to synthesize and make explicit the business processes, business rules and structural aspects of an organization, allowing business managers to control their complexity and guide an enterprise through effective decisional and strategic activities. This chapter discusses the main results of the TEKNE project in terms of software components that enable enterprises to configure, store, search and share models of any aspect of their business. These components leverage standard, business-oriented technologies and languages to bridge the gap between the worlds of business people and IT experts and to foster effective business-to-business collaborations.
Geotechnology-Based Modeling to Optimize Conservation of Forest Network in Urban Area
Teng, Mingjun; Zhou, Zhixiang; Wang, Pengcheng; Xiao, Wenfa; Wu, Changguang; Lord, Elizabeth
2016-03-01
Forest network development in urban areas faces challenges from forest fragmentation, human-induced disturbances, and scarce land resources. Here, we propose a geotechnology-based modeling approach to optimize forest network conservation, using Wuhan, China as a case study. Potential forest networks and their priorities were assessed using an improved least-cost path model and potential utilization efficiency estimation. The modeling process consists of four steps: (i) developing species assemblages, (ii) identifying core forest patches, (iii) identifying potential linkages among core forest patches, and (iv) demarcating forest networks. Three species assemblages, comprising mammals, pheasants, and other birds, were identified as the conservation targets of the urban forest network (UFN) in Wuhan, China. Based on the geotechnology-based model, a forest network plan was developed to fulfill the connectivity requirements of the selected species assemblages. The plan consists of seven forest networks at three levels of connectivity, named ideal networks, backbone networks, and the comprehensive network. Action priorities of the UFN plans were suggested to optimize the forest network in the study area. Additionally, a total of 45 forest patches of high conservation significance were identified as prioritized stepping-stone patches in forest network development. Establishing urban forest reserves was also suggested for preserving woodlands of priority conservation significance. The presented geotechnology-based modeling is suitable for planning and optimizing UFNs because it includes stepping-stone effects, human-induced pressures, and priorities. The framework can also be applied to other areas after a sensitivity test of the model and modification of the parameters to fit the local environment.
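The least-cost path step at the heart of such connectivity modeling is, in its simplest generic form, a shortest-path search over a resistance raster. A minimal sketch assuming 4-neighbour moves and a hypothetical `cost` grid (a generic illustration, not the authors' improved model):

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra least-cost path on a 2-D resistance raster.
    Entering a cell adds that cell's resistance; start/goal are (row, col)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path from goal back to start
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]
```

On a raster where the centre cell has high resistance, the recovered corridor routes around it, which is exactly the behaviour connectivity planners exploit.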
Compartmentalization analysis using discrete fracture network models
Energy Technology Data Exchange (ETDEWEB)
La Pointe, P.R.; Eiben, T.; Dershowitz, W. [Golder Associates, Redmond, VA (United States); Wadleigh, E. [Marathon Oil Co., Midland, TX (United States)
1997-08-01
This paper illustrates how Discrete Fracture Network (DFN) technology can serve as a basis for the calculation of reservoir engineering parameters for the development of fractured reservoirs. It describes the development of quantitative techniques for defining the geometry and volume of structurally controlled compartments. These techniques are based on a combination of stochastic geometry, computational geometry, and graph theory. The parameters addressed are compartment size, matrix block size and tributary drainage volume. The concept of DFN models is explained and methodologies to compute these parameters are demonstrated.
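At its simplest, identifying structurally controlled compartments reduces to finding connected components of the fracture intersection graph. A hedged sketch using union-find (the fracture-intersection input format is an assumption for illustration, not the paper's methodology):

```python
def fracture_compartments(n_fractures, intersections):
    """Group fractures into hydraulically connected compartments.
    `intersections` is a list of (i, j) pairs of fractures that cross;
    union-find yields the connected components of the fracture graph."""
    parent = list(range(n_fractures))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, j in intersections:
        parent[find(i)] = find(j)

    groups = {}
    for f in range(n_fractures):
        groups.setdefault(find(f), []).append(f)
    return sorted(groups.values())
```

Compartment size and tributary drainage volume would then be sums of per-fracture properties over each component.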
Some queuing network models of computer systems
Herndon, E. S.
1980-01-01
Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
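The matrix recursions described above are in the spirit of the classical convolution (Buzen) algorithm for closed product-form queueing networks. A minimal sketch, assuming load-independent service demands (this illustrates the standard algorithm, not the SR-52 program):

```python
def buzen_G(demands, N):
    """Normalization constants G(0..N) for a closed product-form network
    with load-independent service demands D_i and population N (Buzen, 1973)."""
    g = [1.0] + [0.0] * N
    for D in demands:
        for n in range(1, N + 1):
            g[n] += D * g[n - 1]  # convolution step, one station at a time
    return g

def throughput(demands, N):
    """System throughput X(N) = G(N-1) / G(N)."""
    g = buzen_G(demands, N)
    return g[N - 1] / g[N]
```

For a single station with demand 2 and one customer this gives X = 0.5, as expected; device utilizations and queue lengths follow from the same G-vector.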
Networks model of the East Turkistan terrorism
Li, Ben-xian; Zhu, Jun-fang; Wang, Shun-guo
2015-02-01
The presence of the East Turkistan terrorist network in China can be traced back to the rebellions in the Baren region in Xinjiang in April 1990. This article researches the East Turkistan networks in China and offers a panoramic view. The events, terrorists and their relationships are described using matrices. Social network analysis is then adopted to reveal the network type and the network structure characteristics. We also identify the crucial terrorist leader. Ultimately, the results show that the East Turkistan network has large hub nodes and short shortest paths, and that the network follows a small-world pattern with hierarchical structure.
Fundamentals of complex networks models, structures and dynamics
Chen, Guanrong; Li, Xiang
2014-01-01
Complex networks such as the Internet, WWW, transportation networks, power grids, biological neural networks, and scientific cooperation networks of all kinds provide challenges for future technological development. In particular, advanced societies have become dependent on large infrastructural networks to an extent beyond our capability to plan (modeling) and to operate (control). The recent spate of collapses in power grids and ongoing virus attacks on the Internet illustrate the need for knowledge about modeling, analysis of behaviors, optimized planning and performance control in such networks.
A Search Model with a Quasi-Network
DEFF Research Database (Denmark)
Ejarque, Joao Miguel
This paper adds a quasi-network to a search model of the labor market. Fitting the model to an average unemployment rate and to other moments in the data implies the presence of the network is not noticeable in the basic properties of the unemployment and job finding rates. However, the network c...
Stochastic simulation of HIV population dynamics through complex network modelling
Sloot, P.M.A.; Ivanov, S.V.; Boukhanovsky, A.V.; van de Vijver, D.A.M.C.; Boucher, C.A.B.
2008-01-01
We propose a new way to model HIV infection spreading through the use of dynamic complex networks. The heterogeneous population of HIV exposure groups is described through a unique network degree probability distribution. The time evolution of the network nodes is modelled by a Markov process and
A consistent two-mutation model of bone cancer for two data sets of radium-injected beagles
Energy Technology Data Exchange (ETDEWEB)
Bijwaard, H.; Brugmans, M.J.P.; Leenhouts, H.P. [National Institute for Public Health and the Environment (RIVM), Bilthoven (Netherlands)
2002-09-01
A two-mutation carcinogenesis model has been applied to model osteosarcoma incidence in two data sets of beagles injected with {sup 226}Ra. Taking age-specific retention into account, the following results have been obtained: (1) a consistent and well-fitting solution for all age and dose groups, (2) mutation rates that are linearly dependent on dose rate, with an exponential decrease for the second mutation at high dose rates, (3) a linear-quadratic dose-effect relationship, which indicates that care should be taken when extrapolating linearly, (4) highest cumulative incidences for injection at young adult age, and highest risks for injection doses of a few kBq kg{sup -1} at these ages, and (5) when scaled appropriately, the beagle model compares fairly well with a description for radium dial painters, suggesting that a consistent model description of bone cancer induction in beagles and humans may be possible. (author)
Directory of Open Access Journals (Sweden)
J. Callies
2012-01-01
A simple model of the thermohaline circulation (THC) is formulated, with the objective of representing explicitly the geostrophic force balance of the basin-wide THC. The model comprises advective-diffusive density balances in two meridional-vertical planes located at the eastern and the western walls of a hemispheric sector basin. Boundary mixing constrains vertical motion to lateral boundary layers along these walls. Interior, along-boundary, and zonally integrated meridional flows are in thermal-wind balance. Rossby waves and the absence of interior mixing render isopycnals zonally flat except near the western boundary, constraining meridional flow to the western boundary layer. The model is forced by a prescribed meridional surface density profile.
This two-plane model reproduces both steady-state density and steady-state THC structures of a primitive-equation model. The solution shows narrow deep sinking at the eastern high latitudes, distributed upwelling at both boundaries, and a western boundary current with poleward surface and equatorward deep flow. The overturning strength has a 2/3-power-law dependence on vertical diffusivity and a 1/3-power-law dependence on the imposed meridional surface density difference. Convective mixing plays an essential role in the two-plane model, ensuring that deep sinking is located at high latitudes. This role of convective mixing is consistent with that in three-dimensional models and marks a sharp contrast with previous two-dimensional models.
Overall, the two-plane model reproduces crucial features of the THC as simulated in simple-geometry three-dimensional models. At the same time, the model self-consistently makes quantitative a conceptual picture of the three-dimensional THC that hitherto has been expressed either purely qualitatively or not self-consistently.
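The quoted power laws can be summarized in a single scaling relation for the overturning strength (the symbols, $\Psi$ for the overturning, $\kappa_v$ for vertical diffusivity, and $\Delta\rho$ for the imposed meridional surface density difference, are assumed notation, not necessarily the paper's):

```latex
\Psi \;\propto\; \kappa_v^{2/3}\,(\Delta\rho)^{1/3}
```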
2010-01-01
[Extraction fragment of a ship-motion modeling record: the translational velocity components correspond to the surge, sway, and heave motions, and the angular velocity vector is U = Ux x̂ + Uy ŷ + Uz ẑ; the text discusses damping coefficients used in heave and pitch motions and notes that the surface integration used in the theory complicates its computational implementation. Cited reference: Kuang, W. (2008) Modeling nonlinear roll damping with a self-consistent, strongly nonlinear ship motion model, J. Mar. Sci. Technol. 13, 127-137.]
Kobayasi, M; Nakatsukasa, T; Matsuo, M
2003-01-01
The adiabatic self-consistent collective coordinate method is applied to an exactly solvable multi-O(4) model that is designed to describe nuclear shape coexistence phenomena. The collective mass and dynamics of large amplitude collective motion in this model system are analyzed, and it is shown that the method yields a faithful description of tunneling motion through a barrier between the prolate and oblate local minima in the collective potential. The emergence of the doublet pattern is clearly described. (author)
DEFF Research Database (Denmark)
Silvennoinen, Annestiina; Terasvirta, Timo
A new multivariate volatility model that belongs to the family of conditional correlation GARCH models is introduced. The GARCH equations of this model contain a multiplicative deterministic component to describe long-run movements in volatility and, in addition, the correlations are deterministically time-varying. Parameters of the model are estimated jointly using maximum likelihood. Consistency and asymptotic normality of the maximum likelihood estimators is proved. Numerical aspects of the estimation algorithm are discussed. A bivariate empirical example is provided.
VEPCO network model reconciliation of LANL and MZA model data
Energy Technology Data Exchange (ETDEWEB)
NONE
1992-12-15
The LANL DC load flow model of the VEPCO transmission network shows 210 more substations than the AC load flow model produced by MZA utility Consultants. MZA was requested to determine the source of the difference. The AC load flow model used for this study utilizes 2 standard network algorithms (Decoupled or Newton). The solution time of each is affected by the number of substations. The more substations included, the longer the model will take to solve. In addition, the ability of the algorithms to converge to a solution is affected by line loadings and characteristics. Convergence is inhibited by numerous lightly loaded and electrically short lines. The MZA model reduces the total substations to 343 by creating equivalent loads and generation. Most of the omitted substations are lightly loaded and rated at 115 kV. The MZA model includes 16 substations not included in the LANL model. These represent new generation including Non-Utility Generator (NUG) sites, additional substations and an intertie (Wake, to CP and L). This report also contains data from the Italian State AC power flow model and the Duke Power Company AC flow model.
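The DC load-flow formulation contrasted with the AC model above linearizes the power-flow equations: non-slack bus angles solve B·θ = P for the reduced susceptance matrix B, and line flows follow from angle differences. A self-contained sketch on a hypothetical small system (illustrative, not the VEPCO network data):

```python
def dc_power_flow(lines, injections, slack):
    """Minimal DC load flow: solve B * theta = P for bus angles, then
    line flows f_ij = (theta_i - theta_j) / x_ij.
    `lines` maps (i, j) -> reactance x; `injections` maps bus -> net MW (p.u.)."""
    buses = sorted(injections)
    unknown = [b for b in buses if b != slack]
    idx = {b: k for k, b in enumerate(unknown)}
    n = len(unknown)
    # Build the reduced susceptance matrix B and injection vector P
    B = [[0.0] * n for _ in range(n)]
    P = [injections[b] for b in unknown]
    for (i, j), x in lines.items():
        b = 1.0 / x
        for a, o in ((i, j), (j, i)):
            if a != slack:
                B[idx[a]][idx[a]] += b
                if o != slack:
                    B[idx[a]][idx[o]] -= b
    # Gaussian elimination (no pivoting; adequate for small well-conditioned B)
    for k in range(n):
        for r in range(k + 1, n):
            m = B[r][k] / B[k][k]
            for c in range(k, n):
                B[r][c] -= m * B[k][c]
            P[r] -= m * P[k]
    theta = [0.0] * n
    for k in range(n - 1, -1, -1):
        theta[k] = (P[k] - sum(B[k][c] * theta[c] for c in range(k + 1, n))) / B[k][k]
    ang = {b: (0.0 if b == slack else theta[idx[b]]) for b in buses}
    return {(i, j): (ang[i] - ang[j]) / x for (i, j), x in lines.items()}
```

On a triangle of identical lines with 1 p.u. injected at bus 0 and withdrawn at bus 2, the flow splits 2/3 on the direct line and 1/3 on the two-hop path, the canonical DC load-flow result.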
A Model of Genetic Variation in Human Social Networks
Fowler, James H; Christakis, Nicholas A
2008-01-01
Social networks influence the evolution of cooperation and they exhibit strikingly systematic patterns across a wide range of human contexts. Both of these facts suggest that variation in the topological attributes of human social networks might have a genetic basis. While genetic variation accounts for a significant portion of the variation in many complex social behaviors, the heritability of egocentric social network attributes is unknown. Here we show that three of these attributes (in-degree, transitivity, and centrality) are heritable. We then develop a "mirror network" method to test extant network models and show that none accounts for observed genetic variation in human social networks. We propose an alternative "attract and introduce" model that generates significant heritability as well as other important network features, and we show that this model with two simple forms of heterogeneity is well suited to the modeling of real social networks in humans. These results suggest that natural selection ...
Frank, Laurence Emmanuelle
2006-01-01
Feature Network Models (FNM) are graphical structures that represent proximity data in a discrete space with the use of features. A statistical inference theory is introduced, based on the additivity properties of networks and the linear regression framework. Considering features as predictor
PageRank model of opinion formation on Ulam networks
Chakhmakhchyan, L.; Shepelyansky, D.
2013-12-01
We consider a PageRank model of opinion formation on Ulam networks, generated by the intermittency map and the typical Chirikov map. The Ulam networks generated by these maps have certain similarities with scale-free networks such as the World Wide Web (WWW), showing an algebraic decay of the PageRank probability. We find that the opinion formation process on Ulam networks has certain similarities to, but also distinct features from, that on the WWW. We attribute these distinctions to internal differences in the network structure of the Ulam and WWW networks. We also analyze the process of opinion formation in the framework of the generalized Sznajd model, which protects the opinion of small communities.
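The PageRank probability underlying such opinion-formation models can be computed by plain power iteration. A minimal sketch with uniform teleportation (the adjacency-list format and dangling-node handling are standard conventions, not specific to Ulam networks):

```python
def pagerank(adj, alpha=0.85, iters=100):
    """PageRank by power iteration on an adjacency list {node: [out-neighbours]}.
    Dangling nodes redistribute their weight uniformly over all nodes."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - alpha) / n for v in nodes}
        for v in nodes:
            out = adj[v]
            if out:
                share = alpha * rank[v] / len(out)
                for w in out:
                    new[w] += share
            else:  # dangling node
                for w in nodes:
                    new[w] += alpha * rank[v] / n
        rank = new
    return rank
```

The ranks always sum to one, and nodes with no in-links settle at the teleportation floor (1 − α)/n.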
A nonaffine network model for elastomers undergoing finite deformations
Davidson, Jacob D.; Goulbourne, N. C.
2013-08-01
In this work, we construct a new physics-based model of rubber elasticity to capture the strain softening, strain hardening, and deformation-state dependent response of rubber materials undergoing finite deformations. This model is unique in its ability to capture large-stretch mechanical behavior with parameters that are connected to the polymer chemistry and can also be easily identified with the important characteristics of the macroscopic stress-stretch response. The microscopic picture consists of two components: a crosslinked network of Langevin chains and an entangled network with chains confined to a nonaffine tube. These represent, respectively, changes in entropy due to thermally averaged chain conformations and changes in entropy due to the magnitude of these conformational fluctuations. A simple analytical form for the strain energy density is obtained using Rubinstein and Panyukov's single-chain description of network behavior. The model only depends on three parameters that together define the initial modulus, extent of strain softening, and the onset of strain hardening. Fits to large stretch data for natural rubber, silicone rubber, VHB 4905 (polyacrylate rubber), and b186 rubber (a carbon black-filled rubber) are presented, and a comparison is made with other similar constitutive models of large-stretch rubber elasticity. We demonstrate that the proposed model provides a complete description of elastomers undergoing large deformations for different applied loading configurations. Moreover, since the strain energy is obtained using a clear set of physical assumptions, this model may be tested and used to interpret the results of computer simulation and experiments on polymers of known microscopic structure.
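Langevin-chain strain hardening of the kind used in the crosslinked-network component relies on the inverse Langevin function, which has no closed form; Cohen's Padé approximant is the usual workhorse. A sketch of this generic ingredient (not the authors' full strain energy density):

```python
from math import cosh, sinh

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, the mean fractional chain stretch."""
    return cosh(x) / sinh(x) - 1.0 / x

def inv_langevin(y):
    """Cohen's Pade approximant for the inverse Langevin function,
    L^-1(y) ~ y * (3 - y^2) / (1 - y^2), accurate to a few percent on (0, 1).
    It diverges as y -> 1, reproducing finite-extensibility strain hardening."""
    return y * (3.0 - y * y) / (1.0 - y * y)
```

The divergence of L⁻¹ as the relative chain stretch approaches 1 is what produces the sharp upturn ("onset of strain hardening") in the stress-stretch response.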
A scale-free neural network for modelling neurogenesis
Perotti, Juan I.; Tamarit, Francisco A.; Cannas, Sergio A.
2006-11-01
In this work we introduce a neural network model for associative memory based on a diluted Hopfield model, which grows through a neurogenesis algorithm that guarantees that the final network is a small-world and scale-free one. We also analyze the storage capacity of the network and prove that its performance is larger than that measured in a randomly diluted network with the same connectivity.
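The storage mechanism behind such a model is Hebbian learning on a (possibly diluted) Hopfield network. A minimal sketch with uniform random symmetric dilution (the dilution scheme here is the simple random baseline, not the paper's neurogenesis-grown topology):

```python
import random

def train_hopfield(patterns, dilution=0.0, seed=0):
    """Hebbian weights for a diluted Hopfield network.
    Patterns are lists of +/-1; a fraction `dilution` of symmetric
    connections is removed at random."""
    n = len(patterns[0])
    rng = random.Random(seed)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < dilution:
                continue  # diluted (absent) connection
            wij = sum(p[i] * p[j] for p in patterns) / n
            w[i][j] = w[j][i] = wij
    return w

def recall(w, state, sweeps=5):
    """Asynchronous recall: sequential sign updates for a fixed number of sweeps."""
    s = list(state)
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            if h != 0:
                s[i] = 1 if h > 0 else -1
    return s
```

Flipping a single bit of a stored pattern and running recall restores it, the basic associative-memory behaviour whose capacity the paper analyzes on its grown topology.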
A graph model for opportunistic network coding
Sorour, Sameh
2015-08-12
Recent advancements in graph-based analysis and solutions of instantly decodable network coding (IDNC) motivate extending them to more complicated opportunistic network coding (ONC) scenarios, with limited increase in complexity. In this paper, we design a simple IDNC-like graph model for a specific subclass of ONC, by introducing a more generalized definition of its vertices and the notion of vertex aggregation in order to represent the storage of non-instantly-decodable packets in ONC. Based on this representation, we determine the set of pairwise vertex adjacency conditions that can populate this graph with edges so as to guarantee decodability or aggregation for the vertices of each clique in this graph. We then develop the algorithmic procedures that can be applied on the designed graph model to optimize any performance metric for this ONC subclass. A case study on reducing the completion time shows that the proposed framework improves on the performance of IDNC and gets very close to the optimal performance.
Marketing communications model for innovation networks
Directory of Open Access Journals (Sweden)
Tiago João Freitas Correia
2015-10-01
Innovation is an increasingly relevant concept for the success of any organization, but it also represents a set of internal and external considerations, barriers and challenges to overcome. Alongside the concept of innovation, new paradigms emerge, such as open innovation and co-creation, that simultaneously modify and intensify innovation in organizations, promoting organizational openness and stakeholder integration within the value creation process. Innovation networks composed of a multiplicity of agents in co-creative work act as innovation mechanisms to face the increasing complexity of products, services and markets. Technology, especially the Internet, is an enabler of all these processes among organizations, supported by co-creative platforms for innovation. The definition of marketing communication strategies that promote motivation and involvement of all stakeholders in synergic creation and external promotion is the central aspect of this research. The implementation of the projects is performed through participative workshops with stakeholders from Madan Parque, using the IDEAS(REVOLUTION methodology and the operational model LinkUp parameterized for the project. The work is divided into a first part, the theoretical framework, and a second part where a model is developed for the marketing communication strategies applied to the Madan Parque case study. Keywords: Marketing Communication; Open Innovation; Technology; Innovation Networks; Incubator; Co-Creation.
Determining Application Runtimes Using Queueing Network Modeling
Energy Technology Data Exchange (ETDEWEB)
Elliott, Michael L. [Univ. of San Francisco, CA (United States)
2006-12-14
Determination of application times-to-solution for large-scale clustered computers continues to be a difficult problem in high-end computing, which will only become more challenging as multi-core consumer machines become more prevalent in the market. Both researchers and consumers of these multi-core systems desire reasonable estimates of how long their programs will take to run (time-to-solution, or TTS), and how many resources will be consumed in the execution. Currently there are few methods of determining these values, and those that do exist are either overly simplistic in their assumptions or require great amounts of effort to parameterize and understand. One previously untried method is queuing network modeling (QNM), which is easy to parameterize and solve, and produces results that typically fall within 10 to 30% of the actual TTS for our test cases. Using characteristics of the computer network (bandwidth, latency) and communication patterns (number of messages, message length, time spent in communication), the QNM model of the NAS-PB CG application was applied to MCR and ALC, supercomputers at LLNL, and the Keck Cluster at USF, with average errors of 2.41%, 3.61%, and -10.73%, respectively, compared to the actual TTS observed. While additional work is necessary to improve the predictive capabilities of QNM, current results show that QNM has a great deal of promise for determining application TTS for multi-processor computer systems.
Modeling management of research and education networks
Galagan, D.V.
2004-01-01
Computer networks and their services have become an essential part of research and education. Nowadays every modern R&E institution must have a computer network and provide network services to its students and staff. In addition to its internal computer network, every R&E institution must have a
Modeling stochasticity in biochemical reaction networks
Constantino, P. H.; Vlysidis, M.; Smadbeck, P.; Kaznessis, Y. N.
2016-03-01
Small biomolecular systems are inherently stochastic. Indeed, fluctuations of molecular species are substantial in living organisms and may result in significant variation in cellular phenotypes. The chemical master equation (CME) is the most detailed mathematical model that can describe stochastic behaviors. However, because of its complexity the CME has been solved for only a few, very small reaction networks. As a result, the contribution of CME-based approaches to biology has been very limited. In this review we discuss the approach of solving the CME by a set of differential equations of probability moments, called moment equations. We present different approaches to produce and to solve these equations, emphasizing the use of factorial moments and the zero information entropy closure scheme. We also provide information on the stability analysis of stochastic systems. Finally, we speculate on the utility of CME-based modeling formalisms, especially in the context of synthetic biology efforts.
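For the smallest nontrivial network, a birth-death process, the CME's first moment equation is dμ/dt = k_b − k_d·μ, with stationary mean k_b/k_d; an exact Gillespie simulation of the same process provides a direct check on the moment equations. A hedged sketch (the rate constants and function names are illustrative):

```python
import random

def gillespie_birth_death(k_birth, k_death, x0, t_end, seed=0):
    """Exact stochastic simulation (Gillespie SSA) of a birth-death process:
    0 -> X at rate k_birth,  X -> 0 at rate k_death * x.
    Returns the copy number at time t_end."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while True:
        a_birth = k_birth
        a_death = k_death * x
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)  # exponential waiting time
        if t >= t_end:
            return x
        if rng.random() * a_total < a_birth:
            x += 1
        else:
            x -= 1
```

Averaging many long runs approaches the moment-equation fixed point k_birth/k_death (the stationary distribution is Poisson with that mean).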
Modelling of A Trust and Reputation Model in Wireless Networks
Directory of Open Access Journals (Sweden)
Saurabh Mishra
2015-09-01
Security is the major challenge for Wireless Sensor Networks (WSNs). The sensor nodes are deployed in uncontrolled environments, facing the danger of information leakage, adversary attacks and other threats. Trust and reputation models are solutions to this problem and can identify malicious, selfish and compromised nodes. This paper aims to evaluate the effect of varying collusion with respect to static (SW), dynamic (DW), static with collusion (SWC), dynamic with collusion (DWC) and oscillating wireless sensor networks, to derive the joint resultant of the EigenTrust model. An attempt has been made to compare the aforementioned networks, which are purely dedicated to protecting WSNs from adversary attacks and maintaining security. The comparison has been made with respect to accuracy and path length, and it was found that collusion in wireless sensor networks seems intractable for the static and dynamic WSNs when varied with a specified number of fraudulent nodes in the scenario. Additionally, it consumes more energy and resources in oscillating and collusive environments.
Multiplicative Attribute Graph Model of Real-World Networks
Energy Technology Data Exchange (ETDEWEB)
Kim, Myunghwan [Stanford Univ., CA (United States); Leskovec, Jure [Stanford Univ., CA (United States)
2010-10-20
Large-scale real-world network data, such as social networks, Internet and Web graphs, is ubiquitous in a variety of scientific domains. The study of such social and information networks commonly finds patterns and explains their emergence through tractable models. In most networks, especially in social networks, nodes also have a rich set of attributes (e.g., age, gender) associated with them. However, most of the existing network models focus only on modeling the network structure while ignoring the features of nodes in the network. Here we present a class of network models that we refer to as Multiplicative Attribute Graphs (MAG), which naturally captures the interactions between the network structure and node attributes. We consider a model where each node has a vector of categorical features associated with it. The probability of an edge between a pair of nodes then depends on the product of individual attribute-attribute similarities. The model lends itself to mathematical analysis as well as to fitting real data. We derive thresholds for connectivity and the emergence of the giant connected component, and show that the model gives rise to graphs with a constant diameter. Moreover, we analyze the degree distribution to show that the model can produce networks with either a log-normal or a power-law degree distribution depending on certain conditions.
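The core of the MAG model can be sketched in a few lines: with binary attributes, the edge probability for a node pair is the product of per-attribute affinities. The affinity matrix and sizes below are illustrative assumptions, not fitted to any real network:

```python
import random

# Sketch of the Multiplicative Attribute Graph (MAG) model: each node has k
# binary attributes, and the edge probability is the product of per-attribute
# affinities thetas[i][a_i(u)][a_i(v)].
def mag_graph(n, thetas, seed=0):
    rng = random.Random(seed)
    k = len(thetas)
    attrs = [[rng.randint(0, 1) for _ in range(k)] for _ in range(n)]
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            p = 1.0
            for i in range(k):
                p *= thetas[i][attrs[u][i]][attrs[v][i]]
            if rng.random() < p:
                edges.append((u, v))
    return attrs, edges

# Homophily-style affinity: matching attribute values link more often.
theta = [[0.8, 0.2], [0.2, 0.8]]
attrs, edges = mag_graph(n=100, thetas=[theta, theta, theta], seed=42)
```

With three independent attributes and this symmetric affinity, the expected edge probability is 0.5**3 = 0.125 per pair, so the resulting graph is dense; fitted affinities for real networks are typically far more skewed.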
Khazanov, George V.
2006-01-01
The self-consistent treatment of ring current (RC) ion dynamics and EMIC waves, which are thought to exert important influences on the dynamical evolution of the ions, is an important missing element in our understanding of storm- and recovery-time ring current evolution. Under certain conditions, relativistic electrons with energies of roughly 1 MeV and above can be removed from the outer radiation belt by EMIC wave scattering during a magnetic storm. That is why the modeling of EMIC waves is a critical and timely issue in magnetospheric physics. To describe the RC evolution itself, this study uses the ring current-atmosphere interaction model (RAM). RAM solves the gyration- and bounce-averaged Boltzmann-Landau equation inside geosynchronous orbit. Originally developed at the University of Michigan, several branches of this model are now in use, as described by Liemohn, including those at NASA Goddard Space Flight Center. This study generalizes the self-consistent theoretical description of RC ions and EMIC waves developed by Khazanov to include heavy ions and the propagation effects of EMIC waves in the global dynamics of self-consistent RC-EMIC wave coupling. Results from our newly developed model will be presented at the GEM meeting, focusing mainly on the dynamics of EMIC waves and a comparison of these results with previous global RC modeling studies devoted to EMIC wave formation. We also discuss RC ion precipitation and wave-induced thermal electron fluxes into the ionosphere.
Multilevel method for modeling large-scale networks.
Energy Technology Data Exchange (ETDEWEB)
Safro, I. M. (Mathematics and Computer Science)
2012-02-24
Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that include randomization and replication elements at the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and the satisfaction of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
Homologous Basal Ganglia Network Models in Physiological and Parkinsonian Conditions
Directory of Open Access Journals (Sweden)
Jyotika Bahuguna
2017-08-01
Full Text Available The classical model of the basal ganglia has been refined in recent years with discoveries of subpopulations within a nucleus and previously unknown projections. One such discovery is the presence of subpopulations of arkypallidal and prototypical neurons in the external globus pallidus, which was previously considered to be a primarily homogeneous nucleus. Developing a computational model of these multiple interconnected nuclei is challenging, because the strengths of the connections are largely unknown. We therefore use a genetic algorithm to search for the unknown connectivity parameters in a firing rate model. We apply a binary cost function derived from empirical firing rate and phase relationship data for the physiological and parkinsonian conditions. Our approach generates ensembles of over 1,000 configurations, or homologies, for each condition, with broad distributions for many of the parameter values and overlap between the two conditions. However, the resulting effective weights of connections from or to prototypical and arkypallidal neurons are consistent with the experimental data. We investigate the significance of the weight variability by manipulating the parameters individually and cumulatively, and conclude that the correlation observed between the parameters is necessary for generating the dynamics of the two conditions. We then investigate the response of the networks to a transient cortical stimulus, and demonstrate that networks classified as physiological effectively suppress activity in the internal globus pallidus and are not susceptible to oscillations, whereas parkinsonian networks show the opposite tendency. Thus, we conclude that the rates and phase relationships observed in the globus pallidus are predictive of experimentally observed higher-level dynamical features of the physiological and parkinsonian basal ganglia, and that the multiplicity of solutions generated by our method may well be indicative of a natural
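The parameter-search strategy described above can be sketched generically: a genetic algorithm minimizes a binary cost that counts violated empirical constraints. In this toy version the firing-rate simulation is replaced by simple interval constraints on parameter sums, an assumption made purely for illustration:

```python
import random

# Toy sketch of a GA with a binary (count-of-violations) cost. In the actual
# study the cost comes from simulating a firing-rate model; here the "model"
# is stubbed out as interval constraints on summed parameters.
def binary_cost(params, constraints):
    # constraints: list of (index_tuple, lo, hi); each violated one adds 1.
    return sum(0 if lo <= sum(params[i] for i in idx) <= hi else 1
               for idx, lo, hi in constraints)

def ga_search(n_params, constraints, pop=60, gens=200, seed=1):
    rng = random.Random(seed)
    population = [[rng.uniform(-2, 2) for _ in range(n_params)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: binary_cost(p, constraints))
        if binary_cost(population[0], constraints) == 0:
            break
        elite = population[:pop // 4]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]   # crossover
            child[rng.randrange(n_params)] += rng.gauss(0, 0.3)  # mutation
            children.append(child)
        population = elite + children
    return min(population, key=lambda p: binary_cost(p, constraints))

constraints = [((0,), 0.5, 1.0), ((1,), -1.0, -0.5), ((0, 1), -0.2, 0.2)]
best = ga_search(2, constraints)
```

Because the cost is binary, many distinct parameter vectors achieve cost zero, which is exactly why the paper obtains large ensembles of homologous configurations rather than a single fit.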
Modeling Switched Circuit Network Systems Using PLANITU
2005-12-01
(Figure caption recovered from the extraction: Figure 5.1, functional block diagram showing FcMetro data processing and results generated.) As seen from Figure 2.1, input variables consist of a general traffic forecast, traffic patterns, technical constraints, and cost models. The planning workflow covers scenario definition; design, dimensioning, location and costing; optimization; sensitivity analysis to uncertain variables; and plan selection.
CNMO: Towards the Construction of a Communication Network Modelling Ontology
Rahman, Muhammad Azizur; Pakstas, Algirdas; Wang, Frank Zhigang
Ontologies that explicitly identify objects, properties, and relationships in specific domains are essential for collaboration that involves sharing of data, knowledge or resources. A communication network modelling ontology (CNMO) has been designed to represent a network model as well as aspects related to its development and actual network operation. Network nodes/sites, links, traffic sources, and protocols, as well as aspects of the modelling/simulation scenario and operational aspects, are defined with their formal representation. A CNMO may be beneficial for various network design, simulation and research communities due to the uniform representation of network models. This ontology is designed using terminology and concepts from various network modelling, simulation and topology generation tools.
Topological evolution of virtual social networks by modeling social activities
Sun, Xin; Dong, Junyu; Tang, Ruichun; Xu, Mantao; Qi, Lin; Cai, Yang
2015-09-01
With the development of the Internet and wireless communication, virtual social networks are becoming increasingly important in the formation of today's social communities. Topological evolution models are foundational and critical for social network related research. To date, most related research experiments have been carried out on artificial networks; however, incorporating actual social activities into the network topology model has been largely ignored. This paper first formalizes two mathematical abstractions, hobby search and friend recommendation, to model the social actions people exhibit. Then a social-activity-based topology evolution simulation model is developed to satisfy several well-known properties that have been discovered in real-world social networks. Empirical results show that the proposed topology evolution model embraces several key network topological properties of concern, which can be envisioned as signatures of real social networks.
An Efficient Multitask Scheduling Model for Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Hongsheng Yin
2014-01-01
Full Text Available The sensor nodes of a multitask wireless sensor network are constrained in computational performance. Theoretical study of the data processing model of wireless sensor nodes suggests ways to satisfy the high quality-of-service (QoS) requirements of multiple application networks and thus improve the efficiency of the network. In this paper, we present a priority-based data processing model for multitask sensor nodes in a multitask wireless sensor network architecture. The proposed model is derived from the M/M/1 queuing model of queuing theory, with which the average delay of data packets passing through a sensor node is estimated. The model is validated with real data from the Huoerxinhe Coal Mine. By applying the proposed priority-based data processing model in a multitask wireless sensor network, the average delay of data packets at a sensor node is reduced by nearly 50%. The simulation results show that the proposed model can efficiently improve the throughput of the network.
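For reference, the baseline M/M/1 quantities that such a delay estimate builds on can be computed directly; the arrival and service rates below are illustrative, not the mine-deployment values:

```python
# M/M/1 sanity check for the sensor-node queueing argument (a generic
# illustration, not the paper's exact model): packets arrive at rate lam,
# are served at rate mu, and the mean sojourn time is W = 1 / (mu - lam).
def mm1_delay(lam, mu):
    if lam >= mu:
        raise ValueError("queue is unstable: require lam < mu")
    rho = lam / mu           # utilization
    L = rho / (1 - rho)      # mean number in system (Little's law: L = lam*W)
    W = 1.0 / (mu - lam)     # mean delay per packet
    return L, W

L, W = mm1_delay(lam=40.0, mu=50.0)   # packets/s
# rho = 0.8, L = 4 packets, W = 0.1 s
```

Priority scheduling reduces delay for high-priority traffic by, in effect, letting it see a less-loaded queue; the delay of low-priority packets rises correspondingly.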
Vehicle Scheduling with Network Flow Models
Directory of Open Access Journals (Sweden)
Gustavo P. Silva
2010-04-01
Full Text Available
This paper presents the results of the first phase of doctoral research devoted to the use of network flow models for vehicle scheduling (bus scheduling, in particular). Network flow modeling for this kind of problem is a promising but little explored approach, mainly because of the large number of variables arising from the arcs of real-world networks. The paper presents and discusses network flow formulations for the single-depot bus vehicle scheduling problem, along with two techniques for reducing the number of arcs in the resulting network and, consequently, the number of variables to handle. One of these arc reduction techniques was implemented, and in this phase of the research the resulting flow problem was solved with an available version of the network Simplex algorithm. Test problems based on real data from the city of Reading, UK, were solved using the adopted network flow formulation, and the results were compared with those obtained by the heuristic method BOOST, which has been widely tested and commercialized by the School of Computer Studies of the University of Leeds, UK. The results demonstrate the feasibility of treating real-world problems with the arc reduction technique.
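One way to see why flow formulations fit this problem: for a single depot, trips can be chained whenever one bus can feasibly run trip j after finishing trip i, and the minimum fleet size equals the number of trips minus a maximum matching in that compatibility graph (a classical equivalence; the trip times and deadhead allowance below are invented for illustration):

```python
# Sketch of the flow/matching view of single-depot vehicle scheduling:
# trips are nodes, with an arc i -> j when one bus can run trip j after
# trip i (plus deadhead time). Minimum fleet = #trips - maximum matching.
def compatible(trip_i, trip_j, deadhead=5):
    return trip_i[1] + deadhead <= trip_j[0]   # trips are (start, end) minutes

def min_fleet(trips, deadhead=5):
    n = len(trips)
    succ = [[j for j in range(n)
             if i != j and compatible(trips[i], trips[j], deadhead)]
            for i in range(n)]
    match = [-1] * n          # match[j] = index of the trip that j follows

    def augment(i, seen):     # Kuhn's augmenting-path bipartite matching
        for j in succ[i]:
            if j in seen:
                continue
            seen.add(j)
            if match[j] == -1 or augment(match[j], seen):
                match[j] = i
                return True
        return False

    matched = sum(1 for i in range(n) if augment(i, set()))
    return n - matched

trips = [(0, 30), (10, 45), (40, 70), (60, 90), (80, 120)]
fleet = min_fleet(trips)
# chains (0,30)->(40,70)->(80,120) and (10,45)->(60,90): two buses suffice
```

Arc reduction techniques of the kind the paper studies shrink the `succ` lists (for example by keeping only arcs with short idle times) before the flow problem is solved.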
Bicriteria Models of Vehicles Recycling Network Facility Location
Merkisz-Guranowska, Agnieszka
2012-06-01
The paper presents issues related to the modeling of a vehicle recycling network. The functioning of the recycling network is of interest to a variety of government agencies, companies participating in the network, vehicle manufacturers and vehicle end users. The interests of these groups need to be considered when deciding on the organization of the network. The paper presents bicriteria models of network entity location that take into account the preferences of vehicle owners and network participants with respect to network construction and reorganization. A mathematical formulation of the optimization tasks is presented, including the objective functions and the constraints that solutions have to satisfy. The models were then used to optimize the network in Poland.
Clark, M.P.; Rupp, D.E.; Woods, R.A.; Tromp-van Meerveld; Peters, N.E.; Freer, J.E.
2009-01-01
The purpose of this paper is to identify simple connections between observations of hydrological processes at the hillslope scale and observations of the response of watersheds following rainfall, with a view to building a parsimonious model of catchment processes. The focus is on the well-studied Panola Mountain Research Watershed (PMRW), Georgia, USA. Recession analysis of discharge Q shows that while the relationship between dQ/dt and Q is approximately consistent with a linear reservoir for the hillslope, there is a deviation from linearity that becomes progressively larger with increasing spatial scale. To account for these scale differences, conceptual models of streamflow recession are defined at both the hillslope scale and the watershed scale, and an assessment is made as to whether models at the hillslope scale can be aggregated to be consistent with models at the watershed scale. Results from this study show that a model with parallel linear reservoirs provides the most plausible explanation (of those tested) for both the linear hillslope response to rainfall and the non-linear recession behaviour observed at the watershed outlet. In this model each linear reservoir is associated with a landscape type. The parallel reservoir model is consistent with both geochemical analyses of hydrological flow paths and water balance estimates of bedrock recharge. Overall, this study demonstrates that standard approaches of using recession analysis to identify the functional form of storage-discharge relationships identify model structures that are inconsistent with field evidence, and that recession analysis at multiple spatial scales can provide useful insights into catchment behaviour. Copyright © 2008 John Wiley & Sons, Ltd.
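The parallel-reservoir argument can be illustrated numerically: each reservoir alone gives dQ/dt = -kQ (a log-log recession slope of exactly 1), but the sum of two linear reservoirs produces an apparently nonlinear recession. The parameters here are made up, not PMRW values:

```python
import math

# Two parallel linear reservoirs: each recedes as dQ_i/dt = -k_i * Q_i, yet
# the summed outflow no longer plots with slope 1 in log(-dQ/dt) vs log(Q).
k1, k2 = 0.05, 0.5            # slow (hillslope) and fast reservoirs, 1/day
q1_0, q2_0 = 1.0, 5.0         # initial discharges, mm/day

def q(t):
    return q1_0 * math.exp(-k1 * t) + q2_0 * math.exp(-k2 * t)

def dq_dt(t):
    return -(k1 * q1_0 * math.exp(-k1 * t) + k2 * q2_0 * math.exp(-k2 * t))

# Apparent recession exponent b in -dQ/dt = a*Q^b, from the log-log slope
# between two times; b = 1 would indicate a single linear reservoir.
def apparent_b(t_a, t_b):
    num = math.log(-dq_dt(t_b)) - math.log(-dq_dt(t_a))
    den = math.log(q(t_b)) - math.log(q(t_a))
    return num / den

b_mid = apparent_b(1.0, 10.0)    # transition: fast store drains out, b > 1
b_late = apparent_b(40.0, 60.0)  # late time: slow store dominates, b -> 1
```

Early recession is dominated by the fast reservoir and late recession by the slow one, so the apparent exponent drifts away from 1 in between, mimicking nonlinearity even though each component is linear.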
Models as Tools of Analysis of a Network Organisation
Directory of Open Access Journals (Sweden)
Wojciech Pająk
2013-06-01
Full Text Available The paper presents models which may be applied as tools for the analysis of a network organisation. The starting point of the discussion is the definition of the following terms: supply chain and network organisation. Further parts of the paper present the basic assumptions of the analysis of a network organisation. The study then characterises the best-known models utilised in the analysis of a network organisation. The purpose of the article is to define the notion and essence of network organisations and to present the models used for their analysis.
Candy, Adam S.; Pietrzak, Julie D.
2018-01-01
The approaches taken to describe and develop spatial discretisations of the domains required for geophysical simulation models are commonly ad hoc, model- or application-specific, and under-documented. This is particularly acute for simulation models that are flexible in their use of multi-scale, anisotropic, fully unstructured meshes where a relatively large number of heterogeneous parameters are required to constrain their full description. As a consequence, it can be difficult to reproduce simulations, to ensure a provenance in model data handling and initialisation, and a challenge to conduct model intercomparisons rigorously. This paper takes a novel approach to spatial discretisation, considering it much like a numerical simulation model problem of its own. It introduces a generalised, extensible, self-documenting approach to carefully describe, and necessarily fully, the constraints over the heterogeneous parameter space that determine how a domain is spatially discretised. This additionally provides a method to accurately record these constraints, using high-level natural language based abstractions that enable full accounts of provenance, sharing, and distribution. Together with this description, a generalised consistent approach to unstructured mesh generation for geophysical models is developed that is automated, robust and repeatable, quick-to-draft, rigorously verified, and consistent with the source data throughout. This interprets the description above to execute a self-consistent spatial discretisation process, which is automatically validated to expected discrete characteristics and metrics. Library code, verification tests, and examples available in the repository at https://github.com/shingleproject/Shingle. Further details of the project presented at http://shingleproject.org.
Natural Models for Evolution on Networks
Mertzios, George B; Raptopoulos, Christoforos; Spirakis, Paul G
2011-01-01
Evolutionary dynamics have been traditionally studied in the context of homogeneous populations, mainly described by the Moran process. Recently, this approach was generalized in [LHN] by arranging individuals on the nodes of a network. Undirected networks seem to have a smoother behavior than directed ones, and thus it is more challenging to find suppressors/amplifiers of selection. In this paper we present the first class of undirected graphs which act as suppressors of selection, achieving a fixation probability that is at most one half of that of the complete graph as the number of vertices increases. Moreover, we provide some generic upper and lower bounds for the fixation probability of general undirected graphs. As our main contribution, we introduce a natural alternative to the model proposed in [LHN], where all individuals interact simultaneously and the result is a compromise between aggressive and non-aggressive individuals. That is, the behavior of the individuals in our new m...
A model for phosphate glass topology considering the modifying ion sub-network
DEFF Research Database (Denmark)
Hermansen, Christian; Mauro, J.C.; Yue, Yuanzheng
2014-01-01
In the present paper we establish a temperature dependent constraint model of alkali phosphate glasses considering the structural and topological role of the modifying ion sub-network constituted by alkali ions and their non-bonding oxygen coordination spheres. The model is consistent with availa...
DEFF Research Database (Denmark)
Dalgaard, Jens; Pena, Jose; Kocka, Tomas
2004-01-01
We propose a method to assist the user in the interpretation of the best Bayesian network model induced from data. The method consists in extracting relevant features from the model (e.g. edges, directed paths and Markov blankets) and, then, assessing the confidence in them by studying multiple...
Towards a model-based development approach for wireless sensor-actuator network protocols
DEFF Research Database (Denmark)
Kumar S., A. Ajith; Simonsen, Kent Inge
2014-01-01
Model-Driven Software Engineering (MDSE) is a promising approach for the development of applications, and has been well adopted in the embedded applications domain in recent years. Wireless Sensor Actuator Networks, consisting of resource-constrained hardware and platform-specific operating system...... induced due to manual translations. With the use of formal semantics in the modeling approach, we can further ensure the correctness of the source model by means of verification. Also, with the use of network simulators and formal modeling tools, we obtain a verified and validated model to be used...
Lin, M. C.; Verboncoeur, J.
2016-10-01
The maximum electron current transmitted through a planar diode gap is limited by the space charge of the electrons dwelling in the gap region, the so-called space-charge-limited (SCL) emission. By introducing a counter-streaming ion flow to neutralize the electron charge density, the SCL emission can be dramatically raised, so the transmitted electron current is enhanced. In this work, we have developed a relativistic self-consistent model for studying the enhancement of the maximum transmission by a counter-streaming ion current. The maximum enhancement is found when the ion effect is saturated, as shown analytically. The solutions in the non-relativistic, intermediate, and ultra-relativistic regimes are obtained and verified with 1-D particle-in-cell simulations. This self-consistent model is general and can also serve as a benchmark for the verification of simulation codes, as well as a basis for extension to higher dimensions.
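For context, the nonrelativistic baseline that any such model reduces to is the classical Child-Langmuir law. The sketch below computes it for an illustrative 10 kV, 1 mm gap; the bipolar (fully neutralized) enhancement factor of roughly 1.86 is the classical Langmuir result, quoted here as background rather than taken from the paper:

```python
import math

# Classical (nonrelativistic) Child-Langmuir space-charge-limited current
# density for a planar gap: J = (4*eps0/9) * sqrt(2e/m) * V^(3/2) / d^2.
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E_CH = 1.602176634e-19    # elementary charge, C
M_E = 9.1093837015e-31    # electron mass, kg

def child_langmuir(voltage, gap):
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CH / M_E) \
        * voltage ** 1.5 / gap ** 2

j = child_langmuir(voltage=1e4, gap=1e-3)   # 10 kV across 1 mm, in A/m^2
# With a counter-streaming ion flow fully neutralizing the electron space
# charge, the classical bipolar result raises this limit by a factor of
# about 1.86; the paper extends this picture into relativistic regimes.
```

Note the characteristic V^(3/2) scaling: quadrupling the gap voltage raises the limiting current density by a factor of 8.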
Symbolic dynamics and computation in model gene networks.
Edwards, R.; Siegelmann, H. T.; Aziza, K.; Glass, L.
2001-03-01
We analyze a class of ordinary differential equations representing a simplified model of a genetic network. In this network, the model genes control the production rates of other genes by a logical function. The dynamics in these equations are represented by a directed graph on an n-dimensional hypercube (n-cube) in which each edge is directed in a unique orientation. The vertices of the n-cube correspond to orthants of state space, and the edges correspond to boundaries between adjacent orthants. The dynamics in these equations can be represented symbolically. Starting from a point on the boundary between neighboring orthants, the equation is integrated until the boundary is crossed for a second time. Each different cycle, corresponding to a different sequence of orthants that are traversed during the integration of the equation always starting on a boundary and ending the first time that same boundary is reached, generates a different letter of the alphabet. A word consists of a sequence of letters corresponding to a possible sequence of orthants that arise from integration of the equation starting and ending on the same boundary. The union of the words defines the language. Letters and words correspond to analytically computable Poincare maps of the equation. This formalism allows us to define bifurcations of chaotic dynamics of the differential equation that correspond to changes in the associated language. Qualitative knowledge about the dynamics found by integrating the equation can be used to help solve the inverse problem of determining the underlying network generating the dynamics. This work places the study of dynamics in genetic networks in a context comprising both nonlinear dynamics and the theory of computation. (c) 2001 American Institute of Physics.
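The orthant-to-orthant integration described above can be carried out exactly, because inside each orthant the equations are linear with a fixed focal point. The two-gene negative feedback loop below is a standard toy example of such a network, not one of the systems analyzed in the paper:

```python
import math

# Glass-type network sketch: inside each orthant, dx_i/dt = F_i(orthant) - x_i
# relaxes toward a focal point F, so we can jump analytically to the next
# orthant boundary and record the symbolic orthant sequence.
def focal(signs):
    s1, s2 = signs                    # signs of (x1, x2)
    f1 = 1.0 if s2 < 0 else -1.0      # gene 2 represses gene 1
    f2 = 1.0 if s1 > 0 else -1.0      # gene 1 activates gene 2
    return (f1, f2)

def orthant_sequence(x, n_crossings=8):
    signs = [1 if xi > 0 else -1 for xi in x]
    seq = []
    for _ in range(n_crossings):
        seq.append(tuple(signs))
        F = focal(signs)
        # x_i(t) = F_i + (x_i - F_i)*exp(-t) hits 0 at t = ln((x_i - F_i)/-F_i)
        # for every coordinate whose focal value lies across its boundary
        times = {i: math.log((x[i] - F[i]) / -F[i])
                 for i in range(len(x)) if signs[i] * F[i] < 0}
        if not times:
            break                     # trajectory stays in this orthant
        i_cross = min(times, key=times.get)
        t = times[i_cross]
        x = [fi + (xi - fi) * math.exp(-t) for xi, fi in zip(x, F)]
        x[i_cross] = 0.0
        signs[i_cross] = -signs[i_cross]
    return seq

seq = orthant_sequence([0.5, -0.5])
# The negative feedback loop cycles through the four orthants in order.
```

Each recorded orthant sequence plays the role of a "word" in the symbolic-dynamics language the paper constructs; richer networks generate richer languages.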
Chiu, Chia-Yi; Köhn, Hans-Friedrich
2016-09-01
The asymptotic classification theory of cognitive diagnosis (ACTCD) provided the theoretical foundation for using clustering methods that do not rely on a parametric statistical model for assigning examinees to proficiency classes. Like general diagnostic classification models, clustering methods can be useful in situations where the true diagnostic classification model (DCM) underlying the data is unknown and possibly misspecified, or where the items of a test conform to a mix of multiple DCMs. Clustering methods can also be an option when fitting advanced and complex DCMs encounters computational difficulties, ranging from excessive CPU time to outright computational infeasibility. However, the propositions of the ACTCD have only been proven for the Deterministic Input Noisy Output "AND" gate (DINA) model and the Deterministic Input Noisy Output "OR" gate (DINO) model. For other DCMs, there is no theoretical justification for using clustering to assign examinees to proficiency classes. But if clustering is to be used legitimately, then the ACTCD must cover a larger number of DCMs than just the DINA and DINO models. Thus, the purpose of this article is to prove the theoretical propositions of the ACTCD for two other important DCMs, the Reduced Reparameterized Unified Model and the General Diagnostic Model.