WorldWideScience

Sample records for network model validation

  1. Validation of protein structure models using network similarity score.

    Science.gov (United States)

    Ghosh, Sambit; Gadiyaram, Vasundhara; Vishveshwara, Saraswathi

    2017-09-01

Accurate structural validation of proteins is of extreme importance in studies such as protein structure prediction, analysis of molecular dynamics simulation trajectories, and detection of subtle changes between very similar structures. The benchmarks for today's structure validation are scoring methods such as the global distance test-total structure (GDT-TS), TM-score, and root mean square deviation (RMSD). However, there is a lack of methods that examine both the protein backbone and side-chain structures at the global connectivity level and provide information about the differences in connectivity. To address this gap, a graph-spectral method, the network similarity score (NSS), which has recently been developed to rigorously compare networks in diverse fields, is adopted to compare protein structures at both the backbone and the side-chain noncovalent connectivity levels. In this study, we validate the performance of the NSS by investigating protein structures from X-ray crystallography, modeling (including CASP models), and molecular dynamics simulations. Further, we systematically identify the local and global regions of the structures contributing to the difference in NSS through the components of the score, a feature unique to this spectral scoring scheme. It is demonstrated that the method can quantify subtle differences in connectivity relative to a reference protein structure and can form a robust basis for protein structure comparison. Additionally, we introduce a network-based method to analyze fluctuations in side-chain interactions (edge weights) in an ensemble of structures, which can be a useful tool for the analysis of MD trajectories. © 2017 Wiley Periodicals, Inc.
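The abstract does not define the NSS itself, but the underlying idea of graph-spectral comparison can be sketched: represent each structure as a residue-interaction network and compare the eigenvalue spectra of the two adjacency matrices. The cosine-similarity measure below is a generic illustration, not the published NSS formula.

```python
# Illustrative only: compare two residue-interaction networks through the
# overlap of their adjacency-matrix eigenvalue spectra. The actual NSS is
# defined in the cited paper; this shows the general spectral idea.
import numpy as np

def spectral_similarity(A, B):
    """Cosine similarity of the sorted eigenvalue spectra of two symmetric
    adjacency (or edge-weight) matrices of equal size."""
    ea = np.sort(np.linalg.eigvalsh(A))
    eb = np.sort(np.linalg.eigvalsh(B))
    return float(ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb)))

# Two toy 4-residue contact networks differing by one weak contact
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
B = A.copy()
B[0, 3] = B[3, 0] = 0.5   # hypothetical extra side-chain contact

print(spectral_similarity(A, A))  # identical networks -> 1.0
print(spectral_similarity(A, B))  # lower for the perturbed network
```

In the same spirit as the abstract, the per-eigenvector components of such a score can be inspected to localize which parts of the network drive the difference.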

  2. Validating neural-network refinements of nuclear mass models

    Science.gov (United States)

    Utama, R.; Piekarewicz, J.

    2018-01-01

Background: Nuclear astrophysics centers on the role of nuclear physics in the cosmos. In particular, nuclear masses at the limits of stability are critical to the development of stellar structure and the origin of the elements. Purpose: We aim to test and validate the predictions of recently refined nuclear mass models against the newly published AME2016 compilation. Methods: The basic paradigm underlying the recently refined nuclear mass models is to take existing state-of-the-art models and refine them through the training of an artificial neural network. Bayesian inference is used to determine the parameters of the neural network, so that statistical uncertainties are provided for all model predictions. Results: We observe a significant improvement in the Bayesian neural network (BNN) predictions relative to the corresponding "bare" models when compared to the nearly 50 new masses reported in the AME2016 compilation. Further, AME2016 estimates for the handful of isotopes most impactful in the determination of r-process abundances are found to be in fairly good agreement with our theoretical predictions. Indeed, the BNN-improved Duflo-Zuker model predicts a root-mean-square deviation relative to experiment of σrms ≃ 400 keV. Conclusions: Given the excellent performance of the BNN refinement in confronting the recently published AME2016 compilation, we are confident of its critical role in our quest for mass models of the highest quality. Moreover, as uncertainty quantification is at the core of the BNN approach, the improved mass models are in a unique position to identify those nuclei that will have the strongest impact in resolving some of the outstanding questions in nuclear astrophysics.
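The refinement strategy described (bare model plus a learned residual correction, judged by rms deviation) can be caricatured in a few lines. The "data", the bare model, and the constant-offset correction below are all synthetic stand-ins; the paper uses a Bayesian neural network, not the trivial fit sketched here.

```python
# Caricature of residual refinement: keep a "bare" physics model, fit a
# correction to its residuals, and compare rms deviations before and after.
import math

# Hypothetical (N, Z) -> measured binding-energy-like values (invented)
data = {(10, 10): 160.6, (12, 10): 176.1, (14, 12): 198.3, (16, 12): 210.2}

def bare_model(n, z):
    return 8.0 * (n + z)          # crude stand-in for a mass formula

def rms(model):
    return math.sqrt(sum((model(n, z) - v) ** 2
                         for (n, z), v in data.items()) / len(data))

# "Refinement": fit a constant offset to the bare model's residuals
residuals = [v - bare_model(n, z) for (n, z), v in data.items()]
offset = sum(residuals) / len(residuals)

def refined(n, z):
    return bare_model(n, z) + offset

print(round(rms(bare_model), 3), round(rms(refined), 3))  # refined rms is lower
```

A BNN plays the role of `refined` here, with the added benefit that its Bayesian parameters yield an uncertainty for every prediction.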

  3. Validation & verification of a Bayesian network model for aircraft vulnerability

    CSIR Research Space (South Africa)

    Schietekat, Sunelle

    2016-09-01

Full Text Available This process describes how the model is exercised and evaluated for its intended use (12th INCOSE SA Systems Engineering Conference, ISBN 978-0-620-72719-8). Conceptual model validation: concerned with validating that the conceptual model... as input to the model and to create the Conditional Probability Tables (CPTs). These include the expert knowledge data and the CmSim2 simulation data. Inferencing: when the model is exercised for the different use cases in a what-if analysis. The different...

  4. Validation of protein models by a neural network approach

    Directory of Open Access Journals (Sweden)

    Fantucci Piercarlo

    2008-01-01

Full Text Available Abstract Background The development and improvement of reliable computational methods designed to evaluate the quality of protein models is relevant in the context of protein structure refinement, which has recently been identified as one of the bottlenecks limiting the quality and usefulness of protein structure prediction. Results In this contribution, we present a computational method (Artificial Intelligence Decoys Evaluator: AIDE) which is able to consistently discriminate between correct and incorrect protein models. In particular, the method is based on neural networks that use as input 15 structural parameters, including energy, solvent-accessible surface, hydrophobic contacts and secondary structure content. The results obtained with AIDE on a set of decoy structures were evaluated using statistical indicators such as Pearson correlation coefficients, Znat, fraction enrichment, and ROC plots. It turned out that AIDE's performance is comparable, and often complementary, to that of available state-of-the-art learning-based methods. Conclusion In light of the results obtained with AIDE, as well as its comparison with available learning-based methods, it can be concluded that AIDE can be successfully used to evaluate the quality of protein structures. The use of AIDE in combination with other evaluation tools is expected to further enhance protein refinement efforts.
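One of the evaluation statistics named in the abstract, the Pearson correlation between a predicted quality score and a reference measure, is simple enough to show self-contained. The score lists below are invented for illustration, not AIDE's outputs.

```python
# Minimal Pearson correlation, one of the statistics used to evaluate
# model-quality predictors such as AIDE (scores below are hypothetical).
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

predicted_quality = [0.9, 0.7, 0.4, 0.2, 0.8]    # hypothetical predictor scores
reference_quality = [0.85, 0.65, 0.5, 0.1, 0.75]  # e.g. similarity to native
print(round(pearson(predicted_quality, reference_quality), 3))
```

A high correlation indicates that the predictor ranks decoys consistently with the reference measure, which is the property such validation studies test.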

  5. Credibility and validation of simulation models for tactical IP networks

    NARCIS (Netherlands)

    Boltjes, B.; Thiele, F.; Diaz, I.F.

    2007-01-01

    The task of TNO is to provide predictions of the scalability and performance of the new all-IP tactical networks of the Royal Netherlands Army (RNLA) that are likely to be fielded. The inherent properties of fielded tactical networks, such as low bandwidth and Quality of Service (QoS) policies

  6. A solvable queueing network model for railway networks and its validation and applications for the Netherlands

    NARCIS (Netherlands)

    Huisman, Tijs; Boucherie, Richardus J.; van Dijk, N.M.

    2002-01-01

    The performance of new railway networks cannot be measured or simulated, as no detailed train schedules are available. Railway infrastructure and capacities are to be determined long before the actual traffic is known. This paper therefore proposes a solvable queueing network model to compute
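The appeal of a solvable queueing model, as opposed to simulation, is that performance follows from closed-form expressions. A toy sketch with a single M/M/1 station standing in for a track section; the paper's actual network model is considerably more elaborate.

```python
# Toy illustration of closed-form queueing evaluation: mean time in system
# for an M/M/1 station (a crude stand-in for one track section).
def mm1_delay(arrival_rate, service_rate):
    """Mean time in system for an M/M/1 queue (requires utilisation < 1)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable: utilisation must be below 1")
    return 1.0 / (service_rate - arrival_rate)

# e.g. 10 trains/hour offered to a section that can handle 12/hour
print(mm1_delay(10, 12))  # mean delay of 0.5 hours in the section
```

Because such expressions need only rates, not detailed timetables, they can be evaluated long before actual traffic is known, which is exactly the planning situation the abstract describes.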

  7. Multi-criteria validation of artificial neural network rainfall-runoff modeling

    Directory of Open Access Journals (Sweden)

    R. Modarres

    2009-03-01

Full Text Available In this study we propose a comprehensive multi-criteria validation test for rainfall-runoff modeling by artificial neural networks (ANNs). The study applies 17 global statistics and 3 additional non-parametric tests to evaluate the ANNs. The weakness of global statistics for validating ANNs is demonstrated by rainfall-runoff modeling of the Plasjan Basin in the western region of the Zayandehrud watershed, Iran. Although the global statistics show that the multilayer perceptron with 4 hidden layers (MLP4) is the best ANN for the basin compared with other MLP networks and an empirical regression model, the non-parametric tests illustrate that neither the ANNs nor the regression model are able to reproduce the probability distribution of observed runoff in the validation phase. However, the MLP4 network is the best network for reproducing the mean and variance of the observed runoff according to the non-parametric tests. The performance of the ANNs and the empirical model was also examined for low, medium and high flows. Although the MLP4 network gives the best performance among the ANNs for low, medium and high flows based on different statistics, the empirical model shows better results. However, none of the models is able to simulate the frequency distribution of low, medium and high flows according to the non-parametric tests. This study illustrates that modelers should select appropriate and relevant evaluation measures from the set of existing metrics based on the particular requirements of each individual application.
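The contrast the abstract draws, a good global statistic alongside a failed distributional test, can be reproduced with two small implementations. These are illustrative stand-ins (one of many global statistics, one non-parametric test), not the paper's full set of 17 + 3, and the flow series are invented.

```python
# One global goodness-of-fit statistic (Nash-Sutcliffe efficiency) next to a
# non-parametric check (two-sample Kolmogorov-Smirnov statistic), showing why
# a model can score well globally yet miss the runoff distribution.
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def ks_statistic(x, y):
    """Maximum distance between the two empirical CDFs."""
    def ecdf(data, t):
        return sum(1 for d in data if d <= t) / len(data)
    grid = sorted(x) + sorted(y)
    return max(abs(ecdf(x, t) - ecdf(y, t)) for t in grid)

obs = [1.0, 2.0, 3.0, 4.0, 10.0, 2.5, 1.5, 3.5]   # invented observed runoff
sim = [1.2, 2.1, 2.8, 4.2, 6.0, 2.6, 1.4, 3.3]    # underestimates the peak

print(round(nse(obs, sim), 3))           # decent global score
print(round(ks_statistic(obs, sim), 3))  # yet the distributions still differ
```

Large peak errors barely dent the NSE here, while the KS distance still flags the distributional mismatch, which is the study's central point.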

  8. Parameter Estimation and Model Validation of Nonlinear Dynamical Networks

    Energy Technology Data Exchange (ETDEWEB)

    Abarbanel, Henry [Univ. of California, San Diego, CA (United States); Gill, Philip [Univ. of California, San Diego, CA (United States)

    2015-03-31

In the performance period of this work under a DOE contract, the co-PIs, Philip Gill and Henry Abarbanel, developed new methods of statistical data assimilation for problems of DOE interest, including geophysical and biological problems. This included numerical optimization algorithms for variational principles and new parallel-processing Monte Carlo routines for performing the path integrals of statistical data assimilation. These results have been summarized in the monograph "Predicting the Future: Completing Models of Observed Complex Systems" by Henry Abarbanel, published by Springer-Verlag in June 2013. Additional results and details have appeared in the peer-reviewed literature.

  9. Modeling users' activity on twitter networks: validation of Dunbar's number.

    Directory of Open Access Journals (Sweden)

    Bruno Gonçalves

    Full Text Available Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100-200 stable relationships. Thus, the 'economy of attention' is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior.

  10. Modeling users' activity on Twitter networks: validation of Dunbar's number

    Science.gov (United States)

    Goncalves, Bruno; Perra, Nicola; Vespignani, Alessandro

    2012-02-01

Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100-200 stable relationships. Thus, the "economy of attention" is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior.
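The finite-priority-queue mechanism the authors propose can be sketched in miniature: an agent keeps a bounded set of contacts and preferentially re-contacts existing ties, so the number of sustained relationships saturates. The cap, probabilities, and update rule below are illustrative choices, not the paper's parameters.

```python
# Minimal sketch of a bounded-attention contact model: ties saturate at a
# Dunbar-like cap because attention is finite (all parameters invented).
import random

random.seed(42)

QUEUE_LIMIT = 150        # cognitive cap on maintained ties (Dunbar-like)
P_NEW = 0.1              # chance of contacting a brand-new person

def simulate(steps, population=10_000):
    contacts = {}        # tie -> number of interactions
    for _ in range(steps):
        if len(contacts) < QUEUE_LIMIT and random.random() < P_NEW:
            new = random.randrange(population)
            contacts[new] = contacts.get(new, 0) + 1
        elif contacts:
            # preferential re-contact: heavier ties are chosen more often
            ties = list(contacts)
            weights = [contacts[t] for t in ties]
            chosen = random.choices(ties, weights=weights)[0]
            contacts[chosen] += 1
    return contacts

ties = simulate(20_000)
print(len(ties))  # saturates at or below QUEUE_LIMIT
```

The saturation is the qualitative signature the empirical Twitter analysis tests for: activity grows without bound, but the count of stable ties does not.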

  11. Validation and comparison of geostatistical and spline models for spatial stream networks.

    Science.gov (United States)

    Rushworth, A M; Peterson, E E; Ver Hoef, J M; Bowman, A W

    2015-08-01

    Scientists need appropriate spatial-statistical models to account for the unique features of stream network data. Recent advances provide a growing methodological toolbox for modelling these data, but general-purpose statistical software has only recently emerged, with little information about when to use different approaches. We implemented a simulation study to evaluate and validate geostatistical models that use continuous distances, and penalised spline models that use a finite discrete approximation for stream networks. Data were simulated from the geostatistical model, with performance measured by empirical prediction and fixed effects estimation. We found that both models were comparable in terms of squared error, with a slight advantage for the geostatistical models. Generally, both methods were unbiased and had valid confidence intervals. The most marked differences were found for confidence intervals on fixed-effect parameter estimates, where, for small sample sizes, the spline models underestimated variance. However, the penalised spline models were always more computationally efficient, which may be important for real-time prediction and estimation. Thus, decisions about which method to use must be influenced by the size and format of the data set, in addition to the characteristics of the environmental process and the modelling goals. ©2015 The Authors. Environmetrics published by John Wiley & Sons, Ltd.
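The geostatistical side of the comparison can be sketched with simple kriging under an exponential covariance. For brevity this uses straight-line distance on a 1-D transect; real stream-network models replace it with hydrologic (in-stream) distance and tail-up/tail-down covariances, so this only shows the prediction mechanics.

```python
# Hedged sketch: simple kriging with an exponential covariance on a 1-D
# transect (stand-in for a stream reach). Sill/range values are invented.
import numpy as np

def exp_cov(d, sill=1.0, rng=2.0):
    return sill * np.exp(-d / rng)

def simple_krige(x_obs, z_obs, x_new, mean=0.0):
    d_oo = np.abs(x_obs[:, None] - x_obs[None, :])   # obs-obs distances
    d_no = np.abs(x_new - x_obs)                     # new-obs distances
    C = exp_cov(d_oo) + 1e-9 * np.eye(len(x_obs))    # jitter for stability
    w = np.linalg.solve(C, exp_cov(d_no))            # kriging weights
    return mean + w @ (z_obs - mean)

x_obs = np.array([0.0, 1.0, 3.0, 4.0])
z_obs = np.array([0.5, 0.8, -0.2, -0.4])
print(simple_krige(x_obs, z_obs, 2.0))  # interpolates between neighbours
```

The penalised-spline alternative replaces this dense covariance solve with a finite basis, which is where its computational advantage noted in the abstract comes from.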

  12. Validation of a Novel Traditional Chinese Medicine Pulse Diagnostic Model Using an Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Anson Chui Yan Tang

    2012-01-01

Full Text Available In view of the lack of a quantifiable traditional Chinese medicine (TCM) pulse diagnostic model, a novel TCM pulse diagnostic model was introduced to quantify pulse diagnosis. Content validation was performed with a panel of TCM doctors. Criterion validation was tested with essential hypertension. The gold standard was brachial blood pressure measured by a sphygmomanometer. Two hundred and sixty subjects were recruited (139 in the normotensive group and 121 in the hypertensive group). A TCM doctor palpated pulses at the left and right cun, guan, and chi points, and quantified pulse qualities according to eight elements (depth, rate, regularity, width, length, smoothness, stiffness, and strength) on a visual analog scale. An artificial neural network was used to develop a pulse diagnostic model differentiating essential hypertension from normotension. Accuracy, specificity, and sensitivity were compared among various diagnostic models. About 80% accuracy was attained by all models. Their specificity and sensitivity varied, ranging from 70% to nearly 90%. This suggests that the novel TCM pulse diagnostic model is valid in terms of its content and diagnostic ability.
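The accuracy/sensitivity/specificity comparison reduces to confusion-matrix arithmetic. The counts below are hypothetical (chosen only to match the group sizes of 121 hypertensive and 139 normotensive subjects), not the study's results.

```python
# Confusion-matrix arithmetic behind the reported diagnostic statistics
# (tp/tn/fp/fn counts below are invented).
def diagnostic_metrics(tp, tn, fp, fn):
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # hypertensives correctly flagged
        "specificity": tn / (tn + fp),   # normotensives correctly cleared
    }

m = diagnostic_metrics(tp=100, tn=110, fp=29, fn=21)
print({k: round(v, 3) for k, v in m.items()})
```

Note that accuracy alone can mask an imbalance between the two error types, which is why all three statistics are compared across the candidate models.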

  13. An active distribution network model for smart grid control and protection studies : model validation progress

    OpenAIRE

    Mahseredjian, Jean; Haddadi, Aboutaleb; HOOSHYAR, Hossein; Vanfretti, Luigi; Dufour, Christian

    2017-01-01

    This paper presents the implementation of an Active Distribution Network (ADN) model and its qualitative assessment using different off-line and real-time simulation tools. The objective is to provide software-to-software verification for the establishment of the model as a potential benchmark. Expanding upon the authors’ previous work [7], this paper provides additional simulation results, cross-examination of the models, and presents the latest modifications incorporated to address practica...

  14. Using a small scale wireless sensor network for model validation. Two case studies

    Energy Technology Data Exchange (ETDEWEB)

    Lengfeld, Katharina; Ament, Felix [Hamburg Univ. (Germany). Meteorological Inst.; Zacharias, Stefan [Deutscher Wetterdienst, Freiburg im Breisgau (Germany)

    2013-10-15

In this paper, the potential of a network of low-cost weather stations for validating microscale model simulations and for forcing surface-atmosphere transfer schemes is investigated in two case studies. Transfer schemes often do not account for small-scale variability of the earth's surface, because measurements of the atmospheric conditions do not exist at a spatial resolution high enough to force the models. To overcome this issue, in this study a small-scale network of meteorological stations is used to obtain measurements at high spatial and temporal resolution. The observations carried out during the measurement campaign are compared to air temperature and specific humidity simulations of the mesoscale atmospheric model FOOT3DK (Flow Over Orographically-Structured Terrain - 3 Dimensional Model (Koelner Version)). This comparison indicates that FOOT3DK simulates either air temperature or specific humidity satisfactorily for each station at the lowest model level, depending on the dominant land use class within each grid cell. The influence of heterogeneous forcing and vegetation on heat flux modelling is studied using the soil-vegetation-atmosphere transfer scheme TERRA. The observations of the measurement campaign are used as input for four different runs with homogeneous and heterogeneous forcing and vegetation. Heterogeneous vegetation reduces the bias between the grid cells, and heterogeneous forcing reduces the random error for each grid cell. (orig.)

  15. Validation Study of CODES Dragonfly Network Model with Theta Cray XC System

    Energy Technology Data Exchange (ETDEWEB)

    Mubarak, Misbah [Argonne National Lab. (ANL), Argonne, IL (United States); Ross, Robert B. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-05-31

    This technical report describes the experiments performed to validate the MPI performance measurements reported by the CODES dragonfly network simulation with the Theta Cray XC system at the Argonne Leadership Computing Facility (ALCF).

  16. Validation workflow for a clinical Bayesian network model in multidisciplinary decision making in head and neck oncology treatment.

    Science.gov (United States)

    Cypko, Mario A; Stoehr, Matthaeus; Kozniewski, Marcin; Druzdzel, Marek J; Dietz, Andreas; Berliner, Leonard; Lemke, Heinz U

    2017-11-01

Oncological treatment is becoming increasingly complex, and therefore decision making in multidisciplinary teams is becoming the key activity in clinical pathways. The increased complexity is related to the number and variability of possible treatment decisions that may be relevant to a patient. In this paper, we describe validation of a multidisciplinary cancer treatment decision in the clinical domain of head and neck oncology. Probabilistic graphical models and corresponding inference algorithms, in the form of Bayesian networks (BNs), can support complex decision-making processes by providing mathematically reproducible and transparent advice. The quality of BN-based advice depends on the quality of the model. Therefore, it is vital to validate the model before it is applied in practice. For an example BN subnetwork of laryngeal cancer with 303 variables, we evaluated 66 patient records. To validate the model on this dataset, a validation workflow was applied in combination with quantitative and qualitative analyses. In the subsequent analyses, we observed four sources of imprecise predictions: incorrect data, incomplete patient data, outvoting of relevant observations, and an incorrect model. Finally, the four problems were solved by modifying the data and the model. The presented validation effort scales with the model complexity. For simpler models, the validation workflow is the same, although it may require fewer validation methods. The success of validation depends on the model's well-founded knowledge base. The remaining parts of the laryngeal cancer model may disclose additional sources of imprecise predictions.
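At its core, the advice such a network produces is a posterior probability obtained by Bayes' rule. A two-node caricature of that computation; the probabilities are invented and bear no relation to the 303-variable laryngeal-cancer model.

```python
# Two-node caricature of Bayesian-network inference: update belief in a
# hypothesis given one observation via Bayes' rule (numbers invented).
def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    evidence = prior * p_obs_given_h + (1 - prior) * p_obs_given_not_h
    return prior * p_obs_given_h / evidence

# e.g. prior belief 0.2, observation 3.6x more likely under the hypothesis
print(round(posterior(0.2, 0.9, 0.25), 3))
```

Validation then amounts to checking that such posteriors, chained across the full network, agree with expert judgment and patient outcomes, which is what the described workflow systematizes.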

  17. Validation Techniques of network harmonic models based on switching of a series linear component and measuring resultant harmonic increments

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

In this paper two methods of validation of transmission network harmonic models are introduced. The methods were developed as a result of the work presented in [1]. The first method allows calculating the transfer harmonic impedance between two nodes of a network. Switching a linear, series network component that links the nodes causes a variation of the background harmonic voltage distortion at the nodes. Harmonic current in the series element is measured together with the harmonic voltages at both nodes, before and after the switching. The obtained incremental values of harmonic voltages and the current are used for calculation of the transfer harmonic impedance between the nodes. The determined transfer harmonic impedance can be used to validate a computer model of the network. The second method is an extension of the first one. It allows switching a series element that contains a shunt branch...
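The incremental principle in the first method boils down to phasor division: the change in harmonic voltage caused by the switching, divided by the change in harmonic current through the switched element. A minimal sketch with invented 5th-harmonic phasors.

```python
# Transfer harmonic impedance from pre/post-switching phasors (complex
# representation). The voltage and current values below are invented.
def transfer_impedance(v_before, v_after, i_before, i_after):
    """Z = delta-V / delta-I for one harmonic order."""
    dv = v_after - v_before
    di = i_after - i_before
    return dv / di

# Hypothetical 5th-harmonic phasors (volts, amps)
z = transfer_impedance(v_before=230 + 12j, v_after=228 + 9j,
                       i_before=4 + 1j, i_after=5 + 0.2j)
print(abs(z), z)  # impedance magnitude and complex value
```

Comparing this measured impedance against the same quantity computed from the harmonic model is the validation step the abstract describes.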

  18. Network testbed creation and validation

    Energy Technology Data Exchange (ETDEWEB)

    Thai, Tan Q.; Urias, Vincent; Van Leeuwen, Brian P.; Watts, Kristopher K.; Sweeney, Andrew John

    2017-03-21

Embodiments of network testbed creation and validation processes are described herein. A "network testbed" is a replicated environment used to validate a target network or an aspect of its design. Embodiments describe a network testbed that comprises virtual testbed nodes executed via a plurality of physical infrastructure nodes. The virtual testbed nodes utilize these hardware resources as a network "fabric," thereby enabling rapid configuration and reconfiguration of the virtual testbed nodes without requiring reconfiguration of the physical infrastructure nodes. Thus, in contrast to prior art solutions, which require that a tester manually build an emulated environment of physically connected network devices, embodiments receive or derive a target network description and build out a replica of this description using virtual testbed nodes executed via the physical infrastructure nodes. This process allows for the creation of very large (e.g., tens of thousands of network elements) and/or very topologically complex test networks.

  19. Network testbed creation and validation

    Science.gov (United States)

    Thai, Tan Q.; Urias, Vincent; Van Leeuwen, Brian P.; Watts, Kristopher K.; Sweeney, Andrew John

    2017-04-18

Embodiments of network testbed creation and validation processes are described herein. A "network testbed" is a replicated environment used to validate a target network or an aspect of its design. Embodiments describe a network testbed that comprises virtual testbed nodes executed via a plurality of physical infrastructure nodes. The virtual testbed nodes utilize these hardware resources as a network "fabric," thereby enabling rapid configuration and reconfiguration of the virtual testbed nodes without requiring reconfiguration of the physical infrastructure nodes. Thus, in contrast to prior art solutions, which require that a tester manually build an emulated environment of physically connected network devices, embodiments receive or derive a target network description and build out a replica of this description using virtual testbed nodes executed via the physical infrastructure nodes. This process allows for the creation of very large (e.g., tens of thousands of network elements) and/or very topologically complex test networks.

  20. Validation of artificial neural network models for predicting biochemical markers associated with male infertility.

    Science.gov (United States)

    Vickram, A S; Kamini, A Rao; Das, Raja; Pathy, M Ramesh; Parameswari, R; Archana, K; Sridharan, T B

    2016-08-01

Seminal fluid is the secretion of many glands, comprising several organic and inorganic compounds including free amino acids, proteins, fructose, glucosidase, zinc, and other scavenging elements such as Mg²⁺, Ca²⁺, K⁺, and Na⁺. Therefore, with a view to developing novel approaches to, and proper diagnosis of, male infertility, an overall understanding of the biochemical and molecular composition and its role in the regulation of sperm quality is highly desirable. Perhaps this can be achieved through artificial intelligence. This study aimed to elucidate and predict various biochemical markers present in human seminal plasma with three different neural network models. A total of 177 semen samples were collected for this research (both fertile and infertile samples) and immediately processed to prepare a semen analysis report, based on the protocol of the World Health Organization (WHO [2010]). The semen samples were then categorized into oligoasthenospermia (n=35), asthenospermia (n=35), azoospermia (n=22), normospermia (n=34), oligospermia (n=34), and control (n=17). The major biochemical parameters, such as total protein content, fructose, glucosidase, and zinc content, were determined by standard protocols. All the biochemical markers were predicted by using three different artificial neural network (ANN) models with semen parameters as inputs. Of the three models, the back-propagation neural network model (BPNN) yielded the best results, with mean absolute errors of 0.025, -0.080, 0.166, and -0.057 for protein, fructose, glucosidase, and zinc, respectively. This suggests that the BPNN can be used to predict biochemical parameters for the proper diagnosis of male infertility in assisted reproductive technology (ART) centres. AAS: atomic absorption spectroscopy; AI: artificial intelligence; ANN: artificial neural networks; ART: assisted reproductive technology; BPNN: back propagation neural network model; DT: decision trees; MLP: multilayer perceptron; PESA: percutaneous

21. Validation of Tilt Gain under Realistic Path Loss Model and Network Scenario

    DEFF Research Database (Denmark)

    Nguyen, Huan Cong; Rodriguez, Ignacio; Sørensen, Troels Bundgaard

    2013-01-01

Despite being a simple and commonly applied radio optimization technique, the impact of base station antenna downtilt on practical network performance is not well understood. Most published studies based on empirical path loss models report tilt angles and performance gains that are far higher than practical experience suggests. Based on a practical LTE scenario, we show in this paper that the discrepancy partly lies in the path loss model: a more detailed semi-deterministic model leads to both lower gains, in terms of SINR, outage probability and downlink throughput, and lower optimum tilt settings. Furthermore, we show that a simple geometrically based tilt optimization algorithm can outperform other tilt profiles, including the setting applied by the cellular operator in this specific case. In general, the network performance is not highly sensitive to the tilt...
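A common form of "geometrically based" tilt rule (not necessarily the exact algorithm evaluated in the paper) aims the antenna main beam at the cell edge and adds half the vertical beamwidth, so the edge sits on the upper 3 dB contour. A sketch with invented site geometry:

```python
# Generic geometric downtilt heuristic: point the boresight at the cell edge
# and add half the vertical beamwidth. Site parameters below are invented.
import math

def geometric_tilt(height_m, cell_radius_m, vert_beamwidth_deg):
    boresight = math.degrees(math.atan(height_m / cell_radius_m))
    return boresight + vert_beamwidth_deg / 2.0

# e.g. 30 m mast, 500 m cell radius, 6.5 degree vertical beamwidth
print(round(geometric_tilt(30.0, 500.0, 6.5), 2))  # downtilt in degrees
```

Such a rule depends only on site geometry, which is why it can be applied per-site without the exhaustive sweeps that empirical-model studies typically perform.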

22. Validation and quantification of uncertainty in coupled climate models using network analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bracco, Annalisa [Georgia Inst. of Technology, Atlanta, GA (United States)

    2015-08-10

We developed a fast, robust and scalable methodology to examine, quantify, and visualize climate patterns and their relationships. It is based on a set of notions, algorithms and metrics used in the study of graphs, referred to as complex network analysis. This approach can be applied to explain known climate phenomena in terms of an underlying network structure and to uncover regional and global linkages in the climate system, while comparing general circulation model outputs with observations. The proposed method is based on a two-layer network representation and is substantially new among the network methodologies available for climate studies. At the first layer, gridded climate data are used to identify "areas", i.e., geographical regions that are highly homogeneous in terms of the given climate variable. At the second layer, the identified areas are interconnected with links of varying strength, forming a global climate network. The robustness of the method (i.e., its ability to separate topologically distinct fields while correctly identifying similarities) has been extensively tested, and the method has been shown to provide a reliable, fast framework for comparing and ranking the ability of climate models to reproduce observed climate patterns and their connectivity. We further developed the methodology to account for lags in the connectivity between climate patterns and refined our area identification algorithm to account for autocorrelation in the data. The new methodology based on complex network analysis has been applied to state-of-the-art climate model simulations that participated in the last IPCC (Intergovernmental Panel on Climate Change) assessment to verify their performance, quantify uncertainties, and uncover changes in global linkages between past and future projections. Network properties of modeled sea surface temperature and rainfall over 1956-2005 have been constrained towards observations or reanalysis data sets.
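The second layer of the described construction, linking identified areas by the strength of the statistical association between their time series, can be sketched directly. The regional series below are synthetic stand-ins, and the correlation threshold is an illustrative choice.

```python
# Sketch of the area-linking layer: correlate regional mean time series and
# keep links above a threshold (synthetic data, illustrative threshold).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(240)                        # e.g. 20 years of monthly data
base = np.sin(2 * np.pi * t / 12)         # shared seasonal signal
areas = {
    "A": base + 0.1 * rng.standard_normal(t.size),
    "B": base + 0.1 * rng.standard_normal(t.size),  # strongly tied to A
    "C": rng.standard_normal(t.size),               # unrelated region
}

def build_links(series, threshold=0.5):
    names = list(series)
    links = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = np.corrcoef(series[a], series[b])[0, 1]
            if abs(r) >= threshold:
                links.append((a, b, round(float(r), 2)))
    return links

print(build_links(areas))  # a strong A-B link; C stays unconnected
```

Comparing the link structure built from model output with the one built from observations is then a basis for the model ranking the abstract describes.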

23. Groundwater Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed E. Hassan

    2006-01-24

Models have an inherent uncertainty. The difficulty of fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions requires developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence-building and long-term iterative process (Hassan, 2004a): model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). A hierarchical approach to making this determination is proposed, based on computing five measures or metrics and following a decision tree to determine whether a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study, assuming field data either consistent with the model or significantly different from the model results. In both cases it is shown how the two measures lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating that they are appropriate measures for evaluating model realizations. The use of validation
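The realization-screening idea, scoring each stochastic realization against validation data and counting how many pass, can be sketched in a few lines. The metric and cutoff here are illustrative placeholders, not the report's five measures or its decision tree.

```python
# Sketch of acceptance screening for stochastic realizations: score each
# realization against field data and count the acceptable ones
# (synthetic data, illustrative RMSE cutoff).
import math
import random

random.seed(7)

field_data = [1.0, 1.2, 0.9, 1.1]   # invented validation measurements

def rmse(realization):
    return math.sqrt(sum((r - f) ** 2 for r, f in zip(realization, field_data))
                     / len(field_data))

# 100 synthetic realizations scattered around the field data
realizations = [[f + random.gauss(0, 0.3) for f in field_data]
                for _ in range(100)]

CUTOFF = 0.3
accepted = [r for r in realizations if rmse(r) <= CUTOFF]
print(f"{len(accepted)} of {len(realizations)} realizations acceptable")
```

Whether the accepted fraction is "sufficient" is exactly the question the hierarchical, decision-tree-based procedure in the report is designed to answer.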

24. A Global Lake Ecological Observatory Network (GLEON) for synthesising high-frequency sensor data for validation of deterministic ecological models

    Science.gov (United States)

    David, Hamilton P; Carey, Cayelan C.; Arvola, Lauri; Arzberger, Peter; Brewer, Carol A.; Cole, Jon J; Gaiser, Evelyn; Hanson, Paul C.; Ibelings, Bas W; Jennings, Eleanor; Kratz, Tim K; Lin, Fang-Pang; McBride, Christopher G.; de Motta Marques, David; Muraoka, Kohji; Nishri, Ami; Qin, Boqiang; Read, Jordan S.; Rose, Kevin C.; Ryder, Elizabeth; Weathers, Kathleen C.; Zhu, Guangwei; Trolle, Dennis; Brookes, Justin D

    2014-01-01

    A Global Lake Ecological Observatory Network (GLEON; www.gleon.org) has formed to provide a coordinated response to the need for scientific understanding of lake processes, utilising technological advances available from autonomous sensors. The organisation embraces a grassroots approach to engage researchers from varying disciplines, sites spanning geographic and ecological gradients, and novel sensor and cyberinfrastructure to synthesise high-frequency lake data at scales ranging from local to global. The high-frequency data provide a platform to rigorously validate process-based ecological models because model simulation time steps are better aligned with sensor measurements than with lower-frequency, manual samples. Two case studies from Trout Bog, Wisconsin, USA, and Lake Rotoehu, North Island, New Zealand, are presented to demonstrate that in the past, ecological model outputs (e.g., temperature, chlorophyll) have been relatively poorly validated based on a limited number of directly comparable measurements, both in time and space. The case studies demonstrate some of the difficulties of mapping sensor measurements directly to model state variable outputs as well as the opportunities to use deviations between sensor measurements and model simulations to better inform process understanding. Well-validated ecological models provide a mechanism to extrapolate high-frequency sensor data in space and time, thereby potentially creating a fully 3-dimensional simulation of key variables of interest.

  5. Calibration and validation of a genetic regulatory network model describing the production of the protein Hunchback in Drosophila early development.

    Science.gov (United States)

    Dilão, Rui; Muraro, Daniele

    2010-01-01

    We fit the parameters of a differential equations model describing the production of the gap-gene proteins Hunchback and Knirps along the antero-posterior axis of the embryo of Drosophila. As initial data for the differential equations model, we take the antero-posterior distribution of the proteins Bicoid, Hunchback and Tailless at the beginning of cleavage cycle 14. We calibrate and validate the model with experimental data using single- and multi-objective evolutionary optimization techniques. In the multi-objective optimization technique, we compute the associated Pareto fronts. We analyze the cross-regulation mechanism between the gap-gene protein pair Hunchback-Knirps and we show that the posterior distribution of Hunchback follows the experimental data if Hunchback is negatively regulated by the Huckebein protein. This approach enables us to predict the posterior localization on the embryo of the protein Huckebein, and to validate with the experimental data the genetic regulatory network responsible for the antero-posterior distribution of the gap-gene protein Hunchback. We discuss the importance of Pareto multi-objective optimization techniques in the calibration and validation of biological models. 2010 Académie des sciences. Published by Elsevier SAS. All rights reserved.

  6. Influential Factors of Collaborative Networks in Manufacturing: Validation of a Conceptual Model

    OpenAIRE

    Danny Wee Hock Quik; Nevan Wright; Ammar Rashid; Sivadass Thiruchelvam

    2015-01-01

    The purpose of the study is to identify influential factors in the use of collaborative networks within the context of manufacturing. The study aims to investigate factors that influence employees’ learning, and to bridge the gap between theory and praxis in collaborative networks in manufacturing. The study further extends the boundary of a collaborative network beyond enterprises to include suppliers, customers, and external stakeholders. It provides a holistic perspective of collaborative ...

  7. Modelling and Initial Validation of the DYMO Routing Protocol for Mobile Ad-Hoc Networks

    DEFF Research Database (Denmark)

    Espensen, Kristian Asbjørn Leth; Kjeldsen, Mads Keblov; Kristensen, Lars Michael

    2008-01-01

    A mobile ad-hoc network (MANET) is an infrastructureless network established by a set of mobile devices using wireless communication. The Dynamic MANET On-demand (DYMO) protocol is a routing protocol for multi-hop communication in MANETs currently under development by the Internet Engineering Task...

  8. Modeling Epidemic Network Failures

    DEFF Research Database (Denmark)

    Ruepp, Sarah Renée; Fagertun, Anna Manolova

    2013-01-01

    This paper presents the implementation of a failure propagation model for transport networks when multiple failures occur, resulting in an epidemic. We implement the Susceptible Infected Disabled (SID) epidemic model and validate it by comparing it to analytical solutions. Furthermore, we evaluate...
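As a rough illustration of what a compartmental SID model looks like, here is a forward-Euler integration of one plausible formulation. The paper's actual equations, rates, and network coupling are not given in the abstract, so the flow terms and the parameter values below are assumptions.

```python
# Susceptible-Infected-Disabled (SID) compartments, forward-Euler sketch.
# beta (infection rate) and delta (disabling rate) are illustrative values.
def simulate_sid(s0, i0, d0, beta, delta, dt=0.01, steps=10000):
    s, i, d = s0, i0, d0
    for _ in range(steps):
        new_infections = beta * s * i   # susceptible nodes becoming infected
        new_disabled = delta * i        # infected nodes becoming disabled
        s -= dt * new_infections
        i += dt * (new_infections - new_disabled)
        d += dt * new_disabled
    return s, i, d

# Fractions of network nodes in each state; total population is conserved.
s, i, d = simulate_sid(s0=0.99, i0=0.01, d0=0.0, beta=0.5, delta=0.1)
```

Comparing such a numerical solution against the known analytical solution of the same ODE system is one way to perform the kind of validation the abstract describes.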

  9. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, the interest for computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged but the views have not succeeded in capturing the diversity of validation methods. The wide variety...... of models has been somewhat narrow-minded reducing the notion of validation to establishment of truth. This article puts forward the diversity in applications of simulation models that demands a corresponding diversity in the notion of validation....

  10. Model Validation Status Review

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  11. Perpetual Model Validation

    Science.gov (United States)

    2017-03-01

    Perpetual Model Validation: final in-house technical report, Air Force Research Laboratory, March 2017, covering a performance period from 2014 to September 2016. Approved for public release; distribution unlimited. Topics include trustworthy architectures for systems of systems; modeling, assessment, and vulnerability analysis; and assessment and measurement for end-to-end system analysis.

  12. Characteristic Time Model Validation

    Science.gov (United States)

    1988-09-01

    Characteristic Time Model Validation: final technical report (unclassified) by Tallio, R. C. Prior, Jr., and A. M. Mellor, performed under a U.S. Army Research Office contract (Research Triangle Park, NC 27709-2211). Subject terms: two-dimensional confined shear layers; two-dimensional prefilming airblast atomizers; characteristic time model.

  13. Network Security Validation Using Game Theory

    Science.gov (United States)

    Papadopoulou, Vicky; Gregoriades, Andreas

    Non-functional requirements (NFRs) such as network security have recently gained widespread attention in distributed information systems. Despite their importance, however, there is no systematic approach to validating these requirements given the complexity and uncertainty characterizing modern networks. Traditionally, the specification of network security requirements has been the result of a reactive process. This, however, limited the immunity of the distributed systems that depended on these networks. Security requirements specification needs a proactive approach. Networks' infrastructure is constantly under attack by hackers and malicious software that aim to break into computers. To combat these threats, network designers need sophisticated security validation techniques that will guarantee a minimum level of security for their future networks. This paper presents a game-theoretic approach to security requirements validation. An introduction to game theory is presented along with an example that demonstrates the application of the approach.
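A minimal game-theoretic sketch of the idea: the defender chooses the countermeasure whose worst-case damage over all attacker responses is smallest (a pure-strategy minimax on a zero-sum game). The countermeasures, attacks, and payoff values below are invented for illustration; the paper's actual game formulation may differ.

```python
# Defender picks a countermeasure, attacker picks an attack vector.
# Entries are illustrative "damage" values the attacker maximizes
# and the defender minimizes.
damage = {
    ("firewall", "port_scan"): 1, ("firewall", "phishing"): 8,
    ("training", "port_scan"): 6, ("training", "phishing"): 2,
}

defenses = {d for d, _ in damage}
attacks = {a for _, a in damage}

# Security level of each defense: worst-case damage over attacker replies.
worst_case = {d: max(damage[(d, a)] for a in attacks) for d in defenses}
best_defense = min(worst_case, key=worst_case.get)
```

Here "training" is selected because its worst case (damage 6) beats the firewall's worst case (damage 8); a requirement such as "worst-case damage below a stated bound" can then be validated directly against `worst_case`.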

  14. Experimental Testing and Model Validation of a Decoupled-Phase On-Load Tap Changer Transformer in an Active Network

    DEFF Research Database (Denmark)

    Zecchino, Antonio; Hu, Junjie; Coppo, Massimiliano

    2016-01-01

    this problem, distribution transformers with on-load tapping capability are under development. This paper presents the model and experimental validation of a 35 kVA three-phase power distribution transformer with independent on-load tap changer control capability on each phase. With the purpose of investigating...... to reproduce the main feature of an unbalanced grid. The experimental activities are recreated by carrying out dynamic simulation studies, aiming at validating the implemented models of both the transformer and the other grid components. Phase-neutral voltage deviations are limited, proving......

  15. Collaborative networks: Reference modeling

    NARCIS (Netherlands)

    Camarinha-Matos, L.M.; Afsarmanesh, H.

    2008-01-01

    Collaborative Networks: Reference Modeling works to establish a theoretical foundation for Collaborative Networks. Particular emphasis is put on modeling multiple facets of collaborative networks and establishing a comprehensive modeling framework that captures and structures diverse perspectives of

  16. Innovation, Product Development, and New Business Models in Networks: How to come from case studies to a valid and operational theory

    DEFF Research Database (Denmark)

    Rasmussen, Erik Stavnsager; Jørgensen, Jacob Høj; Goduscheit, René Chester

    2007-01-01

    We have in the research project NEWGIBM (New Global ICT based Business Models) during 2005 and 2006 closely cooperated with a group of firms. The focus in the project has been development of new business models (and innovation) in close cooperation with multiple partners. These partners have been...... customers, suppliers, R&D partners, and others. The methodological problem is thus, how to come from e.g. one in-depth case study to a more formalized theory or model on how firms can develop new projects and be innovative in a network. The paper is structured so that it starts with a short presentation...... of the two key concepts in our research setting and theoretical models: Innovation and networks. It is not our intention in this paper to present a lengthy discussion of the two concepts, but a short presentation is necessary to understand the validity and interpretation discussion later in the paper. Next...

  17. Validated dynamic flow model

    DEFF Research Database (Denmark)

    Knudsen, Torben

    2011-01-01

    model structure suggested by the University of Lund, the WP4 leader. This particular model structure has the advantage that it fits better into the control design framework used by WP3-4 compared to the model structures previously developed in WP2. The different model structures are first summarised....... Then issues dealing with optimal experimental design are considered. Finally the parameters are estimated in the chosen static and dynamic models and a validation is performed. Two of the static models, one of them the additive model, explain the data well. In the case of dynamic models the suggested additive...... model turns out not to be useful for prediction of the flow. Moreover, standard Box Jenkins model structures and multiple-output autoregressive models prove to be superior, as they can give useful predictions of the flow....

  18. Multiobjective Optimization Design of Spinal Pedicle Screws Using Neural Networks and Genetic Algorithm: Mathematical Models and Mechanical Validation

    Directory of Open Access Journals (Sweden)

    Yongyut Amaritsakul

    2013-01-01

    Short-segment instrumentation for spine fractures is threatened by relatively high failure rates. Failure of the spinal pedicle screws, including breakage and loosening, may jeopardize the fixation integrity and lead to treatment failure. Two important design objectives, bending strength and pullout strength, may conflict with each other and warrant a multiobjective optimization study. In the present study, using three-dimensional finite element (FE) analytical results based on an L25 orthogonal array, bending and pullout objective functions were developed by an artificial neural network (ANN) algorithm, and the trade-off solutions known as Pareto optima were explored by a genetic algorithm (GA). The results showed that the knee solutions of the Pareto fronts with both high bending and pullout strength ranged from 92% to 94% of their respective maxima. In mechanical validation, the results of the mathematical analyses were closely related to those of experimental tests, with correlation coefficients of −0.91 for bending and 0.93 for pullout (P<0.01 for both). The optimal design had significantly higher fatigue life (P<0.01) and comparable pullout strength as compared with commercial screws. A multiobjective optimization study of spinal pedicle screws using the hybrid of ANN and GA could achieve an ideal design with high bending and pullout performance simultaneously.

  19. Multiobjective optimization design of spinal pedicle screws using neural networks and genetic algorithm: mathematical models and mechanical validation.

    Science.gov (United States)

    Amaritsakul, Yongyut; Chao, Ching-Kong; Lin, Jinn

    2013-01-01

    Short-segment instrumentation for spine fractures is threatened by relatively high failure rates. Failure of the spinal pedicle screws including breakage and loosening may jeopardize the fixation integrity and lead to treatment failure. Two important design objectives, bending strength and pullout strength, may conflict with each other and warrant a multiobjective optimization study. In the present study using the three-dimensional finite element (FE) analytical results based on an L25 orthogonal array, bending and pullout objective functions were developed by an artificial neural network (ANN) algorithm, and the trade-off solutions known as Pareto optima were explored by a genetic algorithm (GA). The results showed that the knee solutions of the Pareto fronts with both high bending and pullout strength ranged from 92% to 94% of their respective maxima. In mechanical validation, the results of mathematical analyses were closely related to those of experimental tests with a correlation coefficient of -0.91 for bending and 0.93 for pullout (P < 0.01 for both). The optimal design had significantly higher fatigue life (P < 0.01) and comparable pullout strength as compared with commercial screws. A multiobjective optimization study of spinal pedicle screws using the hybrid of ANN and GA could achieve an ideal design with high bending and pullout performance simultaneously.
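The Pareto-optimality notion used in these two records can be made concrete with a small sketch: given candidate screw designs scored on two objectives to maximize, keep only the non-dominated ones. The design points below are invented; in the study the actual fronts came from an ANN surrogate searched by a GA.

```python
def pareto_front(points):
    """Return the non-dominated points when both objectives are maximized.

    A point is dominated if some other point is at least as good in both
    objectives (and is a different point).
    """
    front = []
    for p in points:
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (bending %, pullout %) scores for candidate designs.
designs = [(92, 60), (85, 94), (90, 90), (70, 95), (88, 88)]
front = pareto_front(designs)
```

Here (88, 88) is dropped because (90, 90) beats it in both objectives; the surviving points are the trade-off set, and a "knee" solution such as (90, 90) balances both objectives near their maxima.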

  20. Prediction of the hardness profile of an AISI 4340 steel cylinder heat-treated by laser - 3D and artificial neural networks modelling and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Hadhri, Mahdi; Ouafi, Abderazzak El; Barka, Noureddine [University of Quebec, Rimouski (Canada)

    2017-02-15

    This paper presents a comprehensive approach developed to design an effective prediction model for the hardness profile in the laser surface transformation hardening process. Based on the finite element method and artificial neural networks, the proposed approach is built progressively by (i) examining the laser hardening parameters and conditions known to have an influence on the hardened surface attributes through a structured experimental investigation, (ii) investigating the effects of the laser hardening parameters on the hardness profile through extensive 3D modeling and simulation efforts, and (iii) integrating the hardening process parameters via a neural network model for hardness profile prediction. The experimental validation, conducted on AISI 4340 steel using a commercial 3 kW Nd:YAG laser, confirms the feasibility and efficiency of the proposed approach, leading to an accurate and reliable hardness profile prediction model. With a maximum relative error of about 10% under various practical conditions, the predictive model can be considered effective, especially in the case of a relatively complex system such as the laser surface transformation hardening process.

  1. Validation of the revised stressful life event questionnaire using a hybrid model of genetic algorithm and artificial neural networks.

    Science.gov (United States)

    Sali, Rasoul; Roohafza, Hamidreza; Sadeghi, Masoumeh; Andalib, Elham; Shavandi, Hassan; Sarrafzadegan, Nizal

    2013-01-01

    Stressors have a serious role in precipitating mental and somatic disorders and are an interesting subject for many clinical and community-based studies. Hence, their proper and accurate measurement is very important. We revised the stressful life event (SLE) questionnaire by adding weights to the events in order to measure them and determine a cut point. A total of 4569 adults aged between 18 and 85 years completed the SLE questionnaire and the general health questionnaire-12 (GHQ-12). A hybrid model of a genetic algorithm (GA) and artificial neural networks (ANNs) was applied to extract the relation between the stressful life events (evaluated by a 6-point Likert scale) and the GHQ score as a response variable. In this model, GA is used to set some parameters of the ANN in order to achieve more accurate results. Each stressful life event is assigned a number that serves as its weight. Among all stressful life events, death of parents, spouse, or siblings is the most important and impactful stressor in the studied population. A sensitivity of 83% and a specificity of 81% were obtained for the cut point 100. The SLE-revised (SLE-R) questionnaire, despite its simplicity, is a high-performance screening tool for investigating the stress level of life events and its management in both community and primary care settings. The SLE-R questionnaire is user-friendly and easy to self-administer. This questionnaire allows individuals to be aware of their own health status.
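The reported sensitivity and specificity at the cut point of 100 correspond to a simple confusion-matrix computation like the one below. The scores and labels are synthetic stand-ins for the weighted SLE totals and GHQ-12 case status, chosen only to show the mechanics.

```python
def sensitivity_specificity(scores, labels, cut_point):
    """Classify score >= cut_point as a 'case' and compare with true labels (1 = case)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cut_point and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < cut_point and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < cut_point and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cut_point and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical weighted SLE totals and GHQ-12 case labels.
scores = [30, 80, 95, 120, 150, 60, 110, 40]
labels = [0,  0,  1,  1,   1,   0,  0,   1 - 1]
sens, spec = sensitivity_specificity(scores, labels, cut_point=100)
```

With these toy data the cut point 100 misses the true case scoring 95 (sensitivity 2/3) and flags the non-case scoring 110 (specificity 0.8); in the study the cut point was chosen so that both quantities stayed above 80%.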

  2. NC truck network model development research.

    Science.gov (United States)

    2008-09-01

    This research develops a validated prototype truck traffic network model for North Carolina. The model : includes all counties and metropolitan areas of North Carolina and major economic areas throughout the : U.S. Geographic boundaries, population a...

  3. Validation and Adaptation of Router and Switch Models

    NARCIS (Netherlands)

    Boltjes, B.; Fernandez Diaz, I.; Kock, B.A.; Langeveld, R.J.G.M.; Schoenmaker, G.

    2003-01-01

    This paper describes the validation of OPNET models of key devices for the next-generation IP-based tactical network of the Royal Netherlands Army (RNLA). The task of TNO-FEL is to provide insight into the scalability and performance of future deployed networks. Because validated models of key Cisco equipment

  4. Application, validation and comparison in different geographical contexts of an integrated model for the design of ecological networks

    OpenAIRE

    C. R. Fichera; R. Gianoglio; Laudari, L; Modica, G.

    2013-01-01

    The issue of the fragmentation of natural habitats is increasingly at the core of the scientific debate, yet it is not taken into account in planning tools, with particular reference to the dynamism and complexity of landscapes. As it has been recognised at a European level, in order to enable different species to remain in good functional status, a network of green infrastructures is required. The concept of “ecological island” is no longer sufficient to adequately protect the fauna and the ...

  5. Modeling the citation network by network cosmology.

    Science.gov (United States)

    Xie, Zheng; Ouyang, Zhenzheng; Zhang, Pengyuan; Yi, Dongyun; Kong, Dexing

    2015-01-01

    Citation between papers can be treated as a causal relationship. In addition, some citation networks have a number of similarities to the causal networks in network cosmology, e.g., similar in- and out-degree distributions. Hence, it is possible to model the citation network using network cosmology. The causal network models built on homogeneous spacetimes have some restrictions when describing some phenomena in citation networks, e.g., that hot papers receive more citations than other simultaneously published papers. We propose an inhomogeneous causal network model for the citation network, whose connection mechanism expresses some features of citation well. The node growth trend and degree distributions of the generated networks also fit those of some citation networks well.

  6. Modeling the citation network by network cosmology.

    Directory of Open Access Journals (Sweden)

    Zheng Xie

    Citation between papers can be treated as a causal relationship. In addition, some citation networks have a number of similarities to the causal networks in network cosmology, e.g., similar in- and out-degree distributions. Hence, it is possible to model the citation network using network cosmology. The causal network models built on homogeneous spacetimes have some restrictions when describing some phenomena in citation networks, e.g., that hot papers receive more citations than other simultaneously published papers. We propose an inhomogeneous causal network model for the citation network, whose connection mechanism expresses some features of citation well. The node growth trend and degree distributions of the generated networks also fit those of some citation networks well.
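The growth mechanism in such citation models can be illustrated with a toy simulation: each new paper cites earlier papers with probability increasing in their current citation count. This degree-based attachment is only a crude stand-in for the geometric connection rule of the causal-network model described in the abstract above; all parameters are illustrative.

```python
import random

def grow_citation_network(n_papers, refs_per_paper=3, seed=1):
    """Grow a toy citation network: each new paper cites earlier papers
    with probability proportional to (in-degree + 1), a rich-get-richer
    rule that mimics hot papers attracting more citations."""
    random.seed(seed)
    in_degree = [0] * n_papers
    edges = []  # (citing paper, cited paper)
    for new in range(1, n_papers):
        weights = [in_degree[old] + 1 for old in range(new)]
        k = min(refs_per_paper, new)
        cited = set()
        while len(cited) < k:  # sample k distinct earlier papers
            cited.add(random.choices(range(new), weights=weights)[0])
        for old in cited:
            in_degree[old] += 1
            edges.append((new, old))
    return in_degree, edges

in_degree, edges = grow_citation_network(n_papers=50)
```

The resulting in-degree distribution is heavy-tailed, which is one of the qualitative features the paper's causal model reproduces in citation data.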

  7. Brain Network Modelling

    DEFF Research Database (Denmark)

    Andersen, Kasper Winther

    Three main topics are presented in this thesis. The first and largest topic concerns network modelling of functional Magnetic Resonance Imaging (fMRI) and Diffusion Weighted Imaging (DWI). In particular nonparametric Bayesian methods are used to model brain networks derived from resting state f...... for their ability to reproduce node clustering and predict unseen data. Comparing the models on whole brain networks, BCD and IRM showed better reproducibility and predictability than IDM, suggesting that resting state networks exhibit community structure. This also points to the importance of using models, which...... allow for complex interactions between all pairs of clusters. In addition, it is demonstrated how the IRM can be used for segmenting brain structures into functionally coherent clusters. A new nonparametric Bayesian network model is presented. The model builds upon the IRM and can be used to infer...

  8. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural Networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity, 2) Advances in Modelling Biological and Environmental Systems, and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.

  9. Validating Animal Models

    Directory of Open Access Journals (Sweden)

    Nina Atanasova

    2015-06-01

    In this paper, I respond to the challenge raised against contemporary experimental neurobiology according to which the field is in a state of crisis because the multiple experimental protocols employed in different laboratories presumably preclude the validity of neurobiological knowledge. I provide an alternative account of experimentation in neurobiology which makes sense of its experimental practices. I argue that maintaining a multiplicity of experimental protocols and strengthening their reliability are well justified and that they foster rather than preclude the validity of neurobiological knowledge. Thus, their presence indicates thriving rather than crisis in experimental neurobiology.

  10. Modeling network technology deployment rates with different network models

    OpenAIRE

    Chung, Yoo

    2011-01-01

    To understand the factors that encourage the deployment of a new networking technology, we must be able to model how such technology gets deployed. We investigate how network structure influences deployment with a simple deployment model and different network models through computer simulations. The results indicate that a realistic model of networking technology deployment should take network structure into account.

  11. RMBNToolbox: random models for biochemical networks

    Directory of Open Access Journals (Sweden)

    Niemi Jari

    2007-05-01

    Background: There is an increasing interest in modeling biochemical and cell biological networks, as well as in the computational analysis of these models. The development of analysis methodologies and related software is rapid in the field. However, the number of available models is still relatively small and the model sizes remain limited. The lack of kinetic information is usually the limiting factor for the construction of detailed simulation models. Results: We present a computational toolbox for generating random biochemical network models which mimic real biochemical networks. The toolbox is called Random Models for Biochemical Networks. The toolbox works in the Matlab environment, and it makes it possible to generate various network structures, stoichiometries, kinetic laws for reactions, and parameters therein. The generation can be based on statistical rules and distributions, and more detailed information of real biochemical networks can be used in situations where it is known. The toolbox can be easily extended. The resulting network models can be exported in the format of the Systems Biology Markup Language. Conclusion: While more information is accumulating on biochemical networks, random networks can be used as an intermediate step towards their better understanding. Random networks make it possible to study the effects of various network characteristics on the overall behavior of the network. Moreover, the construction of artificial network models provides the ground-truth data needed in the validation of various computational methods in the fields of parameter estimation and data analysis.

  12. Testing and validating environmental models

    Science.gov (United States)

    Kirchner, J.W.; Hooper, R.P.; Kendall, C.; Neal, C.; Leavesley, G.

    1996-01-01

    Generally accepted standards for testing and validating ecosystem models would benefit both modellers and model users. Universally applicable test procedures are difficult to prescribe, given the diversity of modelling approaches and the many uses for models. However, the generally accepted scientific principles of documentation and disclosure provide a useful framework for devising general standards for model evaluation. Adequately documenting model tests requires explicit performance criteria, and explicit benchmarks against which model performance is compared. A model's validity, reliability, and accuracy can be most meaningfully judged by explicit comparison against the available alternatives. In contrast, current practice is often characterized by vague, subjective claims that model predictions show 'acceptable' agreement with data; such claims provide little basis for choosing among alternative models. Strict model tests (those that invalid models are unlikely to pass) are the only ones capable of convincing rational skeptics that a model is probably valid. However, 'false positive' rates as low as 10% can substantially erode the power of validation tests, making them insufficiently strict to convince rational skeptics. Validation tests are often undermined by excessive parameter calibration and overuse of ad hoc model features. Tests are often also divorced from the conditions under which a model will be used, particularly when it is designed to forecast beyond the range of historical experience. In such situations, data from laboratory and field manipulation experiments can provide particularly effective tests, because one can create experimental conditions quite different from historical data, and because experimental data can provide a more precisely defined 'target' for the model to hit. We present a simple demonstration showing that the two most common methods for comparing model predictions to environmental time series (plotting model time series

  13. Base Flow Model Validation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is the systematic "building-block" validation of CFD/turbulence models employing a GUI driven CFD code (RPFM) and existing as well as new data sets to...

  14. Validating Dart Model

    Directory of Open Access Journals (Sweden)

    Mazur Jolanta

    2014-12-01

    The primary objective of the study was to quantitatively test the DART model, which, despite being one of the most popular representations of the co-creation concept, has so far been studied almost solely with qualitative methods. To this end, the researchers developed a multiple measurement scale and employed it in interviewing managers. The statistical evidence for the adequacy of the model was obtained through CFA with AMOS software. The findings suggest that the DART model may not be an accurate representation of co-creation practices in companies. From the data analysis it was evident that the building blocks of DART had too much conceptual overlap to be an effective framework for quantitative analysis. It was also implied that the phenomenon of co-creation is so rich and multifaceted that it may be more adequately captured by a measurement model where co-creation is conceived as a third-level factor with two layers of intermediate latent variables.

  15. Uncertainty Modeling Via Frequency Domain Model Validation

    Science.gov (United States)

    Waszak, Martin R.; Andrisani, Dominick, II

    1999-01-01

The majority of literature on robust control assumes that a design model is available and that the uncertainty model bounds the actual variations about the nominal model. However, methods for generating accurate design models have not received as much attention in the literature. The influence of the level of accuracy of the uncertainty model on closed loop performance has received even less attention. The research reported herein is an initial step in applying and extending the concept of model validation to the problem of obtaining practical uncertainty models for robust control analysis and design applications. An extension of model validation called 'sequential validation' is presented and applied to a simple spring-mass-damper system to establish the feasibility of the approach and demonstrate the benefits of the new developments.
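As a rough illustration of frequency-domain uncertainty modelling on a spring-mass-damper system, the sketch below bounds the multiplicative discrepancy between a nominal model and a perturbed "true" plant over a frequency grid. All parameter values are hypothetical, and the paper's 'sequential validation' procedure itself is more involved than this:

```python
def freq_response(m, c, k, w):
    """Magnitude of the spring-mass-damper transfer function 1/(k - m*w^2 + j*c*w)."""
    return abs(1.0 / complex(k - m * w**2, c * w))

# Nominal design model (hypothetical) and a perturbed "true" plant.
nominal = dict(m=1.0, c=0.2, k=4.0)
true    = dict(m=1.1, c=0.25, k=3.8)

# Multiplicative uncertainty bound |H_true/H_nom - 1| over a frequency grid;
# a robust-control uncertainty model must cover at least this much variation.
ws = [0.1 * i for i in range(1, 60)]
bound = max(abs(freq_response(**true, w=w) / freq_response(**nominal, w=w) - 1)
            for w in ws)
print(round(bound, 3))
```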

  16. Models of educational institutions' networking

    OpenAIRE

    Shilova Olga Nikolaevna

    2015-01-01

The importance of educational institutions' networking in modern sociocultural conditions and a definition of networking in education are presented in the article. The results of research into the levels, methods and models of educational institutions' networking are presented and discussed in detail.

  17. GPM ground validation via commercial cellular networks: an exploratory approach

    Science.gov (United States)

    Rios Gaona, Manuel Felipe; Overeem, Aart; Leijnse, Hidde; Brasjen, Noud; Uijlenhoet, Remko

    2016-04-01

The suitability of commercial microwave link networks for ground validation of GPM (Global Precipitation Measurement) data is evaluated here. Two state-of-the-art rainfall products are compared over the land surface of the Netherlands for a period of 7 months, i.e., rainfall maps from commercial cellular communication networks and Integrated Multi-satellite Retrievals for GPM (IMERG). Commercial microwave link networks are nowadays the core component in telecommunications worldwide. Rainfall rates can be retrieved from measurements of attenuation between transmitting and receiving antennas. If adequately set up, these networks enable rainfall monitoring tens of meters above the ground at high spatiotemporal resolutions (temporal sampling of seconds to tens of minutes, and spatial sampling of hundreds of meters to tens of kilometers). The GPM mission is the successor of TRMM (Tropical Rainfall Measuring Mission). For two years now, IMERG has offered rainfall estimates across the globe (180°W - 180°E and 60°N - 60°S) at spatiotemporal resolutions of 0.1° x 0.1° every 30 min. These two data sets are compared against a Dutch gauge-adjusted radar data set, considered to be the ground truth given its accuracy, spatiotemporal resolution and availability. The suitability of microwave link networks in satellite rainfall evaluation is of special interest, given the independent character of this technique, its high spatiotemporal resolutions and availability. These are valuable assets for water management and modeling of floods, landslides, and weather extremes; especially in places where rain gauge networks are scarce or poorly maintained, or where weather radar networks are too expensive to acquire and/or maintain.

  18. Graphical Model Theory for Wireless Sensor Networks

    Energy Technology Data Exchange (ETDEWEB)

    Davis, William B.

    2002-12-08

Information processing in sensor networks, with many small processors, demands a theory of computation that allows the minimization of processing effort, and the distribution of this effort throughout the network. Graphical model theory provides a probabilistic theory of computation that explicitly addresses complexity and decentralization for optimizing network computation. The junction tree algorithm, for decentralized inference on graphical probability models, can be instantiated in a variety of applications useful for wireless sensor networks, including sensor validation and fusion; data compression and channel coding; expert systems with decentralized data structures and efficient local queries; and pattern classification and machine learning. Graphical models for these applications are sketched, and a model of dynamic sensor validation and fusion is presented in more depth, to illustrate the junction tree algorithm.
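The sensor-validation idea mentioned above can be illustrated without the full junction tree machinery. The sketch below uses a simple median-consistency check as a non-probabilistic stand-in for inference; the readings, noise level, and 3-sigma threshold are illustrative assumptions, not part of the paper:

```python
def median(xs):
    """Median of a list of numbers."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def validate(readings, sigma, n_sigma=3.0):
    """Flag sensors whose reading deviates from the group median by more
    than n_sigma standard deviations (robust to a minority of faults)."""
    m = median(readings)
    return [i for i, r in enumerate(readings) if abs(r - m) > n_sigma * sigma]

# Three sensors measure the same quantity; sensor 2 is stuck at a bad value.
print(validate([10.1, 9.9, 17.0], sigma=0.2))  # flags the stuck sensor
```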

  19. Techniques for Modelling Network Security

    OpenAIRE

    Lech Gulbinovič

    2012-01-01

The article compares modelling techniques for network security, including probability theory, Markov processes, Petri nets and stochastic activity networks. The paper introduces the advantages and disadvantages of the proposed methods and adopts stochastic activity networks as one of the most relevant. The stochastic activity network allows modelling the behaviour of a dynamic system where the theory of probability is inappropriate...

  20. Turbulence Modeling Verification and Validation

    Science.gov (United States)

    Rumsey, Christopher L.

    2014-01-01

    Computational fluid dynamics (CFD) software that solves the Reynolds-averaged Navier-Stokes (RANS) equations has been in routine use for more than a quarter of a century. It is currently employed not only for basic research in fluid dynamics, but also for the analysis and design processes in many industries worldwide, including aerospace, automotive, power generation, chemical manufacturing, polymer processing, and petroleum exploration. A key feature of RANS CFD is the turbulence model. Because the RANS equations are unclosed, a model is necessary to describe the effects of the turbulence on the mean flow, through the Reynolds stress terms. The turbulence model is one of the largest sources of uncertainty in RANS CFD, and most models are known to be flawed in one way or another. Alternative methods such as direct numerical simulations (DNS) and large eddy simulations (LES) rely less on modeling and hence include more physics than RANS. In DNS all turbulent scales are resolved, and in LES the large scales are resolved and the effects of the smallest turbulence scales are modeled. However, both DNS and LES are too expensive for most routine industrial usage on today's computers. Hybrid RANS-LES, which blends RANS near walls with LES away from walls, helps to moderate the cost while still retaining some of the scale-resolving capability of LES, but for some applications it can still be too expensive. Even considering its associated uncertainties, RANS turbulence modeling has proved to be very useful for a wide variety of applications. For example, in the aerospace field, many RANS models are considered to be reliable for computing attached flows. However, existing turbulence models are known to be inaccurate for many flows involving separation. Research has been ongoing for decades in an attempt to improve turbulence models for separated and other nonequilibrium flows. When developing or improving turbulence models, both verification and validation are important

  1. Empirical generalization assessment of neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1995-01-01

    This paper addresses the assessment of generalization performance of neural network models by use of empirical techniques. We suggest to use the cross-validation scheme combined with a resampling technique to obtain an estimate of the generalization performance distribution of a specific model...
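A minimal version of the suggested scheme, cross-validation repeated under resampling to obtain a distribution of generalization estimates rather than a single number, might look as follows. For brevity a least-squares line stands in for the neural network, and all data are synthetic:

```python
import random

def kfold_mse(xs, ys, k, fit, predict):
    """One k-fold cross-validation estimate of mean squared prediction error."""
    idx = list(range(len(xs)))
    random.shuffle(idx)                      # a fresh random fold split each call
    folds = [idx[i::k] for i in range(k)]
    errs = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        errs += [(predict(model, xs[i]) - ys[i]) ** 2 for i in fold]
    return sum(errs) / len(errs)

# Toy model family: fit a line y = a*x + b by least squares.
def fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def predict(model, x):
    a, b = model
    return a * x + b

random.seed(0)
xs = [i / 10 for i in range(30)]
ys = [2 * x + 1 + random.gauss(0, 0.1) for x in xs]

# Resampling: repeat CV under different fold splits to obtain a
# distribution of the generalization estimate, not just a point value.
estimates = [kfold_mse(xs, ys, k=5, fit=fit, predict=predict) for _ in range(20)]
print(round(min(estimates), 4), round(max(estimates), 4))
```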

  2. Modeling Network Interdiction Tasks

    Science.gov (United States)

    2015-09-17

allow professionals and families to stay in touch through voice or video calls. Power grids provide electricity to homes, offices, and recreational...instances using IBM ILOG CPLEX Optimization Studio V12.6. For each instance, two solutions are determined. First, the MNDP-a model is solved with no...three values: 0.25, 0.50, or 0.75. The DMP-a model is solved for the various random network instances using IBM ILOG CPLEX Optimization Studio V12.6

  3. Design and regularization of neural networks: the optimal use of a validation set

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai; Svarer, Claus

    1996-01-01

We derive novel algorithms for estimation of regularization parameters and for optimization of neural net architectures based on a validation set. Regularization parameters are estimated using an iterative gradient descent scheme. Architecture optimization is performed by approximative...... combinatorial search among the relevant subsets of an initial neural network architecture by employing a validation set based optimal brain damage/surgeon (OBD/OBS) or a mean field combinatorial optimization approach. Numerical results with linear models and feed-forward neural networks demonstrate......

  4. Enhanced data validation strategy of air quality monitoring network.

    Science.gov (United States)

    Harkat, Mohamed-Faouzi; Mansouri, Majdi; Nounou, Mohamed; Nounou, Hazem

    2017-10-05

Quick validation and detection of faults in measured air quality data is a crucial step towards achieving the objectives of air quality networks. Therefore, the objectives of this paper are threefold: (i) to develop a modeling technique that can be used to predict the normal behavior of air quality variables and help provide an accurate reference for monitoring purposes; (ii) to develop a fault detection method that can effectively and quickly detect any anomalies in measured air quality data. For this purpose, a new fault detection method based on the combination of the generalized likelihood ratio test (GLRT) and the exponentially weighted moving average (EWMA) will be developed. GLRT is a well-known statistical fault detection method that relies on maximizing the detection probability for a given false alarm rate. In this paper, we propose to develop a GLRT-based EWMA fault detection method that will be able to detect changes in the values of certain air quality variables; (iii) to develop a fault isolation and identification method that allows defining the fault source(s) in order to apply appropriate corrective actions. In this paper, a reconstruction approach based on the Midpoint-Radii Principal Component Analysis (MRPCA) model will be developed to handle the types of data and models associated with air quality monitoring networks. All air quality modeling, fault detection, fault isolation and reconstruction methods developed in this paper will be validated using real air quality data (such as particulate matter, ozone, and nitrogen and carbon oxide measurements).
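The EWMA stage of the proposed GLRT-EWMA detector can be sketched in isolation: smooth the model residuals and raise an alarm when the statistic leaves its control limits. The residual series, shift size, smoothing constant, and control-limit width below are illustrative assumptions, not values from the paper:

```python
import random

def ewma_detect(residuals, lam=0.2, L=3.0, sigma=1.0):
    """EWMA control chart over model residuals: returns the indices where
    the smoothed statistic leaves the +/- L*sigma_z control limits."""
    # Asymptotic standard deviation of the EWMA statistic.
    sigma_z = sigma * (lam / (2 - lam)) ** 0.5
    z, alarms = 0.0, []
    for t, r in enumerate(residuals):
        z = lam * r + (1 - lam) * z
        if abs(z) > L * sigma_z:
            alarms.append(t)
    return alarms

# Hypothetical residuals: in-control noise, then a sustained +2*sigma shift.
random.seed(1)
res = [random.gauss(0, 1) for _ in range(50)] + \
      [random.gauss(2, 1) for _ in range(20)]
alarms = ewma_detect(res)
print(len(alarms) > 0)
```

In the full method, the GLRT statistic would replace the raw residual as the monitored quantity; the EWMA then accumulates small, persistent deviations that a single-sample test would miss.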

  5. Coevolutionary modeling in network formation

    KAUST Repository

    Al-Shyoukh, Ibrahim

    2014-12-03

    Network coevolution, the process of network topology evolution in feedback with dynamical processes over the network nodes, is a common feature of many engineered and natural networks. In such settings, the change in network topology occurs at a comparable time scale to nodal dynamics. Coevolutionary modeling offers the possibility to better understand how and why network structures emerge. For example, social networks can exhibit a variety of structures, ranging from almost uniform to scale-free degree distributions. While current models of network formation can reproduce these structures, coevolutionary modeling can offer a better understanding of the underlying dynamics. This paper presents an overview of recent work on coevolutionary models of network formation, with an emphasis on the following three settings: (i) dynamic flow of benefits and costs, (ii) transient link establishment costs, and (iii) latent preferential attachment.

  6. (Validity of environmental transfer models)

    Energy Technology Data Exchange (ETDEWEB)

    Blaylock, B.G.; Hoffman, F.O.; Gardner, R.H.

    1990-11-07

    BIOMOVS (BIOspheric MOdel Validation Study) is an international cooperative study initiated in 1985 by the Swedish National Institute of Radiation Protection to test models designed to calculate the environmental transfer and bioaccumulation of radionuclides and other trace substances. The objective of the symposium and workshop was to synthesize results obtained during Phase 1 of BIOMOVS (the first five years of the study) and to suggest new directions that might be pursued during Phase 2 of BIOMOVS. The travelers were an instrumental part of the development of BIOMOVS. This symposium allowed the travelers to present a review of past efforts at model validation and a synthesis of current activities and to refine ideas concerning future development of models and data for assessing the fate, effect, and human risks of environmental contaminants. R. H. Gardner also visited the Free University, Amsterdam, and the National Institute of Public Health and Environmental Protection (RIVM) in Bilthoven to confer with scientists about current research in theoretical ecology and the use of models for estimating the transport and effect of environmental contaminants and to learn about the European efforts to map critical loads of acid deposition.

  7. Context discovery using attenuated Bloom codes: model description and validation

    NARCIS (Netherlands)

    Liu, F.; Heijenk, Geert

    A novel approach to performing context discovery in ad-hoc networks based on the use of attenuated Bloom filters is proposed in this report. In order to investigate the performance of this approach, a model has been developed. This document describes the model and its validation. The model has been

  8. PEMFC modeling and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, J.V.C. [Federal University of Parana (UFPR), Curitiba, PR (Brazil). Dept. of Mechanical Engineering], E-mail: jvargas@demec.ufpr.br; Ordonez, J.C.; Martins, L.S. [Florida State University, Tallahassee, FL (United States). Center for Advanced Power Systems], Emails: ordonez@caps.fsu.edu, martins@caps.fsu.edu

    2009-07-01

    In this paper, a simplified and comprehensive PEMFC mathematical model introduced in previous studies is experimentally validated. Numerical results are obtained for an existing set of commercial unit PEM fuel cells. The model accounts for pressure drops in the gas channels, and for temperature gradients with respect to space in the flow direction, that are investigated by direct infrared imaging, showing that even at low current operation such gradients are present in fuel cell operation, and therefore should be considered by a PEMFC model, since large coolant flow rates are limited due to induced high pressure drops in the cooling channels. The computed polarization and power curves are directly compared to the experimentally measured ones with good qualitative and quantitative agreement. The combination of accuracy and low computational time allow for the future utilization of the model as a reliable tool for PEMFC simulation, control, design and optimization purposes. (author)

  9. Do Network Models Just Model Networks? On The Applicability of Network-Oriented Modeling

    NARCIS (Netherlands)

    Treur, J.; Shmueli, Erez

    2017-01-01

    In this paper for a Network-Oriented Modelling perspective based on temporal-causal networks it is analysed how generic and applicable it is as a general modelling approach and as a computational paradigm. This results in an answer to the question in the title different from: network models just

  10. Guidance for the verification and validation of neural networks

    CERN Document Server

    Pullum, L; Darrah, M

    2007-01-01

    Guidance for the Verification and Validation of Neural Networks is a supplement to the IEEE Standard for Software Verification and Validation, IEEE Std 1012-1998. Born out of a need by the National Aeronautics and Space Administration's safety- and mission-critical research, this book compiles over five years of applied research and development efforts. It is intended to assist the performance of verification and validation (V&V) activities on adaptive software systems, with emphasis given to neural network systems. The book discusses some of the difficulties with trying to assure adaptive systems in general, presents techniques and advice for the V&V practitioner confronted with such a task, and based on a neural network case study, identifies specific tasking and recommendations for the V&V of neural network systems.

  11. Modelling Users' Trust in Online Social Networks

    Directory of Open Access Journals (Sweden)

    Iacob Cătoiu

    2014-02-01

Full Text Available Previous studies (McKnight, Lankton and Tripp, 2011; Liao, Lui and Chen, 2011 have shown the crucial role of trust when choosing to disclose sensitive information online. This is the case of online social networks users, who must disclose a certain amount of personal data in order to gain access to these online services. Taking into account privacy calculus model and the risk/benefit ratio, we propose a model of users’ trust in online social networks with four variables. We have adapted metrics for the purpose of our study and we have assessed their reliability and validity. We use a Partial Least Squares (PLS) based structural equation modelling analysis, which validated all our initial assumptions, indicating that our three predictors (privacy concerns, perceived benefits and perceived risks explain 48% of the variation of users’ trust in online social networks, the resulting variable of our study. We also discuss the implications and further research opportunities of our study.

  12. Verifying and Validating Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state of knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of-knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.

  13. A validated regulatory network for Th17 cell specification

    Science.gov (United States)

    Ciofani, Maria; Madar, Aviv; Galan, Carolina; Sellars, Maclean; Mace, Kieran; Pauli, Florencia; Agarwal, Ashish; Huang, Wendy; Parkhurst, Christopher N.; Muratet, Michael; Newberry, Kim M.; Meadows, Sarah; Greenfield, Alex; Yang, Yi; Jain, Preti; Kirigin, Francis F.; Birchmeier, Carmen; Wagner, Erwin F.; Murphy, Kenneth M.; Myers, Richard M.; Bonneau, Richard; Littman, Dan R.

    2012-01-01

    Th17 cells have critical roles in mucosal defense and are major contributors to inflammatory disease. Their differentiation requires the nuclear hormone receptor RORγt working with multiple other essential transcription factors (TFs). We have used an iterative systems approach, combining genome-wide TF occupancy, expression profiling of TF mutants, and expression time series to delineate the Th17 global transcriptional regulatory network. We find that cooperatively-bound BATF and IRF4 contribute to initial chromatin accessibility, and with STAT3 initiate a transcriptional program that is then globally tuned by the lineage-specifying TF RORγt, which plays a focal deterministic role at key loci. Integration of multiple datasets allowed inference of an accurate predictive model that we computationally and experimentally validated, identifying multiple new Th17 regulators, including Fosl2, a key determinant of cellular plasticity. This interconnected network can be used to investigate new therapeutic approaches to manipulate Th17 functions in the setting of inflammatory disease. PMID:23021777

  14. Comparison and validation of community structures in complex networks

    Science.gov (United States)

    Gustafsson, Mika; Hörnquist, Michael; Lombardi, Anna

    2006-07-01

The issue of partitioning a network into communities has attracted a great deal of attention recently. Most authors seem to equate this issue with the one of finding the maximum value of the modularity, as defined by Newman. Since the problem formulated this way is believed to be NP-hard, most effort has gone into the construction of search algorithms, and less to the question of other measures of community structures, similarities between various partitionings and the validation with respect to external information. Here we concentrate on a class of computer-generated networks and on three well-studied real networks which constitute a benchmark for network studies; the karate club, the US college football teams and a gene network of yeast. We utilize some standard ways of clustering data (originally not designed for finding community structures in networks) and show that these classical methods sometimes outperform the newer ones. We discuss various measures of the strength of the modular structure, and show by examples features and drawbacks. Further, we compare different partitions by applying some graph-theoretic concepts of distance, which indicate that one of the quality measures of the degree of modularity corresponds quite well with the distance from the true partition. Finally, we introduce a way to validate the partitionings with respect to external data when the nodes are classified but the network structure is unknown. This is here possible since we know everything of the computer-generated networks, as well as the historical answer to how the karate club and the football teams are partitioned in reality. The partitioning of the gene network is validated by use of the Gene Ontology database, where we show that a community in general corresponds to a biological process.
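Newman's modularity, the quantity most authors maximize according to the abstract, can be computed directly for any given partition. A minimal sketch on a toy graph of two triangles joined by a bridge edge (the graph is invented for illustration):

```python
def modularity(adj, communities):
    """Newman modularity Q of a partition of an undirected graph.
    adj: dict node -> set of neighbours; communities: list of node sets."""
    m2 = sum(len(nbrs) for nbrs in adj.values())      # 2m (each edge counted twice)
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    q = 0.0
    for comm in communities:
        for u in comm:
            for v in comm:
                a_uv = 1.0 if v in adj[u] else 0.0    # adjacency term
                q += a_uv - deg[u] * deg[v] / m2      # minus the null-model term
    return q / m2

# Two triangles {0,1,2} and {3,4,5} joined by the bridge edge (2,3).
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},
}
good = [{0, 1, 2}, {3, 4, 5}]
bad  = [{0, 3}, {1, 4}, {2, 5}]
print(modularity(adj, good) > modularity(adj, bad))  # the natural split scores higher
```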

  15. Validating module network learning algorithms using simulated data.

    Science.gov (United States)

    Michoel, Tom; Maere, Steven; Bonnet, Eric; Joshi, Anagha; Saeys, Yvan; Van den Bulcke, Tim; Van Leemput, Koenraad; van Remortel, Piet; Kuiper, Martin; Marchal, Kathleen; Van de Peer, Yves

    2007-05-03

    In recent years, several authors have used probabilistic graphical models to learn expression modules and their regulatory programs from gene expression data. Despite the demonstrated success of such algorithms in uncovering biologically relevant regulatory relations, further developments in the area are hampered by a lack of tools to compare the performance of alternative module network learning strategies. Here, we demonstrate the use of the synthetic data generator SynTReN for the purpose of testing and comparing module network learning algorithms. We introduce a software package for learning module networks, called LeMoNe, which incorporates a novel strategy for learning regulatory programs. Novelties include the use of a bottom-up Bayesian hierarchical clustering to construct the regulatory programs, and the use of a conditional entropy measure to assign regulators to the regulation program nodes. Using SynTReN data, we test the performance of LeMoNe in a completely controlled situation and assess the effect of the methodological changes we made with respect to an existing software package, namely Genomica. Additionally, we assess the effect of various parameters, such as the size of the data set and the amount of noise, on the inference performance. Overall, application of Genomica and LeMoNe to simulated data sets gave comparable results. However, LeMoNe offers some advantages, one of them being that the learning process is considerably faster for larger data sets. Additionally, we show that the location of the regulators in the LeMoNe regulation programs and their conditional entropy may be used to prioritize regulators for functional validation, and that the combination of the bottom-up clustering strategy with the conditional entropy-based assignment of regulators improves the handling of missing or hidden regulators. We show that data simulators such as SynTReN are very well suited for the purpose of developing, testing and improving module network

  16. Modeling semiflexible polymer networks

    OpenAIRE

    Broedersz, Chase P.; MacKintosh, Fred C.

    2014-01-01

    Here, we provide an overview of theoretical approaches to semiflexible polymers and their networks. Such semiflexible polymers have large bending rigidities that can compete with the entropic tendency of a chain to crumple up into a random coil. Many studies on semiflexible polymers and their assemblies have been motivated by their importance in biology. Indeed, crosslinked networks of semiflexible polymers form a major structural component of tissue and living cells. Reconstituted networks o...

  17. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).

  18. Validation of Bosch's Mobile Communication Network Architecture with SPIN

    NARCIS (Netherlands)

    Ruys, T.C.; Langerak, Romanus

    This paper discusses validation projects carried out for the Mobile Communication Division of Robert Bosch GmbH. We verified parts of their Mobile Communication Network (MCNet), a communication system which is to be used in infotainment systems of future cars. The protocols of the MCNet have been

  19. Complex Networks in Psychological Models

    Science.gov (United States)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes, by a neurocomputational substrate. These models are examples of real world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.

  20. An Enhanced Global Precipitation Measurement (GPM) Validation Network Prototype

    Science.gov (United States)

    Schwaller, Matthew R.; Morris, K. Robert

    2009-01-01

    A Validation Network (VN) prototype is currently underway that compares data from the Precipitation Radar (PR) instrument on NASA's Tropical Rainfall Measuring Mission (TRMM) satellite to similar measurements from the U.S. national network of operational weather radars. This prototype is being conducted as part of the ground validation activities of NASA's Global Precipitation Measurement (GPM) mission. GPM will carry a Dual-frequency Precipitation Radar instrument (DPR) with similar characteristics to the TRMM PR. The purpose of the VN is to identify and resolve significant discrepancies between the U.S. national network of ground radar (GR) observations and satellite observations. The ultimate goal of such comparisons is to understand and resolve the first order variability and bias of precipitation retrievals in different meteorological/hydrological regimes at large scales. This paper presents a description of, and results from, an improved algorithm for volume matching and comparison of PR and ground radar observations.

  1. Developing Personal Network Business Models

    DEFF Research Database (Denmark)

    Saugstrup, Dan; Henten, Anders

    2006-01-01

The aim of the paper is to examine the issue of business modeling in relation to personal networks, PNs. The paper builds on research performed on business models in the EU IST MAGNET project (My personal Adaptive Global NET) and on the 'state of the art' in the field of business modeling. The paper presents the Personal Network concept and suggests three generic business models for PNs: a service oriented model, a self-organized model, and a combination model. Finally, examples of relevant services and applications in relation to three different cases are presented and analyzed in light of business modeling of PNs.

  2. A model of coauthorship networks

    Science.gov (United States)

    Zhou, Guochang; Li, Jianping; Xie, Zonglin

    2017-10-01

A natural way of representing the coauthorship of authors is to use a generalization of graphs known as hypergraphs. A random geometric hypergraph model is proposed here to model coauthorship networks, which is generated by placing nodes on a region of Euclidean space randomly and uniformly, and connecting some nodes if the nodes satisfy particular geometric conditions. Two kinds of geometric conditions are designed to model the collaboration patterns of academic authorities and basic research, respectively. The conditions give geometric expressions of two causes of coauthorship: the authority and similarity of authors. By simulation and calculus, we show that the head of the degree distribution of the network generated by the model is a Poisson mixture, and the tail is a power law, similar to those of some coauthorship networks. Further, we show more similarities between the generated network and real coauthorship networks: the distribution of cardinalities of hyperedges, high clustering coefficient, assortativity, and the small-world property.
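The simplest version of the geometric construction described above, using ordinary graphs rather than hypergraphs and a plain distance threshold in place of the authority/similarity conditions, can be sketched as:

```python
import math
import random

def geometric_graph(n, radius, seed=0):
    """Random geometric graph: n nodes uniform in the unit square,
    an edge whenever two nodes lie within `radius` of each other."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(pts[i], pts[j]) < radius]
    return pts, edges

pts, edges = geometric_graph(200, 0.1)
deg = [0] * 200
for i, j in edges:
    deg[i] += 1
    deg[j] += 1
print(round(sum(deg) / len(deg), 2))  # mean degree ~ n * pi * r^2, less boundary effects
```

The paper's model replaces the single threshold with two kinds of conditions and lets hyperedges (papers) connect whole author sets at once.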

  3. A Model of Network Porosity

    Science.gov (United States)

    2016-11-09

    Designing a network from a security standpoint remains more of an art than a science. Even when well executed, the ongoing evolution of the network may violate initial, security-critical design... is outside the scope of this paper. As such, we focus on event probabilities. The output of the network porosity model is a stream of timestamped...

  4. On traffic modelling in GPRS networks

    DEFF Research Database (Denmark)

    Madsen, Tatiana Kozlova; Schwefel, Hans-Peter; Prasad, Ramjee

    2005-01-01

    Optimal design and dimensioning of wireless data networks, such as GPRS, requires the knowledge of traffic characteristics of different data services. This paper presents an in-detail analysis of IP-level traffic measurements taken in an operational GPRS network. The data measurements reported here are done at the Gi interface. The aim of this paper is to reveal some key statistics of GPRS data applications and to validate whether the existing traffic models can adequately describe traffic volume and inter-arrival time distribution for different services. Additionally, we present a method of user...

  5. Predicting and validating protein interactions using network structure.

    Directory of Open Access Journals (Sweden)

    Pao-Yang Chen

    2008-07-01

    Full Text Available Protein interactions play a vital part in the function of a cell. As experimental techniques for detection and validation of protein interactions are time consuming, there is a need for computational methods for this task. Protein interactions appear to form a network with a relatively high degree of local clustering. In this paper we exploit this clustering by suggesting a score based on triplets of observed protein interactions. The score utilises both protein characteristics and network properties. Our score based on triplets is shown to complement existing techniques for predicting protein interactions, outperforming them on data sets which display a high degree of clustering. The predicted interactions score highly against test measures for accuracy. Compared to a similar score derived from pairwise interactions only, the triplet score displays higher sensitivity and specificity. By looking at specific examples, we show how an experimental set of interactions can be enriched and validated. As part of this work we also examine the effect of different prior databases upon the accuracy of prediction and find that the interactions from the same kingdom give better results than from across kingdoms, suggesting that there may be fundamental differences between the networks. These results all emphasize that network structure is important and helps in the accurate prediction of protein interactions. The protein interaction data set and the program used in our analysis, and a list of predictions and validations, are available at http://www.stats.ox.ac.uk/bioinfo/resources/PredictingInteractions.
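    The core intuition, scoring an unobserved pair by the observed triangles it would close, can be illustrated with a minimal sketch. The paper's actual triplet score also folds in protein characteristics, which are omitted here; the protein names are made up.

```python
from collections import defaultdict
from itertools import combinations

def triplet_scores(edges):
    """For each unobserved pair, count the observed triangles it would
    close, i.e. the number of common interaction partners.  This is a
    crude stand-in for the triplet score described in the abstract."""
    nbrs = defaultdict(set)
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    scores = {}
    for a, b in combinations(sorted(nbrs), 2):
        if b in nbrs[a]:
            continue  # interaction already observed
        common = len(nbrs[a] & nbrs[b])
        if common:
            scores[(a, b)] = common
    return scores

observed = [("P1", "P2"), ("P1", "P3"), ("P2", "P3"), ("P2", "P4"), ("P3", "P4")]
scores = triplet_scores(observed)
# The unobserved pair ("P1", "P4") shares two partners, P2 and P3,
# so it closes two triangles and is the top prediction.
```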

  6. Telecommunications network modelling, planning and design

    CERN Document Server

    Evans, Sharon

    2003-01-01

    Telecommunication Network Modelling, Planning and Design addresses sophisticated modelling techniques from the perspective of the communications industry and covers some of the major issues facing telecommunications network engineers and managers today. Topics covered include network planning for transmission systems, modelling of SDH transport network structures and telecommunications network design and performance modelling, as well as network costs and ROI modelling and QoS in 3G networks.

  7. Real-world validation of SHAC models

    Energy Technology Data Exchange (ETDEWEB)

    Morrison, L.

    1970-01-01

    A statistical approach is proposed to validation of SHAC models. It includes a definition of validation, an explanation of its purposes, and a description of the statistical aspects of experimental design. It proposes a study to validate design codes with statistical samples of real-world systems. Also included is a summary of present SHAC validation methodologies and studies as well as recommendations for future activity.

  8. Campus network security model study

    Science.gov (United States)

    Zhang, Yong-ku; Song, Li-ren

    2011-12-01

    Campus network security is of growing importance. Designing an effective defense against hacker attacks, viruses, data theft, and internal threats is the focus of this paper. The paper compares firewall and IDS approaches, integrates them into the design of a campus network security model, and details its implementation principles.

  9. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    Science.gov (United States)

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
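    The two-phase scheme can be sketched in one dimension: winner-takes-all competitive learning positions the centres, and Gaussian activations around those centres form the RBF features. The linear output layer that would map features to predicted activity (fit by least squares in practice) is omitted, and the learning rate, epoch count and width are illustrative guesses, not the paper's settings.

```python
import math
import random

def scl_centers(samples, k, epochs=50, lr=0.1, seed=0):
    """Phase 1: simple competitive learning -- for each sample, only the
    nearest centre (the "winner") moves a fraction lr toward it."""
    rng = random.Random(seed)
    centers = rng.sample(samples, k)
    for _ in range(epochs):
        for x in samples:
            win = min(range(k), key=lambda j: abs(centers[j] - x))
            centers[win] += lr * (x - centers[win])
    return centers

def rbf_features(x, centers, width=1.0):
    """Phase 2: Gaussian RBF activations around the learned centres."""
    return [math.exp(-((x - c) / width) ** 2) for c in centers]

samples = [0.1, 0.2, 0.15, 3.0, 3.1, 2.9]   # two obvious clusters
centers = sorted(scl_centers(samples, k=2))  # one centre per cluster
features = rbf_features(0.12, centers)
```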

  10. Neural network modeling of emotion

    Science.gov (United States)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  11. Optimizing Soil Moisture Sampling Locations for Validation Networks for SMAP

    Science.gov (United States)

    Roshani, E.; Berg, A. A.; Lindsay, J.

    2013-12-01

    The Soil Moisture Active Passive (SMAP) satellite is scheduled for launch in October 2014. Global efforts are underway to establish soil moisture monitoring networks for both the pre- and post-launch validation and calibration of the SMAP products. In 2012 the SMAP Validation Experiment, SMAPVEX12, took place near Carman, Manitoba, Canada, where nearly 60 fields were sampled continuously over a 6-week period for soil moisture and several other parameters, simultaneous to remotely sensed images of the sampling region. The locations of these sampling sites were mainly selected on the basis of accessibility, soil texture, and vegetation cover. Although these criteria are necessary to consider during sampling site selection, they do not guarantee optimal site placement to provide the most efficient representation of the studied area. In this analysis a method for optimization of sampling locations is presented which combines a state-of-the-art multi-objective optimization engine (the non-dominated sorting genetic algorithm, NSGA-II) with the kriging interpolation technique to minimize the number of sampling sites while simultaneously minimizing the differences between the soil moisture map resulting from the kriging interpolation and the soil moisture map from radar imaging. The algorithm is implemented in Whitebox Geospatial Analysis Tools, which is a multi-platform open-source GIS. The optimization framework is subject to the following three constraints: (a) sampling sites should be accessible to the crew on the ground, (b) the number of sites located in a specific soil texture should be greater than or equal to a minimum value, and (c) the number of sampling sites with a specific vegetation cover should be greater than or equal to a minimum constraint. The first constraint is implemented in the proposed model to keep the approach practical. The second and third constraints are considered to guarantee that the collected samples from each soil texture category...
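    The heart of NSGA-II is the non-dominance relation over objective vectors such as (number of sampling sites, interpolation error), both to be minimized. A minimal sketch with made-up candidate solutions:

```python
def pareto_front(solutions):
    """Return the non-dominated solutions, minimising every objective --
    the core relation behind NSGA-II's non-dominated sorting.  Each
    solution is a tuple of objective values, here
    (number_of_sites, interpolation_error)."""
    def dominates(a, b):
        """a dominates b if it is no worse everywhere and better somewhere."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

candidates = [(10, 0.30), (15, 0.12), (20, 0.05), (18, 0.12), (25, 0.05)]
front = pareto_front(candidates)
# (18, 0.12) is dominated by (15, 0.12) and (25, 0.05) by (20, 0.05),
# leaving the trade-off curve between site count and error.
```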

  12. Modeling semiflexible polymer networks

    NARCIS (Netherlands)

    Broedersz, C.P.; MacKintosh, F.C.

    2014-01-01

    This is an overview of theoretical approaches to semiflexible polymers and their networks. Such semiflexible polymers have large bending rigidities that can compete with the entropic tendency of a chain to crumple up into a random coil. Many studies on semiflexible polymers and their assemblies have

  13. Genotet: An Interactive Web-based Visual Exploration Framework to Support Validation of Gene Regulatory Networks.

    Science.gov (United States)

    Yu, Bowen; Doraiswamy, Harish; Chen, Xi; Miraldi, Emily; Arrieta-Ortiz, Mario Luis; Hafemeister, Christoph; Madar, Aviv; Bonneau, Richard; Silva, Cláudio T

    2014-12-01

    Elucidation of transcriptional regulatory networks (TRNs) is a fundamental goal in biology, and one of the most important components of TRNs are transcription factors (TFs), proteins that specifically bind to gene promoter and enhancer regions to alter target gene expression patterns. Advances in genomic technologies as well as advances in computational biology have led to multiple large regulatory network models (directed networks) each with a large corpus of supporting data and gene-annotation. There are multiple possible biological motivations for exploring large regulatory network models, including: validating TF-target gene relationships, figuring out co-regulation patterns, and exploring the coordination of cell processes in response to changes in cell state or environment. Here we focus on queries aimed at validating regulatory network models, and on coordinating visualization of primary data and directed weighted gene regulatory networks. The large size of both the network models and the primary data can make such coordinated queries cumbersome with existing tools and, in particular, inhibits the sharing of results between collaborators. In this work, we develop and demonstrate a web-based framework for coordinating visualization and exploration of expression data (RNA-seq, microarray), network models and gene-binding data (ChIP-seq). Using specialized data structures and multiple coordinated views, we design an efficient querying model to support interactive analysis of the data. Finally, we show the effectiveness of our framework through case studies for the mouse immune system (a dataset focused on a subset of key cellular functions) and a model bacteria (a small genome with high data-completeness).

  14. Delay and Disruption Tolerant Networking MACHETE Model

    Science.gov (United States)

    Segui, John S.; Jennings, Esther H.; Gao, Jay L.

    2011-01-01

    To verify satisfaction of communication requirements imposed by unique missions, as early as 2000, the Communications Networking Group at the Jet Propulsion Laboratory (JPL) saw the need for an environment to support interplanetary communication protocol design, validation, and characterization. JPL's Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE), described in Simulator of Space Communication Networks (NPO-41373) NASA Tech Briefs, Vol. 29, No. 8 (August 2005), p. 44, combines various commercial, non-commercial, and in-house custom tools for simulation and performance analysis of space networks. The MACHETE environment supports orbital analysis, link budget analysis, communications network simulations, and hardware-in-the-loop testing. As NASA is expanding its Space Communications and Navigation (SCaN) capabilities to support planned and future missions, building infrastructure to maintain services and developing enabling technologies, an important and broader role is seen for MACHETE in design-phase evaluation of future SCaN architectures. To support evaluation of the developing Delay Tolerant Networking (DTN) field and its applicability for space networks, JPL developed MACHETE models for DTN Bundle Protocol (BP) and Licklider/Long-haul Transmission Protocol (LTP). DTN is an Internet Research Task Force (IRTF) architecture providing communication in and/or through highly stressed networking environments such as space exploration and battlefield networks. Stressed networking environments include those with intermittent (predictable and unknown) connectivity, large and/or variable delays, and high bit error rates. To provide its services over existing domain specific protocols, the DTN protocols reside at the application layer of the TCP/IP stack, forming a store-and-forward overlay network. The key capabilities of the Bundle Protocol include custody-based reliability, the ability to cope with intermittent connectivity

  15. An Efficient Multitask Scheduling Model for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Hongsheng Yin

    2014-01-01

    Full Text Available The sensor nodes of a multitask wireless network are constrained in performance-driven computation. Theoretical study of the data processing model of wireless sensor nodes suggests that satisfying the quality-of-service (QoS) requirements of multiple application networks improves the efficiency of the network. In this paper, we present a priority-based data processing model for multitask sensor nodes in a multitask wireless sensor network architecture. The proposed model is derived from the M/M/1 queuing model of queuing theory, which estimates the average delay of data packets passing through sensor nodes. The model is validated with real data from the Huoerxinhe Coal Mine. By applying the proposed priority-based data processing model in a multitask wireless sensor network, the average delay of data packets at a sensor node is reduced by nearly 50%. The simulation results show that the proposed model can efficiently improve the throughput of the network.
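    The M/M/1 estimate behind such models is the mean sojourn time W = 1/(μ − λ) for arrival rate λ and service rate μ. A minimal sketch; the 50% figure in the abstract comes from the full priority model, not from this toy calculation:

```python
def mm1_average_delay(arrival_rate, service_rate):
    """Mean sojourn time W = 1 / (mu - lambda) of an M/M/1 queue.
    Requires arrival_rate < service_rate for a stable queue."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Illustration: lowering the arrival rate seen by high-priority traffic
# (as a priority scheme might) sharply cuts its average delay.
w_shared = mm1_average_delay(arrival_rate=80, service_rate=100)    # 0.05 s
w_priority = mm1_average_delay(arrival_rate=40, service_rate=100)  # ~0.0167 s
```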

  16. Mobility Model for Tactical Networks

    Science.gov (United States)

    Rollo, Milan; Komenda, Antonín

    In this paper a synthetic mobility model which represents behavior and movement pattern of heterogeneous units in disaster relief and battlefield scenarios is proposed. These operations usually take place in environment without preexisting communication infrastructure and units thus have to be connected by wireless communication network. Units cooperate to fulfill common tasks and communication network has to serve high amount of communication requests, especially data, voice and video stream transmissions. To verify features of topology control, routing and interaction protocols software simulations are usually used, because of their scalability, repeatability and speed. Behavior of all these protocols relies on the mobility model of the network nodes, which has to resemble real-life movement pattern. Proposed mobility model is goal-driven and provides support for various types of units, group mobility and realistic environment model with obstacles. Basic characteristics of the mobility model like node spatial distribution and average node degree were analyzed.
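    For contrast with the goal-driven, group-based model proposed above, the classic random-waypoint pattern that many simulators default to can be sketched in a few lines; it has none of the group mobility or obstacle handling the paper describes, and all values are illustrative.

```python
import math
import random

def random_waypoint_step(pos, target, speed, dt, area, rng):
    """One step of the random-waypoint model: move toward the current
    target; on arrival, draw a fresh target uniformly from the square
    [0, area] x [0, area]."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed * dt:                       # waypoint reached
        new_target = (rng.uniform(0, area), rng.uniform(0, area))
        return target, new_target
    frac = speed * dt / dist
    return (pos[0] + dx * frac, pos[1] + dy * frac), target

rng = random.Random(1)
pos, target = (0.0, 0.0), (50.0, 50.0)
for _ in range(100):                             # simulate 100 seconds
    pos, target = random_waypoint_step(pos, target, speed=5.0, dt=1.0,
                                       area=100.0, rng=rng)
```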

  17. An Improved Network Security Situation Awareness Model

    Directory of Open Access Journals (Sweden)

    Li Fangwei

    2015-08-01

    Full Text Available In order to reflect the performance of network security assessment fully and accurately, a new network security situation awareness model based on information fusion was proposed. The network security situation is the result of fusing three aspects of evaluation. In terms of attack, to improve the accuracy of evaluation, a situation assessment method for DDoS attacks based on packet-level information was proposed. In terms of vulnerability, an improved Common Vulnerability Scoring System (CVSS) was proposed, making the assessment more comprehensive. In terms of node weights, a method of calculating combined weights and optimizing the result with a Sequential Quadratic Programming (SQP) algorithm, which reduces the uncertainty of the fusion, was proposed. To verify the validity and necessity of the method, a testing platform was built and used for evaluation against the DARPA 2000 data sets. Experiments show that the method can improve the accuracy of evaluation results.

  18. On validation of multibody musculoskeletal models

    DEFF Research Database (Denmark)

    Lund, Morten Enemark; de Zee, Mark; Andersen, Michael Skipper

    2012-01-01

    This paper reviews the opportunities to validate multibody musculoskeletal models in view of the current transition of musculoskeletal modelling from a research topic to a practical simulation tool in product design, healthcare and other important applications. This transition creates a new need for justification that the models are adequate representations of the systems they simulate. The need for a consistent terminology and established standards is identified, and knowledge from fields with a more progressed state-of-the-art in Verification and Validation is introduced. A number of practical steps for improvement of the validation of multibody musculoskeletal models are pointed out and directions for future research in the field are proposed. It is our hope that a more structured approach to model validation can help to improve the credibility of musculoskeletal models.

  19. Modelling freeway networks by hybrid stochastic models

    OpenAIRE

    Boel, R.; Mihaylova, L.

    2004-01-01

    Traffic flow on freeways is a nonlinear, many-particle phenomenon, with complex interactions between the vehicles. This paper presents a stochastic hybrid model of freeway traffic at a time scale and at a level of detail suitable for on-line flow estimation, for routing and ramp metering control. The model describes the evolution of continuous and discrete state variables. The freeway is considered as a network of components, each component representing a different section of the network. The...

  20. A last updating evolution model for online social networks

    Science.gov (United States)

    Bu, Zhan; Xia, Zhengyou; Wang, Jiandong; Zhang, Chengcui

    2013-05-01

    As information technology has advanced, people are turning to electronic media more frequently for communication, and social relationships are increasingly found on online channels. However, there is very limited knowledge about the actual evolution of online social networks. In this paper, we propose and study a novel evolution network model with the new concept of “last updating time”, which exists in many real-life online social networks. The last updating evolution network model can maintain the robustness of scale-free networks and can improve the network's resilience against intentional attacks. What is more, we also found that it has the “small-world effect”, which is an inherent property of most social networks. Simulation experiments based on this model show that the results and the real-life data are consistent, which suggests that our model is valid.
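    A hedged sketch of the idea: a scale-free growth process (preferential attachment) blended with a recency term. The paper's exact weighting of “last updating time” is not reproduced here; the blend below is only an assumed form, with illustrative parameters.

```python
import random

def grow_network(n, m, recency_bias=0.5, seed=0):
    """Grow a network to n nodes.  Each new node attaches to m existing
    nodes chosen with probability proportional to degree plus a term
    favouring recently updated nodes (an assumed stand-in for the
    paper's "last updating time" mechanism)."""
    rng = random.Random(seed)
    edges = [(0, 1)]
    degree = {0: 1, 1: 1}
    last_update = {0: 1, 1: 1}
    for new in range(2, n):
        weights = {v: degree[v] + recency_bias * last_update[v] / new
                   for v in degree}
        targets = set()
        while len(targets) < min(m, len(degree)):
            # roulette-wheel selection over the attachment weights
            r = rng.uniform(0, sum(weights.values()))
            acc = 0.0
            for v, w in weights.items():
                acc += w
                if acc >= r:
                    targets.add(v)
                    break
        for v in targets:
            edges.append((new, v))
            degree[v] += 1
            last_update[v] = new       # attaching counts as an update
        degree[new] = len(targets)
        last_update[new] = new
    return edges, degree

edges, degree = grow_network(100, m=2, seed=42)
```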

  1. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam Univesity; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  2. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Full Text Available Model Driven Engineering (MDE) is an emerging approach of software engineering. MDE emphasizes the construction of models from which the implementation should be derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling makes it possible to give a syntactic structure to source and target models. However, semantic requirements have to be imposed on source and target models. A given transformation will be sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM based transformations. Adopting a logic programming based transformational approach, we show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre and post conditions to properties of the transformation (invariants. The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.

  3. Network model of security system

    Directory of Open Access Journals (Sweden)

    Adamczyk Piotr

    2016-01-01

    Full Text Available The article presents the concept of building a network security model and its application in the process of risk analysis. It indicates the possibility of a new definition of the role of network models in safety analysis. Special attention was paid to the development of an algorithm describing the process of identifying assets, vulnerabilities and threats in a given context. The aim of the article is to present how this algorithm reduces the complexity of the problem by eliminating from the base model those components that have no links with other components; as a result, it was possible to build a real network model corresponding to reality.
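    The elimination step, dropping assets that have no links to any other component, reduces to a one-pass filter over the link list. A minimal sketch with hypothetical assets:

```python
def prune_isolated(assets, links):
    """Keep only assets that appear in at least one link -- the
    reduction step described in the abstract, which shrinks the base
    model before the network model is built."""
    linked = {a for pair in links for a in pair}
    return [a for a in assets if a in linked]

assets = ["server", "router", "printer", "badge-reader"]
links = [("server", "router"), ("router", "badge-reader")]
remaining = prune_isolated(assets, links)
# "printer" has no links to other components and is removed.
```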

  4. Validation of Mobility Simulations via Measurement Drive Tests in an Operational Network

    DEFF Research Database (Denmark)

    Gimenez, Lucas Chavarria; Barbera, Simone; Polignano, Michele

    2015-01-01

    Simulations play a key role in validating new concepts in cellular networks, since most of the features proposed and introduced into the standards are typically first studied by means of simulations. In order to increase the trustworthiness of the simulation results, proper models and settings must correspond to reality. The presented study is based on drive test measurements and explicit simulations of an operator network in the city of Aalborg (Denmark), modelling a real 3D environment and using a commonly accepted dynamic system-level simulation methodology. In short, the presented results show...

  5. A Prototype Network for Remote Sensing Validation in China

    Directory of Open Access Journals (Sweden)

    Mingguo Ma

    2015-04-01

    Full Text Available Validation is an essential and important step before the application of remote sensing products. This paper introduces a prototype of the validation network for remote sensing products in China (VRPC). The VRPC aims to improve remote sensing products at a regional scale in China. These improvements will enhance the applicability of the key remote sensing products in the understanding and interpretation of typical land surface processes in China. The framework of the VRPC is introduced first, including its four basic components. Then, the basic selection principles of the observation sites are described, and the principles for the validation of the remote sensing products are established. The VRPC will be realized in stages. In the first stage, four stations that have improved remote sensing observation facilities have been incorporated according to the selection principles. Certain core observation sites have been constructed at these stations. Next, the Heihe Station is introduced in detail as an example. The three levels of observation (the research base, pixel-scale validation sites, and regional coverage) adopted by the Heihe Station are carefully explained. The pixel-scale validation sites with nested multi-scale observation systems at this station are its most unique feature, and these sites aim to solve some key scientific problems associated with remote sensing product validation (e.g., the scale effect and scale transformation). Multi-year in situ measurements will ensure the high accuracy and inter-annual validity of the land products, which will provide dynamic regional monitoring and simulation capabilities in China. The observation sites of the VRPC are open, with the goal of increasing cooperation and exchange with global programs.

  6. Sensor data validation and reconstruction in water networks : a methodology and software implementation

    OpenAIRE

    García Valverde, Diego; Quevedo Casín, Joseba Jokin; Puig Cayuela, Vicenç; Cugueró Escofet, Miquel Àngel

    2014-01-01

    In this paper, a data validation and reconstruction methodology that can be applied to the sensors used for real-time monitoring in water networks is presented. On the one hand, a validation approach based on quality levels is described to detect potential invalid and missing data. On the other hand, the reconstruction strategy is based on a set of temporal and spatial models used to estimate missing/invalid data with the model estimation providing the best fit. A software tool implementing t...

  7. A Multilayer Model of Computer Networks

    OpenAIRE

    Shchurov, Andrey A.

    2015-01-01

    The fundamental concept of applying the system methodology to network analysis declares that network architecture should take into account services and applications which this network provides and supports. This work introduces a formal model of computer networks on the basis of the hierarchical multilayer networks. In turn, individual layers are represented as multiplex networks. The concept of layered networks provides conditions of top-down consistency of the model. Next, we determined the...

  8. Validating agent based models through virtual worlds.

    Energy Technology Data Exchange (ETDEWEB)

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

    As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOGs), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments provide a rich social setting where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists. This report outlines our work examining group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior.

  9. Data modeling of network dynamics

    Science.gov (United States)

    Jaenisch, Holger M.; Handley, James W.; Faucheux, Jeffery P.; Harris, Brad

    2004-01-01

    This paper highlights Data Modeling theory and its use for text data mining as a graphical network search engine. Data Modeling is then used to create a real-time filter capable of monitoring network traffic down to the port level for unusual dynamics and changes in business as usual. This is accomplished in an unsupervised fashion without a priori knowledge of abnormal characteristics. Two novel methods for converting streaming binary data into a form amenable to graphics based search and change detection are introduced. These techniques are then successfully applied to 1999 KDD Cup network attack data log-on sessions to demonstrate that Data Modeling can detect attacks without prior training on any form of attack behavior. Finally, two new methods for data encryption using these ideas are proposed.

  10. Validating EHR clinical models using ontology patterns.

    Science.gov (United States)

    Martínez-Costa, Catalina; Schulz, Stefan

    2017-11-04

    Clinical models are artefacts that specify how information is structured in electronic health records (EHRs). However, the makeup of clinical models is not guided by any formal constraint beyond a semantically vague information model. We address this gap by advocating ontology design patterns as a mechanism that makes the semantics of clinical models explicit. This paper demonstrates how ontology design patterns can validate existing clinical models using SHACL. Based on the Clinical Information Modelling Initiative (CIMI), we show how ontology patterns detect both modeling and terminology binding errors in CIMI models. SHACL, a W3C constraint language for the validation of RDF graphs, builds on the concept of "Shape", a description of data in terms of expected cardinalities, datatypes and other restrictions. SHACL, as opposed to OWL, subscribes to the Closed World Assumption (CWA) and is therefore more suitable for the validation of clinical models. We have demonstrated the feasibility of the approach by manually describing the correspondences between six CIMI clinical models represented in RDF and two SHACL ontology design patterns. Using a Java-based SHACL implementation, we found at least eleven modeling and binding errors within these CIMI models. This demonstrates the usefulness of ontology design patterns not only as a modeling tool but also as a tool for validation. Copyright © 2017. Published by Elsevier Inc.

  11. Base Flow Model Validation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The program focuses on turbulence modeling enhancements for predicting high-speed rocket base flows. A key component of the effort is the collection of high-fidelity...

  12. Thermal Network Modelling Handbook

    Science.gov (United States)

    1972-01-01

    Thermal mathematical modelling is discussed in detail. A three-fold purpose was established: (1) to acquaint the new user with the terminology and concepts used in thermal mathematical modelling, (2) to present the more experienced and occasional user with quick formulas and methods for solving everyday problems, coupled with study cases which lend insight into the relationships that exist among the various solution techniques and parameters, and (3) to begin to catalog in an orderly fashion the common formulas which may be applied to automated conversational language techniques.

  13. Innovation, Product Development, and New Business Models in Networks: How to come from case studies to a valid and operational theory

    DEFF Research Database (Denmark)

    Rasmussen, Erik Stavnsager; Jørgensen, Jacob Høj; Goduscheit, René Chester

    2007-01-01

    We have in the research project NEWGIBM (New Global ICT based Business Models) during 2005 and 2006 closely cooperated with a group of firms. The focus of the project has been the development of new business models (and innovation) in close cooperation with multiple partners. These partners have been...... step is a brief presentation of the case study methodology and our research setting, with three fundamental research questions regarding methodology addressed in this paper. The next part of the paper presents a distinction between outcome-driven and event-driven research and makes a case...

  14. Model validation: Correlation for updating

    Indian Academy of Sciences (India)

    Abstract. In this paper, a review is presented of the various methods which are available for the purpose of performing a systematic comparison and correlation between two sets of vibration data. In the present case, the application of interest is in conducting this correlation process as a prelude to model correction or updating activity.

  15. Model validation: Correlation for updating

    Indian Academy of Sciences (India)

    In this paper, a review is presented of the various methods which are available for the purpose of performing a systematic comparison and correlation between two sets of vibration data. In the present case, the application of interest is in conducting this correlation process as a prelude to model correlation or updating activity ...

  16. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2017-11-02

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used twofold: to design the so-called fundamental model of a plant and to capture the uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control, an instantaneous linearization is applied, which makes it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that the cost function is monotonically decreasing with respect to time. The derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
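    The instantaneous-linearization step can be sketched as follows: differentiate the neural plant model around the current operating point so that the one-step prediction becomes affine in the control move, at which point the MPC cost is a simple quadratic. The tiny network and its weights below are placeholders, not an identified servomechanism model.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, w2 = rng.normal(size=(5, 2)), rng.normal(size=5)

def plant_model(x):
    """One-hidden-layer model y = w2 . tanh(W1 x), with x = [y_prev, u_prev]."""
    return w2 @ np.tanh(W1 @ x)

def linearize(x, eps=1e-6):
    """Finite-difference gradient: y ~ y0 + g[0]*dy_prev + g[1]*du_prev near x."""
    y0 = plant_model(x)
    g = np.array([(plant_model(x + eps * e) - y0) / eps for e in np.eye(2)])
    return y0, g

x0 = np.array([0.3, 0.1])
y0, g = linearize(x0)

# With the model affine in du, the unconstrained one-step MPC problem
# min (y - r)^2 + lam*du^2 has the closed-form minimizer below.
r, lam = 1.0, 0.1
du = g[1] * (r - y0) / (g[1] ** 2 + lam)
```

In the constrained case the same affine model feeds a quadratic-programming solver instead of this closed form.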

  17. Network Models of Mechanical Assemblies

    Science.gov (United States)

    Whitney, Daniel E.

    Recent network research has sought to characterize complex systems with a number of statistical metrics, such as power law exponent (if any), clustering coefficient, community behavior, and degree correlation. Use of such metrics represents a choice of level of abstraction, a balance of generality and detailed accuracy. It has been noted that "social networks" consistently display clustering coefficients that are higher than those of random or generalized random networks, that they have small world properties such as short path lengths, and that they have positive degree correlations (assortative mixing). "Technological" or "non-social" networks display many of these characteristics except that they generally have negative degree correlations (disassortative mixing). [Newman 2003i] In this paper we examine network models of mechanical assemblies. Such systems are well understood functionally. We show that there is a cap on their average nodal degree and that they have negative degree correlations (disassortative mixing). We identify specific constraints arising from first principles, their structural patterns, and engineering practice that suggest why they have these properties. In addition, we note that their main "motif" is closed loops (as it is for electric and electronic circuits), a pattern that conventional network analysis does not detect but which is used by software intended to aid in the design of such systems.

  18. Disruption Tolerant Networking Flight Validation Experiment on NASA's EPOXI Mission

    Science.gov (United States)

    Wyatt, Jay; Burleigh, Scott; Jones, Ross; Torgerson, Leigh; Wissler, Steve

    2009-01-01

    In October and November of 2008, the Jet Propulsion Laboratory installed and tested essential elements of Delay/Disruption Tolerant Networking (DTN) technology on the Deep Impact spacecraft. This experiment, called Deep Impact Network Experiment (DINET), was performed in close cooperation with the EPOXI project which has responsibility for the spacecraft. During DINET some 300 images were transmitted from the JPL nodes to the spacecraft. Then they were automatically forwarded from the spacecraft back to the JPL nodes, exercising DTN's bundle origination, transmission, acquisition, dynamic route computation, congestion control, prioritization, custody transfer, and automatic retransmission procedures, both on the spacecraft and on the ground, over a period of 27 days. All transmitted bundles were successfully received, without corruption. The DINET experiment demonstrated DTN readiness for operational use in space missions. This activity was part of a larger NASA space DTN development program to mature DTN to flight readiness for a wide variety of mission types by the end of 2011. This paper describes the DTN protocols, the flight demo implementation, validation metrics which were created for the experiment, and validation results.

  19. Service entity network virtualization architecture and model

    Science.gov (United States)

    Jin, Xue-Guang; Shou, Guo-Chu; Hu, Yi-Hong; Guo, Zhi-Gang

    2017-07-01

    Communication networks can be treated as complex networks carrying a variety of services, and a service can be treated as a network composed of functional entities. There is growing interest in multiplexed service entities, where individual entities and links can be used by different services simultaneously. Entities and their relationships constitute a service entity network. In this paper, we introduce a service entity network virtualization architecture, including a service entity network hierarchical model, a service entity network model, service implementation, and the deployment of service entity networks. Service-entity-network-oriented multiplex planning models are also studied; many of these models are characterized by significant multiplexing of links or entities across different service entity networks. Service entity networks are mapped onto shared physical resources by a dynamic resource allocation controller. The efficiency of the proposed architecture is illustrated in a simulation environment that allows for comparative performance evaluation. The results show that, compared to a traditional networking architecture, this architecture achieves better performance.

  20. Runoff Modelling in Urban Storm Drainage by Neural Networks

    DEFF Research Database (Denmark)

    Rasmussen, Michael R.; Brorsen, Michael; Schaarup-Jensen, Kjeld

    1995-01-01

    A neural network is used to simulate flow and water levels in a sewer system. The calibration of the neural network is based on a few measured events, and the network is validated against measured events as well as flow simulated with the MOUSE model (Lindberg and Joergensen, 1986). The neural...... network is used to compute flow or water level at selected points in the sewer system, and to forecast the flow from a small residential area. The main advantages of the neural network are the built-in self-calibration procedure and high speed performance, but the neural network cannot be used to extract...... knowledge of the runoff process. The neural network was found to simulate 150 times faster than e.g. the MOUSE model....

  1. Polymer networks: Modeling and applications

    Science.gov (United States)

    Masoud, Hassan

    Polymer networks are an important class of materials that are ubiquitously found in natural, biological, and man-made systems. The complex mesoscale structure of these soft materials has made it difficult for researchers to fully explore their properties. In this dissertation, we introduce a coarse-grained computational model for permanently cross-linked polymer networks that can properly capture common properties of these materials. We use this model to study several practical problems involving dry and solvated networks. Specifically, we analyze the permeability and diffusivity of polymer networks under mechanical deformations, we examine the release of encapsulated solutes from microgel capsules during volume transitions, and we explore the complex tribological behavior of elastomers. Our simulations reveal that the network transport properties are defined by the network porosity and by the degree of network anisotropy due to mechanical deformations. In particular, the permeability of mechanically deformed networks can be predicted based on the alignment of network filaments that is characterized by a second order orientation tensor. Moreover, our numerical calculations demonstrate that responsive microcapsules can be effectively utilized for steady and pulsatile release of encapsulated solutes. We show that swollen gel capsules allow steady, diffusive release of nanoparticles and polymer chains, whereas gel deswelling causes burst-like discharge of solutes driven by an outward flow of the solvent initially enclosed within a shrinking capsule. We further demonstrate that this hydrodynamic release can be regulated by introducing rigid microscopic rods in the capsule interior. We also probe the effects of velocity, temperature, and normal load on the sliding of elastomers on smooth and corrugated substrates. Our friction simulations predict a bell-shaped curve for the dependence of the friction coefficient on the sliding velocity. Our simulations also illustrate ...

  2. Target-Centric Network Modeling

    DEFF Research Database (Denmark)

    Mitchell, Dr. William L.; Clark, Dr. Robert M.

    In Target-Centric Network Modeling: Case Studies in Analyzing Complex Intelligence Issues, authors Robert Clark and William Mitchell take an entirely new approach to teaching intelligence analysis. Unlike any other book on the market, it offers case study scenarios using actual intelligence......, and collaborative sharing in the process of creating a high-quality, actionable intelligence product. The case studies reflect the complexity of twenty-first century intelligence issues by dealing with multi-layered target networks that cut across political, economic, social, technological, and military issues....... Working through these cases, students will learn to manage and evaluate realistic intelligence accounts....

  3. CNEM: Cluster Based Network Evolution Model

    Directory of Open Access Journals (Sweden)

    Sarwat Nizamani

    2015-01-01

    This paper presents a network evolution model based on a clustering approach. The proposed approach depicts network evolution, demonstrating network formation from individual nodes to a fully evolved network. An agglomerative hierarchical clustering method is applied for the evolution of the network. In the paper, we present three case studies which show the evolution of networks from scratch. These case studies include: the terrorist network of the 9/11 incidents, the terrorist network of the WMD (Weapons of Mass Destruction) plot against France, and a network of tweets discussing a topic. The network of 9/11 is also used for evaluation, using other social network analysis methods, which shows that the clusters created using the proposed model of network evolution are of good quality; thus the proposed method can be used by law enforcement agencies to further investigate criminal networks.
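    The agglomerative step at the heart of such a model can be sketched with single-linkage merging: start from isolated nodes and repeatedly connect the two closest clusters, so the network grows from individuals to one component. The pairwise distances below are invented placeholders for whatever affinity (co-occurrence, communication frequency) a real case study would supply.

```python
import itertools

# invented pairwise distances between four actors
dist = {frozenset(p): d for p, d in {
    ("A", "B"): 1.0, ("A", "C"): 4.0, ("A", "D"): 5.0,
    ("B", "C"): 2.0, ("B", "D"): 6.0, ("C", "D"): 3.0,
}.items()}

def closest_pair(c1, c2):
    """Single linkage: the closest cross-cluster pair of nodes."""
    return min(((a, b) for a in c1 for b in c2),
               key=lambda p: dist[frozenset(p)])

clusters = [{"A"}, {"B"}, {"C"}, {"D"}]
edges = []                                  # links added, in evolution order
while len(clusters) > 1:
    i, j = min(itertools.combinations(range(len(clusters)), 2),
               key=lambda ij: dist[frozenset(closest_pair(clusters[ij[0]],
                                                          clusters[ij[1]]))])
    edges.append(closest_pair(clusters[i], clusters[j]))
    clusters[i] |= clusters[j]
    del clusters[j]                         # j > i, so index i stays valid
```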

  4. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo

    2015-09-15

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.

  5. Numerical model representation and validation strategies

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1997-10-01

    This paper describes model representation and validation strategies for use in numerical tools that define models in terms of topology, geometry, or topography. Examples of such tools include Computer-Assisted Engineering (CAE), Computer-Assisted Manufacturing (CAM), Finite Element Analysis (FEA), and Virtual Environment Simulation (VES) tools. These tools represent either physical objects or conceptual ideas using numerical models for the purpose of posing a question, performing a task, or generating information. Dependence on these numerical representations require that models be precise, consistent across different applications, and verifiable. This paper describes a strategy for ensuring precise, consistent, and verifiable numerical model representations in a topographic framework. The main assertion put forth is that topographic model descriptions are more appropriate for numerical applications than topological or geometrical descriptions. A topographic model verification and validation methodology is presented.

  6. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    In this paper a new model validation procedure for a logistic regression model is presented. First, we present a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for the assessment of the performance of a given model, using an example taken from a management study.
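    The core of any such procedure — fit on one sample, judge only on another — fits in a few lines. The sketch below fits a one-variable logistic model by plain gradient descent on synthetic data and reports holdout accuracy as a stand-in for the richer performance measures the paper defines.

```python
import math, random

random.seed(42)
# synthetic data: the label follows the sign of x plus Gaussian noise
data = [((x,), 1 if x + random.gauss(0, 0.5) > 0 else 0)
        for x in [random.uniform(-2, 2) for _ in range(200)]]
train, valid = data[:150], data[150:]      # the holdout never touches fitting

w, b = 0.0, 0.0
for _ in range(500):                       # plain batch gradient descent
    gw = gb = 0.0
    for (x,), y in train:
        p = 1 / (1 + math.exp(-(w * x + b)))
        gw += (p - y) * x
        gb += p - y
    w -= 0.1 * gw / len(train)
    b -= 0.1 * gb / len(train)

# validation accuracy, measured on data the fit never saw
acc = sum((1 / (1 + math.exp(-(w * x + b))) > 0.5) == (y == 1)
          for (x,), y in valid) / len(valid)
```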

  7. Social Network Data Validity: The Example of the Social Network of Caregivers of Older Persons with Alzheimer-Type Dementia

    Science.gov (United States)

    Carpentier, Normand

    2007-01-01

    This article offers reflection on the validity of relational data such as used in social network analysis. Ongoing research on the transformation of the support network of caregivers of persons with an Alzheimer-type disease provides the data to fuel the debate on the validity of participant report. More specifically, we sought to understand the…

  8. Structural system identification: Structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  9. Mathematical Modelling Plant Signalling Networks

    KAUST Repository

    Muraro, D.

    2013-01-01

    During the last two decades, molecular genetic studies and the completion of the sequencing of the Arabidopsis thaliana genome have increased knowledge of hormonal regulation in plants. These signal transduction pathways act in concert through gene regulatory and signalling networks whose main components have begun to be elucidated. Our understanding of the resulting cellular processes is hindered by the complex, and sometimes counter-intuitive, dynamics of the networks, which may be interconnected through feedback controls and cross-regulation. Mathematical modelling provides a valuable tool to investigate such dynamics and to perform in silico experiments that may not be easily carried out in a laboratory. In this article, we firstly review general methods for modelling gene and signalling networks and their application in plants. We then describe specific models of hormonal perception and cross-talk in plants. This mathematical analysis of sub-cellular molecular mechanisms paves the way for more comprehensive modelling studies of hormonal transport and signalling in a multi-scale setting. © EDP Sciences, 2013.

  10. Global precipitation measurements for validating climate models

    Science.gov (United States)

    Tapiador, F. J.; Navarro, A.; Levizzani, V.; García-Ortega, E.; Huffman, G. J.; Kidd, C.; Kucera, P. A.; Kummerow, C. D.; Masunaga, H.; Petersen, W. A.; Roca, R.; Sánchez, J.-L.; Tao, W.-K.; Turk, F. J.

    2017-11-01

    The advent of global precipitation data sets with increasing temporal span has made it possible to use them for validating climate models. In order to fulfill the requirement of global coverage, existing products integrate satellite-derived retrievals from many sensors with direct ground observations (gauges, disdrometers, radars), which are used as reference for the satellites. While the resulting product can be deemed as the best-available source of quality validation data, awareness of the limitations of such data sets is important to avoid extracting wrong or unsubstantiated conclusions when assessing climate model abilities. This paper provides guidance on the use of precipitation data sets for climate research, including model validation and verification for improving physical parameterizations. The strengths and limitations of the data sets for climate modeling applications are presented, and a protocol for quality assurance of both observational databases and models is discussed. The paper helps elaborating the recent IPCC AR5 acknowledgment of large observational uncertainties in precipitation observations for climate model validation.
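    A minimal version of such a model-versus-observation comparison: bias, RMSE, and correlation between a reference precipitation series and a model series. Both series below are synthetic placeholders, not real satellite or climate-model output.

```python
import numpy as np

rng = np.random.default_rng(7)
obs = rng.gamma(shape=2.0, scale=1.5, size=120)         # "observed" mm/day
mod = 0.8 * obs + rng.normal(0.0, 0.4, size=120) + 0.5  # biased, noisy "model"

bias = float(np.mean(mod - obs))                  # systematic offset
rmse = float(np.sqrt(np.mean((mod - obs) ** 2)))  # total error magnitude
corr = float(np.corrcoef(mod, obs)[0, 1])         # pattern agreement
```

Note that RMSE always bounds the absolute bias from above, so the two together separate systematic from random error.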

  11. Network models of frugivory and seed dispersal: Challenges and opportunities

    Science.gov (United States)

    Carlo, Tomás A.; Yang, Suann

    2011-11-01

    Network analyses have emerged as a new tool to study frugivory and seed dispersal (FSD) mutualisms because networks can model and simplify the complexity of multiple community-wide species interactions. Moreover, network theory suggests that structural properties, such as the presence of highly generalist species, are linked to the stability of mutualistic communities. However, we still lack empirical validation of network model predictions. Here we outline new research avenues to connect network models to FSD processes, and illustrate the challenges and opportunities of this tool with a field study. We hypothesized that generalist frugivores would be important for forest stability by dispersing seeds into deforested areas and initiating reforestation. We then constructed a network of plant-frugivore interactions using published data and identified the most generalist frugivores. To test the importance of generalists we measured: 1) the frequency with which frugivores moved between pasture and forest, 2) the bird-generated seed rain under perches in the pasture, and 3) the perching frequency of birds above seed traps. The generalist frugivores in the forest network were not important for seed dispersal into pastures, and thus for forest recovery, because the forest network excluded habitat heterogeneities, frugivore behavior, and movements. More research is needed to develop ways to incorporate relevant FSD processes into network models in order for these models to be more useful to community ecology and conservation. The network framework can serve to spark and renew interest in FSD and further our understanding of plant-animal communities.

  12. A soil moisture network for SMOS validation in Western Denmark

    DEFF Research Database (Denmark)

    Bircher, Simone; Skou, N.; Jensen, Karsten Høgh

    2012-01-01

    The Soil Moisture and Ocean Salinity Mission (SMOS) acquires surface soil moisture data of global coverage every three days. Product validation for a range of climate and environmental conditions across continents is a crucial step. For this purpose, a soil moisture and soil temperature sensor network was established, following three design criteria: (1) placement within a SMOS pixel (44 × 44 km) which is representative of the land surface conditions of the catchment and with minimal impact from open water, (2) arrangement of three network clusters along the precipitation gradient, and (3) distribution of the stations according to the respective fractions of classes representing the prevailing environmental conditions. Overall, measured moisture and temperature patterns could be related to the respective land cover and soil conditions. Texture-dependency of the 0–5 cm soil moisture measurements was demonstrated. Regional differences in 0–5 cm soil moisture, temperature and precipitation ...

  13. Energy modelling in sensor networks

    Directory of Open Access Journals (Sweden)

    D. Schmidt

    2007-06-01

    Wireless sensor networks are one of the key enabling technologies for the vision of ambient intelligence. Energy resources for sensor nodes are very scarce. A key challenge is the design of energy-efficient communication protocols. Models of the energy consumption are needed to accurately simulate the efficiency of a protocol or application design, and can also be used for automatic energy optimizations in a model-driven design process. We propose a novel methodology to create models for sensor nodes based on a few simple measurements. In a case study, the methodology was used to create models for MICAz nodes. The models were integrated in a simulation environment as well as in an SDL runtime framework of a model-driven design process. Measurements on a test application that was created automatically from an SDL specification showed an 80% reduction in energy consumption compared to an implementation without power-saving strategies.

  14. Probabilistic logic modeling of network reliability for hybrid network architectures

    Energy Technology Data Exchange (ETDEWEB)

    Wyss, G.D.; Schriner, H.K.; Gaylor, T.R.

    1996-10-01

    Sandia National Laboratories has found that the reliability and failure modes of current-generation network technologies can be effectively modeled using fault tree-based probabilistic logic modeling (PLM) techniques. We have developed fault tree models that include various hierarchical networking technologies and classes of components interconnected in a wide variety of typical and atypical configurations. In this paper we discuss the types of results that can be obtained from PLMs and why these results are of great practical value to network designers and analysts. After providing some mathematical background, we describe the "plug-and-play" fault tree analysis methodology that we have developed for modeling connectivity and the provision of network services in several current-generation network architectures. Finally, we demonstrate the flexibility of the method by modeling the reliability of a hybrid example network that contains several interconnected ethernet, FDDI, and token ring segments. 11 refs., 3 figs., 1 tab.
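    The series/parallel arithmetic underneath such fault-tree models is straightforward: a path works only if every component on it works, and service survives if any redundant path works. The component failure probabilities below are invented round numbers, not measured network data.

```python
# Fault-tree flavoured reliability arithmetic: service is lost only if every
# redundant path fails (an OR of ANDs of component failures).
p_fail = {"hub": 0.01, "link_a": 0.05, "link_b": 0.05, "switch": 0.02}

def series_ok(*names):
    """A path works only if every component on it works (series combination)."""
    ok = 1.0
    for n in names:
        ok *= 1 - p_fail[n]
    return ok

# two independent paths from host to backbone, sharing no components
path1 = series_ok("hub", "link_a")
path2 = series_ok("switch", "link_b")
p_connected = 1 - (1 - path1) * (1 - path2)   # parallel (redundant) paths
```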

  15. Generalization performance of regularized neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1994-01-01

    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...

  16. Plant Growth Models Using Artificial Neural Networks

    Science.gov (United States)

    Bubenheim, David

    1997-01-01

    In this paper, we descrive our motivation and approach to devloping models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.

  17. Introducing Synchronisation in Deterministic Network Models

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.

    2006-01-01

    The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented leading...

  18. Validating the passenger traffic model for Copenhagen

    DEFF Research Database (Denmark)

    Overgård, Christian Hansen; VUK, Goran

    2006-01-01

    The paper presents a comprehensive validation procedure for the passenger traffic model for Copenhagen based on external data from the Danish national travel survey and traffic counts. The model was validated for the years 2000 to 2004, with 2004 being of particular interest because the Copenhagen Metro became operational in autumn 2002. We observed that forecasts from the demand sub-models agree well with the data from the 2000 national travel survey, with the mode choice forecasts in particular being a good match with the observed modal split. The results of the 2000 car assignment model ... the model's base trip matrices. Second, a dialog between researchers and the Ministry of Transport has been initiated to discuss the need to upgrade the Copenhagen model, e.g. switching to an activity-based paradigm and improving assignment procedures.

  19. LANL*V2.0: global modeling and validation

    Directory of Open Access Journals (Sweden)

    S. Zaharia

    2011-08-01

    We describe in this paper the new version of LANL*, an artificial neural network (ANN) for calculating the magnetic drift invariant L*. This quantity is used for modeling radiation belt dynamics and for space weather applications. We have implemented the following enhancements in the new version: (1) we have removed the limitation to geosynchronous orbit, and the model can now be used for a much larger region; (2) the new version is based on the improved magnetic field model by Tsyganenko and Sitnov (2005) (TS05) instead of the older model by Tsyganenko et al. (2003). We have validated the model and compared our results to L* calculations with the TS05 model based on ephemerides for CRRES, Polar, GPS, a LANL geosynchronous satellite, and a virtual RBSP-type orbit. We find that the neural network performs very well for all these orbits, with a typically small error ΔL*. The LANL* V2.0 artificial neural network is orders of magnitude faster than traditional numerical field-line integration techniques with the TS05 model. It has applications to real-time radiation belt forecasting, analysis of data sets involving decades of satellite observations, and other problems in space weather.

  20. Modeling the Dynamics of Compromised Networks

    Energy Technology Data Exchange (ETDEWEB)

    Soper, B; Merl, D M

    2011-09-12

    Accurate predictive models of compromised networks would contribute greatly to improving the effectiveness and efficiency of the detection and control of network attacks. Compartmental epidemiological models have been applied to modeling attack vectors such as viruses and worms. We extend the application of these models to capture a wider class of dynamics applicable to cyber security. By making basic assumptions regarding network topology we use multi-group epidemiological models and reaction rate kinetics to model the stochastic evolution of a compromised network. The Gillespie Algorithm is used to run simulations under a worst case scenario in which the intruder follows the basic connection rates of network traffic as a method of obfuscation.
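    The stochastic evolution described here can be sketched with a single-group version of the model: clean machines (S) get compromised at rate beta·S·I/N, compromised machines (I) are remediated (R) at rate gamma·I, and the Gillespie algorithm draws exponential waiting times between events. Rates, population size, and seed below are illustrative only.

```python
import math, random

random.seed(1)
N = 100
S, I, R = N - 1, 1, 0            # one initially compromised machine
beta, gamma, t = 0.3, 0.1, 0.0
peak = I
while I > 0:
    rate_compromise = beta * S * I / N   # a clean machine gets compromised
    rate_remediate = gamma * I           # a compromised machine is cleaned up
    total = rate_compromise + rate_remediate
    t += -math.log(1.0 - random.random()) / total   # exponential waiting time
    if random.random() * total < rate_compromise:
        S, I = S - 1, I + 1
    else:
        I, R = I - 1, R + 1
    peak = max(peak, I)
# (t, peak) now give the outbreak duration and worst-case compromised count
```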

  1. Using Model Checking to Validate AI Planner Domain Models

    Science.gov (United States)

    Penix, John; Pecheur, Charles; Havelund, Klaus

    1999-01-01

    This report describes an investigation into using model checking to assist validation of domain models for the HSTS planner. The planner models are specified using a qualitative temporal interval logic with quantitative duration constraints. We conducted several experiments to translate the domain modeling language into the SMV, Spin and Murphi model checkers. This allowed a direct comparison of how the different systems would support specific types of validation tasks. The preliminary results indicate that model checking is useful for finding faults in models that may not be easily identified by generating test plans.
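    The essence of such model checking is explicit-state reachability: enumerate every state the domain can reach and test a property in each, returning a shortest counterexample trace when the property fails. The two-counter toy domain below is invented for illustration; HSTS domain models are far richer.

```python
from collections import deque

def successors(state):
    """Toy transition relation over bounded counters (x, y)."""
    x, y = state
    moves = [(x + 1, y), (x, y + 1)]
    if x >= 2:
        moves.append((0, y + 2))       # a "reset" action, enabled only when x >= 2
    return [(a, b) for a, b in moves if a <= 3 and b <= 3]

def check(initial, safe):
    """Breadth-first search; return a counterexample trace or None."""
    queue, seen = deque([(initial, [initial])]), {initial}
    while queue:
        state, trace = queue.popleft()
        if not safe(state):
            return trace               # shortest path to a violation
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [nxt]))
    return None

# claim to check: state (0, 2) is unreachable -- the search refutes it
trace = check((0, 0), safe=lambda s: s != (0, 2))
```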

  2. Validation of Hadronic Models in GEANT4

    Energy Technology Data Exchange (ETDEWEB)

    Koi, Tatsumi; Wright, Dennis H.; /SLAC; Folger, Gunter; Ivanchenko, Vladimir; Kossov, Mikhail; Starkov, Nikolai; /CERN; Heikkinen, Aatos; /Helsinki Inst. of Phys.; Truscott; Lei, Fan; /QinetiQ; Wellisch, Hans-Peter

    2007-09-26

    Geant4 is a software toolkit for the simulation of the passage of particles through matter. It has abundant hadronic models, from thermal neutron interactions to ultra-relativistic hadrons. An overview of validations in Geant4 hadronic physics is presented, based on thin-target measurements. In most cases, good agreement is found between Monte Carlo predictions and experimental data; however, several problems have been detected which require some improvement in the models.

  3. Multi-mode clustering model for hierarchical wireless sensor networks

    Science.gov (United States)

    Hu, Xiangdong; Li, Yongfu; Xu, Huifen

    2017-03-01

    The topology management, i.e., cluster maintenance, of wireless sensor networks (WSNs) is still a challenge due to their numerous nodes, diverse application scenarios, limited resources, and complex dynamics. To address this issue, a multi-mode clustering model (M2CM) is proposed in this study to maintain the clusters of hierarchical WSNs. In particular, unlike traditional time-triggered models based on whole-network, periodic operation, the M2CM is based on local, event-triggered operations. In addition, an adaptive local maintenance algorithm is designed for broken clusters in the WSN using the corresponding spatial-temporal demand changes. Numerical experiments are performed using the NS2 network simulation platform. Results validate the effectiveness of the proposed model with respect to network maintenance costs, node energy consumption, transmitted data, and network lifetime.

  4. A universal, fault-tolerant, non-linear analytic network for modeling and fault detection

    Energy Technology Data Exchange (ETDEWEB)

    Mott, J.E. [Advanced Modeling Techniques Corp., Idaho Falls, ID (United States); King, R.W.; Monson, L.R.; Olson, D.L.; Staffon, J.D. [Argonne National Lab., Idaho Falls, ID (United States)

    1992-03-06

    The similarities and differences of a universal network to normal neural networks are outlined. The description and application of a universal network is discussed by showing how a simple linear system is modeled by normal techniques and by universal network techniques. A full implementation of the universal network as universal process modeling software on a dedicated computer system at EBR-II is described and example results are presented. It is concluded that the universal network provides different feature recognition capabilities than a neural network and that the universal network can provide extremely fast, accurate, and fault-tolerant estimation, validation, and replacement of signals in a real system.

  5. An overview of mesoscale aerosol processes, comparisons, and validation studies from DRAGON networks

    Science.gov (United States)

    Holben, Brent N.; Kim, Jhoon; Sano, Itaru; Mukai, Sonoyo; Eck, Thomas F.; Giles, David M.; Schafer, Joel S.; Sinyuk, Aliaksandr; Slutsker, Ilya; Smirnov, Alexander; Sorokin, Mikhail; Anderson, Bruce E.; Che, Huizheng; Choi, Myungje; Crawford, James H.; Ferrare, Richard A.; Garay, Michael J.; Jeong, Ukkyo; Kim, Mijin; Kim, Woogyung; Knox, Nichola; Li, Zhengqiang; Lim, Hwee S.; Liu, Yang; Maring, Hal; Nakata, Makiko; Pickering, Kenneth E.; Piketh, Stuart; Redemann, Jens; Reid, Jeffrey S.; Salinas, Santo; Seo, Sora; Tan, Fuyi; Tripathi, Sachchida N.; Toon, Owen B.; Xiao, Qingyang

    2018-01-01

    Over the past 24 years, the AErosol RObotic NETwork (AERONET) program has provided highly accurate remote-sensing characterization of aerosol optical and physical properties for an increasingly extensive geographic distribution including all continents and many oceanic island and coastal sites. The measurements and retrievals from the AERONET global network have addressed satellite and model validation needs very well, but there have been challenges in making comparisons to similar parameters from in situ surface and airborne measurements. Additionally, with improved spatial and temporal satellite remote sensing of aerosols, there is a need for higher spatial-resolution ground-based remote-sensing networks. An effort to address these needs resulted in a number of field campaign networks called Distributed Regional Aerosol Gridded Observation Networks (DRAGONs) that were designed to provide a database for in situ and remote-sensing comparison and analysis of local to mesoscale variability in aerosol properties. This paper describes the DRAGON deployments that will continue to contribute to the growing body of research related to meso- and microscale aerosol features and processes. The research presented in this special issue illustrates the diversity of topics that has resulted from the application of data from these networks.

  6. Modeling the construct validity of the Berlin Intelligence Structure Model

    OpenAIRE

    Süß, Heinz-Martin; Beauducel, André

    2015-01-01

    The Berlin Intelligence Structure Model is a hierarchical and faceted model which is originally based on an almost representative sample of tasks found in the literature. Therefore, the Berlin Intelligence Structure Model is an integrative model with a high degree of generality. The present paper investigates the construct validity of this model by using different confirmatory factor analysis models. The results show that the model assumptions are supported only in part by the data. Moreover,...

  7. In Silico Genome-Scale Reconstruction and Validation of the Corynebacterium glutamicum Metabolic Network

    DEFF Research Database (Denmark)

    Kjeldsen, Kjeld Raunkjær; Nielsen, J.

    2009-01-01

    A genome-scale metabolic model of the Gram-positive bacterium Corynebacterium glutamicum ATCC 13032 was constructed, comprising 446 reactions and 411 metabolites, based on the annotated genome and available biochemical information. The network was analyzed using constraint-based methods. The model...... was extensively validated against published flux data, and flux distribution values were found to correlate well between simulations and experiments. The split pathway of the lysine synthesis pathway of C. glutamicum was investigated, and it was found that the direct dehydrogenase variant gave a higher lysine

  8. The Radiometric Calibration Network (RadCalNet): a Global Calibration and Validation Test Site Network

    Science.gov (United States)

    Czapla-Myers, J.; Bouvet, M.; Wenny, B. N.

    2016-12-01

    The Radiometric Calibration Network (RadCalNet) Working Group (WG) consists of national and academic groups from various countries who are involved in the radiometric calibration and validation of Earth-observing sensors. The current WG is composed of members from France, Italy, the Netherlands, the UK, the USA, and China. RadCalNet has been on the agenda of the Committee on Earth Observation Satellites (CEOS) Working Group on Calibration and Validation (WGCV) for years, and in 2014 it was formally assembled. The primary goal is to develop an SI-traceable standardized network of sites and processing protocols for the absolute radiometric calibration, intercalibration, and validation of Earth-observing sensors. Currently, RadCalNet is composed of four instrumented test sites that are located in the USA, France, Namibia, and China. A two-year prototyping phase was used to define the architecture of RadCalNet, demonstrate the operational concept using current satellite sensors, and to provide recommendations to CEOS WGCV for the transition of RadCalNet to an operational status. The final product is planned to be a daily hyperspectral (400-2500 nm) top-of-atmosphere reflectance in 30-minute intervals for a nadir-viewing sensor at each of the four test sites. The current schedule has RadCalNet becoming operational in late 2016 or early 2017.

  9. Network Bandwidth Utilization Forecast Model on High Bandwidth Network

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl; Sim, Alex

    2014-07-07

    With the increasing number of geographically distributed scientific collaborations and the scale of data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.

  10. Network bandwidth utilization forecast model on high bandwidth networks

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wuchert (William) [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-03-30

    With the increasing number of geographically distributed scientific collaborations and the scale of data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
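The records above pair STL decomposition with ARIMA via a statistics package; as a dependency-free stand-in, the sketch below applies the same accuracy criterion (forecast error within the standard deviation of the monitored measurements) to a seasonal-naive forecast of a synthetic path-utilization series. All series parameters are assumptions for illustration.

```python
import math
import random

def seasonal_naive_forecast(series, period, horizon):
    """Forecast by repeating the last observed seasonal cycle."""
    last_cycle = series[-period:]
    return [last_cycle[i % period] for i in range(horizon)]

random.seed(0)
period, cycles = 24, 14
# synthetic hourly path-utilization series: daily cycle plus noise
series = [50 + 30 * math.sin(2 * math.pi * (t % period) / period)
          + random.gauss(0, 2) for t in range(period * cycles)]

train, test = series[:-period], series[-period:]
forecast = seasonal_naive_forecast(train, period, period)

rmse = math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, test)) / period)
mean = sum(train) / len(train)
std = math.sqrt(sum((x - mean) ** 2 for x in train) / len(train))
print(rmse < std)  # the accuracy criterion stated in the abstract
```

The seasonal-naive baseline easily meets the within-one-standard-deviation criterion here because the synthetic series is dominated by its daily cycle; STL + ARIMA additionally models the remainder component.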

  11. An acoustical model based monitoring network

    NARCIS (Netherlands)

    Wessels, P.W.; Basten, T.G.H.; Eerden, F.J.M. van der

    2010-01-01

    In this paper the approach for an acoustical model based monitoring network is demonstrated. This network is capable of reconstructing a noise map, based on the combination of measured sound levels and an acoustic model of the area. By pre-calculating the sound attenuation within the network the

  12. Using AERONET to complement irradiance networks on the validation of satellite-based estimations

    Science.gov (United States)

    Oumbe, A.; Bru, H.; Ghedira, H.; Chiesa, M.; Blanc, P.; Wald, L.

    2012-12-01

    Long-term measurements of surface solar irradiance (SSI) are essential for predicting the production of solar energy conversion systems. Ground-based SSIs are also needed for validation and calibration of models which convert satellite images into down-welling irradiances. Unfortunately, well-controlled data are publicly available for only a limited number of locations, especially when it comes to beam normal irradiance (BNI). In the Middle East particularly, there is only one publicly available research-class station: the Sede Boqer station, monitored by the BSRN (Baseline Surface Radiation Network). Thus, estimations of SSIs have been so far difficult to validate in this region. Besides irradiance networks, the AERONET (Aerosol Robotic Network) program provides long-term, publicly accessible sun photometer measurements. Its main goal is to provide validation data for satellite retrievals of aerosol optical properties. Various atmospheric properties are measured: aerosol optical depth at several wavelengths, water vapor amount, Angstrom coefficients. These data can be utilized for computation of SSI in cloudless sky by means of a radiative transfer model (RTM). The appropriate conversion of AERONET atmospheric properties into irradiances would provide additional in-situ irradiance data. In this work, we select the AERONET data which are relevant for irradiance calculation, compute the direct and global irradiances using the RTM LibRadTran and validate the outcomes with nearest actual irradiance measurements. The comparisons are made in the Middle East region. At Sede Boqer where AERONET and BSRN measurements are simultaneously available, the standard deviation obtained is only 6% for BNI and 5% for GHI (global horizontal irradiance) between the computed and the measured hourly mean irradiances (see the attached figure). When the AERONET and BSRN stations considered are 100 km away, the standard deviation between actually measured and AERONET-derived irradiances

  13. An adaptive complex network model for brain functional networks.

    Directory of Open Access Journals (Sweden)

    Ignacio J Gomez Portillo

    Full Text Available Brain functional networks are graph representations of activity in the brain, where the vertices represent anatomical regions and the edges their functional connectivity. These networks present a robust small-world topological structure, characterized by highly integrated modules connected sparsely by long-range links. Recent studies showed that other topological properties such as the degree distribution and the presence (or absence) of a hierarchical structure are not robust, and show different intriguing behaviors. In order to understand the basic ingredients necessary for the emergence of these complex network structures we present an adaptive complex network model for human brain functional networks. The microscopic units of the model are dynamical nodes that represent active regions of the brain, whose interaction gives rise to complex network structures. The links between the nodes are chosen following an adaptive algorithm that establishes connections between dynamical elements with similar internal states. We show that the model is able to describe topological characteristics of human brain networks obtained from functional magnetic resonance imaging studies. In particular, when the dynamical rules of the model allow for integrated processing over the entire network, scale-free non-hierarchical networks with well-defined communities emerge. On the other hand, when the dynamical rules restrict the information to a local neighborhood, communities cluster together into larger ones, giving rise to a hierarchical structure with a truncated power-law degree distribution.
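The adaptive rule described in the abstract, linking dynamical elements with similar internal states, can be sketched as follows (a minimal illustration with assumed node counts and noise, not the authors' model):

```python
import random

random.seed(1)
n_nodes, target_links = 60, 120
state = [random.random() for _ in range(n_nodes)]  # internal node states

edges = set()
for _ in range(5000):
    if len(edges) >= target_links:
        break
    i = random.randrange(n_nodes)
    # pick the partner with the most similar state, plus a little noise
    j = min((k for k in range(n_nodes) if k != i),
            key=lambda k: abs(state[i] - state[k]) + random.uniform(0, 0.05))
    edges.add((min(i, j), max(i, j)))

avg_gap = sum(abs(state[i] - state[j]) for i, j in edges) / len(edges)
print(avg_gap < 1 / 3)  # far below the ~1/3 mean gap of random pairing
```

The average state difference across realized links ends up far below the 1/3 expected for uniformly random pairs, which is the similarity-driven wiring the model relies on.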

  14. Grand canonical validation of the bipartite international trade network

    Science.gov (United States)

    Straka, Mika J.; Caldarelli, Guido; Saracco, Fabio

    2017-08-01

    Devising strategies for economic development in a globally competitive landscape requires a solid and unbiased understanding of countries' technological advancements and similarities among export products. Both can be addressed through the bipartite representation of the International Trade Network. In this paper, we apply the recently proposed grand canonical projection algorithm to uncover country and product communities. Contrary to past endeavors, our methodology, based on information theory, creates monopartite projections in an unbiased and analytically tractable way. Single links between countries or products represent statistically significant signals, which are not accounted for by null models such as the bipartite configuration model. We find stable country communities reflecting the socioeconomic distinction in developed, newly industrialized, and developing countries. Furthermore, we observe product clusters based on the aforementioned country groups. Our analysis reveals the existence of a complicated structure in the bipartite International Trade Network: apart from the diversification of export baskets from the most basic to the most exclusive products, we observe a statistically significant signal of an export specialization mechanism towards more sophisticated products.
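A minimal sketch of the validation idea, testing whether two countries share more export products than a degree-preserving null model predicts, can use a hypergeometric tail probability (a simplification of the grand canonical / bipartite configuration model machinery; the country-product matrix is invented for illustration):

```python
from math import comb

# Toy country-product matrix (rows: countries, cols: products)
M = [
    [1, 1, 1, 1, 0, 0],  # country A: diversified basket
    [1, 1, 1, 0, 0, 0],  # country B: basket similar to A
    [0, 0, 0, 0, 1, 1],  # country C: specialized, disjoint basket
]
n_products = len(M[0])

def overlap(u, v):
    """Number of products exported by both countries."""
    return sum(a & b for a, b in zip(u, v))

def p_value(u, v):
    """P(overlap >= observed) under a hypergeometric null that fixes
    the two countries' basket sizes (a simplified stand-in for the
    bipartite configuration model used in the paper)."""
    du, dv, obs = sum(u), sum(v), overlap(u, v)
    total = comb(n_products, dv)
    return sum(comb(du, k) * comb(n_products - du, dv - k)
               for k in range(obs, min(du, dv) + 1)) / total

print(round(p_value(M[0], M[1]), 3))  # → 0.2  (A-B overlap, small p)
print(round(p_value(M[0], M[2]), 3))  # → 1.0  (A-C, no signal)
```

Only links whose p-value survives a significance threshold would be kept in the validated monopartite projection.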

  15. Validation of Space Weather Models at Community Coordinated Modeling Center

    Science.gov (United States)

    Kuznetsova, M. M.; Hesse, M.; Pulkkinen, A.; Maddox, M.; Rastaetter, L.; Berrios, D.; Zheng, Y.; MacNeice, P. J.; Shim, J.; Taktakishvili, A.

    2011-01-01

    The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership to support the research and developmental work necessary to substantially increase space weather modeling capabilities and to facilitate advanced model deployment in forecasting operations. Space weather models and coupled model chains hosted at the CCMC range from the solar corona to the Earth's upper atmosphere. CCMC has developed a number of real-time modeling systems, as well as a large number of modeling and data products tailored to address the space weather needs of NASA's robotic missions. The CCMC conducts unbiased model testing and validation and evaluates model readiness for the operational environment. CCMC has been leading recent comprehensive modeling challenges under the GEM, CEDAR and SHINE programs. The presentation will focus on experience in carrying out comprehensive and systematic validation of large sets of space weather models.

  16. SPR Hydrostatic Column Model Verification and Validation.

    Energy Technology Data Exchange (ETDEWEB)

    Bettin, Giorgia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lord, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rudeen, David Keith [Gram, Inc. Albuquerque, NM (United States)

    2015-10-01

    A Hydrostatic Column Model (HCM) was developed to help differentiate between normal "tight" well behavior and small-leak behavior under nitrogen for testing the pressure integrity of crude oil storage wells at the U.S. Strategic Petroleum Reserve. This effort was motivated by steady, yet distinct, pressure behavior of a series of Big Hill caverns that have been placed under nitrogen for extended periods of time. This report describes the HCM model, its functional requirements, the model structure and the verification and validation process. Different modes of operation are also described, which illustrate how the software can be used to model extended nitrogen monitoring and Mechanical Integrity Tests by predicting wellhead pressures along with nitrogen interface movements. Model verification has shown that the program runs correctly and is implemented as intended. The cavern BH101 long-term nitrogen test was used to validate the model, which showed very good agreement with measured data. This supports the claim that the model is, in fact, capturing the relevant physical phenomena and can be used to make accurate predictions of both wellhead pressure and interface movements.

  17. Neural network models of learning and categorization in multigame experiments

    Directory of Open Access Journals (Sweden)

    Davide eMarchiori

    2011-12-01

    Full Text Available Previous research has shown that regret-driven neural networks predict behavior in repeated completely mixed games remarkably well, substantially matching the performance of the most accurate established models of learning. This result prompts the question of what the added value of modeling learning through neural networks is. We submit that this modeling approach allows for models that are able to distinguish among, and respond differently to, different payoff structures. Moreover, the process of categorization of a game is implicitly carried out by these models, without the need of any external explicit theory of similarity between games. To validate our claims, we designed and ran two multigame experiments in which subjects faced, in random sequence, different instances of two completely mixed 2x2 games. Then, we tested two regret-driven neural network models on our experimental data, and compared their performance with that of other established models of learning and Nash equilibrium.
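Regret-driven learning of the kind discussed above can be illustrated with generic regret matching in a 2x2 matching-pennies game (a sketch of the regret principle, not the paper's neural-network models; the opponent's bias is an assumption for illustration):

```python
import random

random.seed(3)
payoff = [[1, -1], [-1, 1]]  # row player's payoffs in matching pennies

regret = [0.0, 0.0]  # cumulative regret for not having played each action
plays = [0, 0]
for _ in range(2000):
    total = max(regret[0], 0.0) + max(regret[1], 0.0)
    p0 = max(regret[0], 0.0) / total if total > 0 else 0.5
    action = 0 if random.random() < p0 else 1
    opponent = 0 if random.random() < 0.9 else 1  # biased opponent
    plays[action] += 1
    # update regret: payoff forgone by not having played each action
    for a in range(2):
        regret[a] += payoff[a][opponent] - payoff[action][opponent]

print(plays[0] > plays[1])  # regret matching learns to exploit the bias
```

Because the opponent leans heavily on action 0, the forgone-payoff accounting quickly pushes play toward the matching best response.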

  18. Modeling gene regulatory networks: A network simplification algorithm

    Science.gov (United States)

    Ferreira, Luiz Henrique O.; de Castro, Maria Clicia S.; da Silva, Fabricio A. B.

    2016-12-01

    Boolean networks have been used for some time to model Gene Regulatory Networks (GRNs), which describe cell functions. Those models can help biologists to make predictions, prognoses and even specialized treatment decisions when some disturbance of the GRN leads to a diseased condition. However, the amount of information related to a GRN can be huge, making the task of inferring its Boolean network representation quite a challenge. The method shown here takes into account information about the interactome to build a network, where each node represents a protein, and uses the entropy of each node as a key to reduce the size of the network, allowing the further inference process to focus only on the main protein hubs, the ones with the most potential to interfere in overall network behavior.
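The entropy-based reduction step can be sketched directly: compute each node's Shannon entropy over observed Boolean states and prune near-constant nodes so inference focuses on the active hubs. The state sequences and threshold below are invented for illustration, not real interactome data.

```python
import math
from collections import Counter

def entropy(states):
    """Shannon entropy (bits) of a node's observed state sequence."""
    counts = Counter(states)
    total = len(states)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# Hypothetical observed Boolean states per protein node
observations = {
    "hub_A":    [0, 1, 1, 0, 1, 0, 1, 1],
    "hub_B":    [1, 0, 1, 1, 0, 0, 1, 0],
    "frozen_C": [0, 0, 0, 0, 0, 0, 0, 0],  # constant: no information
    "frozen_D": [1, 1, 1, 1, 1, 1, 1, 1],  # constant: no information
}

threshold = 0.5  # bits; an assumed cutoff for this sketch
kept = [name for name, s in observations.items() if entropy(s) > threshold]
print(sorted(kept))  # → ['hub_A', 'hub_B']
```

Constant nodes carry zero entropy and are dropped, shrinking the search space the Boolean-network inference must cover.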

  19. Modeling integrated cellular machinery using hybrid Petri-Boolean networks.

    Directory of Open Access Journals (Sweden)

    Natalie Berestovsky

    Full Text Available The behavior and phenotypic changes of cells are governed by a cellular circuitry that represents a set of biochemical reactions. Based on biological functions, this circuitry is divided into three types of networks, each encoding for a major biological process: signal transduction, transcription regulation, and metabolism. This division has generally enabled taming computational complexity dealing with the entire system, allowed for using modeling techniques that are specific to each of the components, and achieved separation of the different time scales at which reactions in each of the three networks occur. Nonetheless, with this division comes loss of information and power needed to elucidate certain cellular phenomena. Within the cell, these three types of networks work in tandem, and each produces signals and/or substances that are used by the others to process information and operate normally. Therefore, computational techniques for modeling integrated cellular machinery are needed. In this work, we propose an integrated hybrid model (IHM that combines Petri nets and Boolean networks to model integrated cellular networks. Coupled with a stochastic simulation mechanism, the model simulates the dynamics of the integrated network, and can be perturbed to generate testable hypotheses. Our model is qualitative and is mostly built upon knowledge from the literature and requires fine-tuning of very few parameters. We validated our model on two systems: the transcriptional regulation of glucose metabolism in human cells, and cellular osmoregulation in S. cerevisiae. The model produced results that are in very good agreement with experimental data, and produces valid hypotheses. The abstract nature of our model and the ease of its construction makes it a very good candidate for modeling integrated networks from qualitative data. The results it produces can guide the practitioner to zoom into components and interconnections and investigate them
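The coupling idea, a Boolean signalling layer gating transitions in a Petri-net layer, can be sketched in a few lines (an illustration of the hybrid principle, not the authors' IHM implementation):

```python
def boolean_step(signal_on, receptor_active):
    """One synchronous update of a toy Boolean signalling rule."""
    return signal_on and receptor_active

def fire(marking, enabled):
    """Petri-net transition: consume a substrate token, produce product,
    but only when the Boolean layer enables the transition."""
    if enabled and marking["substrate"] > 0:
        marking["substrate"] -= 1
        marking["product"] += 1
    return marking

marking = {"substrate": 3, "product": 0}
receptor_active = True
for step in range(5):
    signal = boolean_step(step >= 2, receptor_active)  # signal turns on at step 2
    marking = fire(marking, signal)

print(marking)  # → {'substrate': 0, 'product': 3}
```

Tokens only move once the signalling layer switches on, which is the cross-layer information flow that motivates integrating the two formalisms.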

  20. The model of social crypto-network

    Directory of Open Access Journals (Sweden)

    Марк Миколайович Орел

    2015-06-01

    Full Text Available The article presents a theoretical model of a social network with an enhanced privacy-policy mechanism. It covers the problems arising in the process of implementing this type of network and presents methods for solving them. A theoretical model of social networks with enhanced information-protection methods, based on information and communication blocks, was built.

  1. Entropy Characterization of Random Network Models

    Directory of Open Access Journals (Sweden)

    Pedro J. Zufiria

    2017-06-01

    Full Text Available This paper elaborates on the Random Network Model (RNM) as a mathematical framework for modelling and analyzing the generation of complex networks. Such a framework allows the analysis of the relationship between several network-characterizing features (link density, clustering coefficient, degree distribution, connectivity, etc.) and entropy-based complexity measures, providing new insight on the generation and characterization of random networks. Some theoretical and computational results illustrate the utility of the proposed framework.
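One relationship of the kind studied here, between link density and an entropy-based measure, can be checked numerically: the empirical degree-distribution entropy of an Erdos-Renyi sample is lower near p = 1 (nearly regular graph) than at intermediate density. Sizes and probabilities below are assumptions for illustration.

```python
import math
import random

def degree_entropy(degrees):
    """Shannon entropy (bits) of the empirical degree distribution."""
    n = len(degrees)
    counts = {}
    for d in degrees:
        counts[d] = counts.get(d, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def er_degrees(n, p, rng):
    """Degree sequence of one Erdos-Renyi G(n, p) sample."""
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                deg[i] += 1
                deg[j] += 1
    return deg

rng = random.Random(42)
h_dense = degree_entropy(er_degrees(200, 0.99, rng))  # nearly regular
h_mid = degree_entropy(er_degrees(200, 0.5, rng))     # degrees spread out
print(h_mid > h_dense)
```

The binomial degree distribution is much wider at p = 0.5 than at p = 0.99, so the mid-density sample carries higher degree entropy.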

  2. The model of social crypto-network

    OpenAIRE

    Марк Миколайович Орел

    2015-01-01

    The article presents a theoretical model of a social network with an enhanced privacy-policy mechanism. It covers the problems arising in the process of implementing this type of network and presents methods for solving them. A theoretical model of social networks with enhanced information-protection methods, based on information and communication blocks, was built.

  3. Modeling Diagnostic Assessments with Bayesian Networks

    Science.gov (United States)

    Almond, Russell G.; DiBello, Louis V.; Moulder, Brad; Zapata-Rivera, Juan-Diego

    2007-01-01

    This paper defines Bayesian network models and examines their applications to IRT-based cognitive diagnostic modeling. These models are especially suited to building inference engines designed to be synchronous with the finer grained student models that arise in skills diagnostic assessment. Aspects of the theory and use of Bayesian network models…

  4. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability; accuracy depended on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive
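The deviation metrics central to predictive validation can be sketched on synthetic weekly curves (invented numbers, not the study's data): the simulated epidemic is scored against the observed one on peak timing and absolute intensity.

```python
observed = [2, 5, 11, 24, 40, 31, 18, 9, 4, 1]   # weekly case counts
simulated = [3, 6, 13, 27, 37, 29, 16, 8, 3, 1]  # model output (synthetic)

# error in peak-week timing
peak_error_weeks = abs(observed.index(max(observed))
                       - simulated.index(max(simulated)))
# relative error in absolute intensity at the peak
intensity_error = abs(max(observed) - max(simulated)) / max(observed)

print(peak_error_weeks)           # → 0
print(round(intensity_error, 3))  # → 0.075
```

In the study these deviations are computed for forecasts made several weeks before the peak, so small values on held-out seasons are what "good predictive ability" means operationally.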

  5. Seclazone Reactor Modeling And Experimental Validation

    Energy Technology Data Exchange (ETDEWEB)

    Osinga, T. [ETH-Zuerich (Switzerland); Olalde, G. [CNRS Odeillo (France); Steinfeld, A. [PSI and ETHZ (Switzerland)

    2005-03-01

    A numerical model is formulated for the SOLZINC solar chemical reactor for the production of Zn by carbothermal reduction of ZnO. The model involves solving, by the finite-volume technique, a 1D unsteady state energy equation that couples heat transfer to the chemical kinetics for a shrinking packed bed exposed to thermal radiation. Validation is accomplished by comparison with experimentally measured temperature profiles and Zn production rates as a function of time, obtained for a 5-kW solar reactor tested at PSI's solar furnace. (author)

  6. Bayesian Network Webserver: a comprehensive tool for biological network modeling.

    Science.gov (United States)

    Ziebarth, Jesse D; Bhattacharya, Anindya; Cui, Yan

    2013-11-01

    The Bayesian Network Webserver (BNW) is a platform for comprehensive network modeling of systems genetics and other biological datasets. It allows users to quickly and seamlessly upload a dataset, learn the structure of the network model that best explains the data and use the model to understand relationships between network variables. Many datasets, including those used to create genetic network models, contain both discrete (e.g. genotype) and continuous (e.g. gene expression traits) variables, and BNW allows for modeling hybrid datasets. Users of BNW can incorporate prior knowledge during structure learning through an easy-to-use structural constraint interface. After structure learning, users are immediately presented with an interactive network model, which can be used to make testable hypotheses about network relationships. BNW, including a downloadable structure learning package, is available at http://compbio.uthsc.edu/BNW. (The BNW interface for adding structural constraints uses HTML5 features that are not supported by the current version of Internet Explorer. We suggest using other browsers (e.g. Google Chrome or Mozilla Firefox) when accessing BNW). ycui2@uthsc.edu. Supplementary data are available at Bioinformatics online.

  7. Validation of Calibrated Energy Models: Common Errors

    Directory of Open Access Journals (Sweden)

    Germán Ramos Ruiz

    2017-10-01

    Full Text Available Nowadays, there is growing interest in all the smart technologies that provide us with information and knowledge about the human environment. In the energy field, thanks to the amount of data received from smart meters and devices and the progress made in both energy software and computers, the quality of energy models is gradually improving and, hence, also the suitability of Energy Conservation Measures (ECMs. For this reason, the measurement of the accuracy of building energy models is an important task, because once the model is validated through a calibration procedure, it can be used, for example, to apply and study different strategies to reduce its energy consumption in maintaining human comfort. There are several agencies that have developed guidelines and methodologies to establish a measure of the accuracy of these models, and the most widely recognized are: ASHRAE Guideline 14-2014, the International Performance Measurement and Verification Protocol (IPMVP and the Federal Energy Management Program (FEMP. This article intends to shed light on these validation measurements (uncertainty indices by focusing on the typical mistakes made, as these errors could produce a false belief that the models used are calibrated.
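The uncertainty indices referenced above can be computed directly; the sketch below follows the commonly cited ASHRAE Guideline 14 monthly criteria (|NMBE| <= 5%, CV(RMSE) <= 15%), using an n - 1 denominator and synthetic monthly energy data as assumptions for illustration.

```python
import math

def nmbe(measured, simulated):
    """Normalized Mean Bias Error (%), ASHRAE Guideline 14 style
    (n - p denominator with p = 1 assumed here)."""
    n = len(measured)
    mean = sum(measured) / n
    return 100 * sum(m - s for m, s in zip(measured, simulated)) / ((n - 1) * mean)

def cv_rmse(measured, simulated):
    """Coefficient of Variation of the RMSE (%)."""
    n = len(measured)
    mean = sum(measured) / n
    rmse = math.sqrt(sum((m - s) ** 2
                         for m, s in zip(measured, simulated)) / (n - 1))
    return 100 * rmse / mean

# Synthetic monthly energy use (kWh); not real building data
measured  = [820, 760, 700, 610, 540, 500, 520, 560, 640, 720, 780, 830]
simulated = [800, 770, 690, 620, 530, 510, 515, 555, 650, 710, 790, 820]

calibrated = (abs(nmbe(measured, simulated)) <= 5
              and cv_rmse(measured, simulated) <= 15)
print(calibrated)  # → True
```

A model can pass NMBE (bias cancels over the year) while failing CV(RMSE), which is exactly the kind of false belief about calibration the article warns against; both indices should always be checked together.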

  8. Building a multilevel modeling network for lipid processing systems

    DEFF Research Database (Denmark)

    Mustaffa, Azizul Azri; Díaz Tovar, Carlos Axel; Hukkerikar, Amol

    2011-01-01

    The aim of this work is to present the development of a computer-aided multilevel modeling network for the systematic design and analysis of processes employing lipid technologies. This is achieved by decomposing the problem into four levels of modeling: i) pure component property modeling...... and a lipid-database of collected experimental data from industry and generated data from validated predictive property models, as well as modeling tools for fast adoption-analysis of property prediction models; ii) modeling of phase behavior of relevant lipid mixtures using the UNIFAC-CI model, development......; iii)...... data collected from existing process plants, and application of validated models in design and analysis of unit operations; iv) the information and models developed are used as building blocks in the development of methods and tools for computer-aided synthesis and design of process flowsheets (CAFD)......

  9. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    Science.gov (United States)

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead

  10. Bayes factor of model selection validates FLMP.

    Science.gov (United States)

    Massaro, D W; Cohen, M M; Campbell, C S; Rodriguez, T

    2001-03-01

    The fuzzy logical model of perception (FLMP; Massaro, 1998) has been extremely successful at describing performance across a wide range of ecological domains as well as for a broad spectrum of individuals. An important issue is whether this descriptive ability is theoretically informative or whether it simply reflects the model's ability to describe a wider range of possible outcomes. Previous tests and contrasts of this model with others have been adjudicated on the basis of both a root mean square deviation (RMSD) for goodness-of-fit and an observed RMSD relative to a benchmark RMSD if the model was indeed correct. We extend the model evaluation by another technique called Bayes factor (Kass & Raftery, 1995; Myung & Pitt, 1997). The FLMP maintains its significant descriptive advantage with this new criterion. In a series of simulations, the RMSD also accurately recovers the correct model under actual experimental conditions. When additional variability was added to the results, the models continued to be recoverable. In addition to its descriptive accuracy, RMSD should not be ignored in model testing because it can be justified theoretically and provides a direct and meaningful index of goodness-of-fit. We also make the case for the necessity of free parameters in model testing. Finally, using Newton's law of universal gravitation as an analogy, we argue that it might not be valid to expect a model's fit to be invariant across the whole range of possible parameter values for the model. We advocate that model selection should be analogous to perceptual judgment, which is characterized by the optimal use of multiple sources of information (e.g., the FLMP). Conclusions about models should be based on several selection criteria.
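The RMSD index defended above as a goodness-of-fit measure is straightforward to compute. A minimal sketch (the response proportions below are invented for illustration, not data from the study):

```python
import math

def rmsd(predicted, observed):
    """Root mean square deviation between model predictions and data."""
    assert len(predicted) == len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(predicted))

# Hypothetical response proportions: model predictions vs. observations.
observed  = [0.05, 0.20, 0.50, 0.80, 0.95]
predicted = [0.04, 0.22, 0.48, 0.81, 0.93]
print(round(rmsd(predicted, observed), 4))
```

A smaller RMSD indicates a closer fit; model selection as discussed in the record would weigh this index alongside the Bayes factor rather than rely on either alone.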

  11. Object Oriented Modeling Of Social Networks

    NARCIS (Netherlands)

    Zeggelink, Evelien P.H.; Oosten, Reinier van; Stokman, Frans N.

    1996-01-01

    The aim of this paper is to explain principles of object oriented modeling in the scope of modeling dynamic social networks. As such, the approach of object oriented modeling is advocated within the field of organizational research that focuses on networks. We provide a brief introduction into the

  12. Bayesian estimation of the network autocorrelation model

    NARCIS (Netherlands)

    Dittrich, D.; Leenders, R.T.A.J.; Mulder, J.

    2017-01-01

    The network autocorrelation model has been extensively used by researchers interested in modeling social influence effects in social networks. The most common inferential method for the model is classical maximum likelihood estimation. This approach, however, has known problems such as negative bias of
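The network autocorrelation model described above takes the form y = rho*W*y + beta*x + eps. A minimal sketch of simulating it with a row-normalized weight matrix, solved by fixed-point iteration rather than a matrix inverse (the three-node network and parameter values are invented for illustration):

```python
def simulate_autocorrelation(W, x, beta, rho, eps, iters=200):
    """Solve y = rho*W*y + beta*x + eps by fixed-point iteration.

    W is a row-normalized weight matrix (list of lists), so the
    iteration converges for |rho| < 1."""
    n = len(x)
    y = [0.0] * n
    for _ in range(iters):
        y = [rho * sum(W[i][j] * y[j] for j in range(n)) + beta * x[i] + eps[i]
             for i in range(n)]
    return y

# Three-node line network, row-normalized: node 1 is tied to nodes 0 and 2.
W = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 1.0, 0.0]]
y = simulate_autocorrelation(W, x=[1.0, 2.0, 3.0], beta=1.0, rho=0.4,
                             eps=[0.0, 0.0, 0.0])
```

Classical maximum likelihood or the Bayesian estimation proposed in the record would instead infer rho and beta from observed y; the sketch only shows the data-generating side of the model.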

  13. Agent-based modeling and network dynamics

    CERN Document Server

    Namatame, Akira

    2016-01-01

    The book integrates agent-based modeling and network science. It is divided into three parts, namely, foundations, primary dynamics on and of social networks, and applications. The book begins with the network origin of agent-based models, known as cellular automata, and introduces a number of classic models, such as Schelling’s segregation model and Axelrod’s spatial game. The essence of the foundations part is the network-based agent-based models in which agents follow network-based decision rules. Under the influence of the substantial progress in network science in the late 1990s, these models have been extended from using lattices to using small-world networks, scale-free networks, etc. The book also shows that modern network science, mainly driven by game-theorists and sociophysicists, has inspired agent-based social scientists to develop alternative formation algorithms, known as agent-based social networks. The book reviews a number of pioneering and representative models in this family. Upon the gi...
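Schelling's segregation model, mentioned above as one of the classic models, can be sketched in a few lines. This is a deliberately minimal one-dimensional variant (ring topology, two agent types, a single happiness threshold), not the book's own formulation:

```python
import random

def schelling_step(grid, threshold=0.5):
    """One sweep of a minimal 1-D Schelling model on a ring.

    Cells hold 'A', 'B', or None (vacant). An agent is unhappy when
    fewer than `threshold` of its occupied ring neighbours share its
    type, and relocates to a random vacant cell."""
    n = len(grid)
    for i in range(n):
        agent = grid[i]
        if agent is None:
            continue
        neigh = [grid[(i - 1) % n], grid[(i + 1) % n]]
        same = sum(1 for v in neigh if v == agent)
        occupied = sum(1 for v in neigh if v is not None)
        if occupied and same / occupied < threshold:
            vacant = [j for j in range(n) if grid[j] is None]
            if vacant:
                j = random.choice(vacant)
                grid[j], grid[i] = agent, None
    return grid
```

Repeated sweeps tend to cluster like-typed agents even with a mild threshold, which is the model's well-known emergent-segregation result.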

  14. A Simplified Network Model for Travel Time Reliability Analysis in a Road Network

    Directory of Open Access Journals (Sweden)

    Kenetsu Uchida

    2017-01-01

    Full Text Available This paper proposes a simplified network model which analyzes travel time reliability in a road network. A risk-averse driver is assumed in the simplified model. The risk-averse driver chooses a path by taking into account both the path travel time variance and the mean path travel time. The uncertainty addressed in this model is that of traffic flows (i.e., stochastic demand flows). In the simplified network model, the path travel time variance is not calculated from the travel time covariances between all pairs of links in the network; instead, it is calculated from the travel time covariances between adjacent links only. Numerical experiments are carried out to illustrate the applicability and validity of the proposed model. The experiments introduce the path choice behavior of a risk-neutral driver and several types of risk-averse drivers. It is shown that the mean link flows calculated by introducing the risk-neutral driver differ as a whole from those calculated by introducing several types of risk-averse drivers. It is also shown that the mean link flows calculated by the simplified network model are almost the same as the flows calculated by using the exact path travel time variance.
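The paper's simplification replaces the full covariance sum with covariances between consecutive links only. A sketch of that computation (the link means, variances, and adjacent-link covariances below are invented numbers):

```python
def path_time_stats(means, variances, adj_cov):
    """Mean and simplified variance of a path's travel time.

    means/variances: per-link travel time statistics along the path.
    adj_cov[i] is Cov(T_i, T_{i+1}) for consecutive links only; in the
    simplified model, covariances between non-adjacent links are
    ignored, so Var(path) = sum of link variances + 2 * sum of
    adjacent-link covariances."""
    mean = sum(means)
    var = sum(variances) + 2 * sum(adj_cov)
    return mean, var

m, v = path_time_stats([10, 12, 8], [4, 9, 1], [1.5, 0.5])
print(m, v)  # 30 18
```

A risk-averse path choice would then score each path by something like mean + lambda * sqrt(var), with lambda reflecting the driver's risk aversion (the scoring form is an illustrative assumption, not taken from the paper).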

  15. Modeling data throughput on communication networks

    Energy Technology Data Exchange (ETDEWEB)

    Eldridge, J.M.

    1993-11-01

    New challenges in high performance computing and communications are driving the need for fast, geographically distributed networks. Applications such as modeling physical phenomena, interactive visualization, large data set transfers, and distributed supercomputing require high performance networking [St89][Ra92][Ca92]. One measure of a communication network's performance is the time it takes to complete a task -- such as transferring a data file or displaying a graphics image on a remote monitor. Throughput, defined as the ratio of the number of useful data bits transmitted to the time required to transmit those bits, is a useful gauge of how well a communication system meets this performance measure. This paper develops and describes an analytical model of throughput. The model is a tool network designers can use to predict network throughput. It also provides insight into those parts of the network that act as a performance bottleneck.
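A throughput model in the spirit described above can be sketched as useful bits divided by total transfer time. The single-transfer, single-round-trip formula below is an illustrative simplification, not the paper's actual model:

```python
def throughput(payload_bits, bandwidth_bps, rtt_s, overhead_bits=0):
    """Effective throughput = useful bits / total transfer time.

    Total time here is the serialization time of payload plus protocol
    overhead, plus one round-trip of latency (a deliberately simple
    single-transfer model)."""
    transfer_time = (payload_bits + overhead_bits) / bandwidth_bps + rtt_s
    return payload_bits / transfer_time
```

Even this toy form shows the bottleneck insight the record mentions: for small transfers the round-trip latency dominates, while for large transfers throughput approaches the raw link bandwidth.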

  16. [Catalonia's primary healthcare accreditation model: a valid model].

    Science.gov (United States)

    Davins, Josep; Gens, Montserrat; Pareja, Clara; Guzmán, Ramón; Marquet, Roser; Vallès, Roser

    2014-07-01

    There are few experiences of accreditation models validated by primary care teams (EAP). The aim of this study was to detail the process of design, development, and subsequent validation of the consensus EAP accreditation model of Catalonia. An Operating Committee of the Health Department of Catalonia revised models proposed by the European Foundation for Quality Management, the Joint Commission International and the Institut Català de la Salut, and proposed 628 essential standards to the technical group (25 experts in primary care and quality of care) to establish consensus standards. The consensus document was piloted in 30 EAP for the purpose of validating the contents, testing standards and identifying evidence. Finally, we carried out a survey to assess acceptance and validation of the document. The Technical Group agreed on a total of 414 essential standards. The pilot selected a total of 379. Mean compliance with the standards of the final document in the 30 EAP was 70.4%. The results standards had the lowest fulfilment percentage. The survey showed that 83% of the EAP found it useful and 78% found the content of the accreditation manual suitable as a tool to assess the quality of the EAP and identify opportunities for improvement. On the downside, they highlighted its complexity and laboriousness. We have a model that fits the reality of the EAP and covers all relevant issues for the functioning of an excellent EAP. The model developed in Catalonia is easy to understand. Copyright © 2014. Published by Elsevier Espana.

  17. Assessment model validity document FARF31

    Energy Technology Data Exchange (ETDEWEB)

    Elert, Mark; Gylling Bjoern; Lindgren, Maria [Kemakta Konsult AB, Stockholm (Sweden)

    2004-08-01

    The prime goal of model validation is to build confidence in the model concept and to show that the model is fit for its intended purpose. In other words: Does the model predict transport in fractured rock adequately to be used in repository performance assessments? Are the results reasonable for the type of modelling tasks the model is designed for? Commonly, in performance assessments a large number of realisations of flow and transport are made to cover the associated uncertainties. Thus, flow and transport including radioactive chain decay are preferably calculated in the same model framework. A rather sophisticated concept is necessary to model flow and radionuclide transport in the near field and far field of a deep repository, also including radioactive chain decay. In order to avoid excessively long computational times there is a need for well-based simplifications. For this reason, the far field code FARF31 is kept relatively simple, and calculates transport using averaged entities to represent the most important processes. FARF31 has been shown to be suitable for the performance assessments within the SKB studies, e.g. SR 97. Among its advantages are that it is a fast, simple and robust code, which enables handling of many realisations with wide spread in parameters in combination with chain decay of radionuclides. Being a component in the model chain PROPER, it is easy to assign statistical distributions to the input parameters. Due to the formulation of the advection-dispersion equation in FARF31 it is possible to perform the groundwater flow calculations separately. The basis for the modelling is a stream tube, i.e. a volume of rock including fractures with flowing water, with the walls of the imaginary stream tube defined by streamlines. The transport within the stream tube is described using a dual porosity continuum approach, where it is assumed that the rock can be divided into two distinct domains with different types of porosity
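FARF31's stream-tube transport is far richer than any toy (dual porosity, matrix interaction, chain decay), but the advection-dispersion backbone it builds on can be illustrated with the classical one-dimensional solution for a continuous inlet source. The sketch below uses the simplified Ogata-Banks expression, neglecting its usually small second term; it is not FARF31's formulation:

```python
import math

def ogata_banks(x, t, v, D):
    """Relative concentration C/C0 for 1-D advection-dispersion with a
    continuous source at x = 0 (simplified Ogata-Banks solution).

    x: distance along the stream tube, t: time, v: advection velocity,
    D: dispersion coefficient."""
    return 0.5 * math.erfc((x - v * t) / (2.0 * math.sqrt(D * t)))
```

At the advective front (x = v*t) the relative concentration is exactly 0.5, and the breakthrough curve steepens as D decreases, which is the qualitative behaviour an averaged far-field transport code must reproduce.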

  18. Using consensus bayesian network to model the reactive oxygen species regulatory pathway.

    Directory of Open Access Journals (Sweden)

    Liangdong Hu

    Full Text Available The Bayesian network is one of the most successful graph models for representing the reactive oxygen species regulatory pathway. With the increasing number of microarray measurements, it is possible to construct a Bayesian network from microarray data directly. Although a large number of Bayesian network learning algorithms have been developed, when they are applied to learn Bayesian networks from microarray data, the accuracies are low because the databases used for learning contain too few microarray measurements. In this paper, we propose a consensus Bayesian network, constructed by combining Bayesian networks from the relevant literature with Bayesian networks learned from microarray data. It achieves a higher accuracy than Bayesian networks learned from a single database. In the experiments, we validated the Bayesian network combination algorithm on several classic machine learning databases and used the consensus Bayesian network to model the Escherichia coli ROS pathway.

  19. Using consensus bayesian network to model the reactive oxygen species regulatory pathway.

    Science.gov (United States)

    Hu, Liangdong; Wang, Limin

    2013-01-01

    The Bayesian network is one of the most successful graph models for representing the reactive oxygen species regulatory pathway. With the increasing number of microarray measurements, it is possible to construct a Bayesian network from microarray data directly. Although a large number of Bayesian network learning algorithms have been developed, when they are applied to learn Bayesian networks from microarray data, the accuracies are low because the databases used for learning contain too few microarray measurements. In this paper, we propose a consensus Bayesian network, constructed by combining Bayesian networks from the relevant literature with Bayesian networks learned from microarray data. It achieves a higher accuracy than Bayesian networks learned from a single database. In the experiments, we validated the Bayesian network combination algorithm on several classic machine learning databases and used the consensus Bayesian network to model the Escherichia coli ROS pathway.
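The combination step can be illustrated by simple majority voting over directed edges. This is an illustrative stand-in, not necessarily the paper's actual combination algorithm:

```python
def consensus_network(networks, min_votes=None):
    """Combine several Bayesian-network structures by edge voting.

    Each input network is a set of directed edges (parent, child).
    An edge is kept when it appears in at least `min_votes` input
    networks (default: a strict majority)."""
    if min_votes is None:
        min_votes = len(networks) // 2 + 1
    votes = {}
    for net in networks:
        for edge in net:
            votes[edge] = votes.get(edge, 0) + 1
    return {e for e, c in votes.items() if c >= min_votes}
```

In practice one would also need to break any directed cycles the vote produces and re-fit the conditional probability tables from data; the sketch covers only the structural vote.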

  20. HIV lipodystrophy case definition using artificial neural network modelling

    DEFF Research Database (Denmark)

    Ioannidis, John P A; Trikalinos, Thomas A; Law, Matthew

    2003-01-01

    OBJECTIVE: A case definition of HIV lipodystrophy has recently been developed from a combination of clinical, metabolic and imaging/body composition variables using logistic regression methods. We aimed to evaluate whether artificial neural networks could improve the diagnostic accuracy. METHODS......: The database of the case-control Lipodystrophy Case Definition Study was split into 504 subjects (265 with and 239 without lipodystrophy) used for training and 284 independent subjects (152 with and 132 without lipodystrophy) used for validation. Back-propagation neural networks with one or two middle layers...... were trained and validated. Results were compared against logistic regression models using the same information. RESULTS: Neural networks using clinical variables only (41 items) achieved consistently superior performance than logistic regression in terms of specificity, overall accuracy and area under...

  1. The meaning and validation of social support networks for close family of persons with advanced cancer

    Directory of Open Access Journals (Sweden)

    Sjolander Catarina

    2012-09-01

    Full Text Available Abstract Background To strengthen the mental well-being of close family of persons newly diagnosed as having cancer, it is necessary to acquire a greater understanding of their experiences of social support networks, so as to better assess what resources are available to them from such networks and what professional measures are required. The main aim of the present study was to explore the meaning of these networks for close family of adult persons in the early stage of treatment for advanced lung or gastrointestinal cancer. An additional aim was to validate the study’s empirical findings by means of the Finfgeld-Connett conceptual model for social support. The intention was to investigate whether these findings were in accordance with previous research in nursing. Methods Seventeen family members with a relative who 8–14 weeks earlier had been diagnosed as having lung or gastrointestinal cancer were interviewed. The data were subjected to qualitative latent content analysis and validated by means of identifying antecedents and critical attributes. Results The meaning or main attribute of the social support network was expressed by the theme Confirmation through togetherness, based on six subthemes covering emotional and, to a lesser extent, instrumental support. Confirmation through togetherness derived principally from information, understanding, encouragement, involvement and spiritual community. Three subthemes were identified as the antecedents to social support: Need of support, Desire for a deeper relationship with relatives, Network to turn to. Social support involves reciprocal exchange of verbal and non-verbal information provided mainly by lay persons. Conclusions The study provides knowledge of the antecedents and attributes of social support networks, particularly from the perspective of close family of adult persons with advanced lung or gastrointestinal cancer. There is a need for measurement instruments that could

  2. Settings in Social Networks : a Measurement Model

    NARCIS (Netherlands)

    Schweinberger, Michael; Snijders, Tom A.B.

    2003-01-01

    A class of statistical models is proposed that aims to recover latent settings structures in social networks. Settings may be regarded as clusters of vertices. The measurement model is based on two assumptions. (1) The observed network is generated by hierarchically nested latent transitive

  3. Settings in social networks : A measurement model

    NARCIS (Netherlands)

    Schweinberger, M; Snijders, TAB

    2003-01-01

    A class of statistical models is proposed that aims to recover latent settings structures in social networks. Settings may be regarded as clusters of vertices. The measurement model is based on two assumptions. (1) The observed network is generated by hierarchically nested latent transitive

  4. Spinal Cord Injury Model System Information Network

    Science.gov (United States)

    The University of Alabama at Birmingham Spinal Cord Injury Model System (UAB-SCIMS) maintains this Information Network ...

  5. Radio Channel Modeling in Body Area Networks

    NARCIS (Netherlands)

    An, L.; Bentum, Marinus Jan; Meijerink, Arjan; Scanlon, W.G.

    2009-01-01

    A body area network (BAN) is a network of body-worn or implanted electronic devices, including wireless sensors which can monitor body parameters or detect movements. One of the big challenges in BANs is propagation channel modeling. Channel models can be used to understand wave propagation

  6. Radio channel modeling in body area networks

    NARCIS (Netherlands)

    An, L.; Bentum, Marinus Jan; Meijerink, Arjan; Scanlon, W.G.

    2010-01-01

    A body area network (BAN) is a network of body-worn or implanted electronic devices, including wireless sensors which can monitor body parameters or detect movements. One of the big challenges in BANs is propagation channel modeling. Channel models can be used to understand wave propagation in

  7. Network interconnections: an architectural reference model

    NARCIS (Netherlands)

    Butscher, B.; Lenzini, L.; Morling, R.; Vissers, C.A.; Popescu-Zeletin, R.; van Sinderen, Marten J.; Heger, D.; Krueger, G.; Spaniol, O.; Zorn, W.

    1985-01-01

    One of the major problems in understanding the different approaches in interconnecting networks of different technologies is the lack of reference to a general model. The paper develops the rationales for a reference model of network interconnection and focuses on the architectural implications for

  8. Performance modeling of network data services

    Energy Technology Data Exchange (ETDEWEB)

    Haynes, R.A.; Pierson, L.G.

    1997-01-01

    Networks at major computational organizations are becoming increasingly complex. The introduction of large massively parallel computers and supercomputers with gigabyte memories is requiring greater and greater bandwidth for network data transfers to widely dispersed clients. For networks to provide adequate data transfer services to high performance computers and remote users connected to them, the networking components must be optimized from a combination of internal and external performance criteria. This paper describes research done at Sandia National Laboratories to model network data services and to visualize the flow of data from source to sink when using the data services.

  9. Learning Bayesian Network Model Structure from Data

    National Research Council Canada - National Science Library

    Margaritis, Dimitris

    2003-01-01

    In this thesis I address the important problem of the determination of the structure of directed statistical models, with the widely used class of Bayesian network models as a concrete vehicle of my ideas...

  10. Modeling Marine Electromagnetic Survey with Radial Basis Function Networks

    Directory of Open Access Journals (Sweden)

    Agus Arif

    2014-11-01

    Full Text Available A marine electromagnetic survey is an engineering endeavour to discover the location and dimension of a hydrocarbon layer under an ocean floor. In this kind of survey, an array of electric and magnetic receivers is located on the sea floor and records the scattered, refracted and reflected electromagnetic waves which have been transmitted by an electric dipole antenna towed by a vessel. The data recorded in the receivers must be processed and further analysed to estimate the hydrocarbon location and dimension. To conduct those analyses successfully, a radial basis function (RBF) network can be employed as a forward model of the input-output relationship of the data from a marine electromagnetic survey. This type of neural network works based on the distances between its inputs and the predetermined centres of some basis functions. A previous study modeled the same marine electromagnetic survey using another type of neural network, a multilayer perceptron (MLP) network. By comparing their validation and training performances (mean-squared errors and correlation coefficients), it is concluded that, in this case, the MLP network is comparatively better than the RBF network[1].[1] This manuscript is an extended version of our previous paper, entitled Radial Basis Function Networks for Modeling Marine Electromagnetic Survey, which had been presented at the 2011 International Conference on Electrical Engineering and Informatics, 17-19 July 2011, Bandung, Indonesia.
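The distance-based operation of an RBF network described above can be sketched as a Gaussian-basis forward pass. In a real survey model the centres, widths and weights would be fitted to the recorded data; the values in the example are arbitrary:

```python
import math

def rbf_forward(x, centers, widths, weights, bias=0.0):
    """Forward pass of an RBF network with Gaussian basis functions.

    output = bias + sum_i w_i * exp(-||x - c_i||^2 / (2 * s_i^2)),
    so each hidden unit responds according to the distance between the
    input x and its centre c_i."""
    out = bias
    for c, s, w in zip(centers, widths, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        out += w * math.exp(-d2 / (2.0 * s * s))
    return out
```

Training typically fixes the centres (e.g. by clustering the inputs) and then solves a linear least-squares problem for the output weights, which is one reason RBF networks are attractive as fast forward models.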

  11. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach, development and validation process of a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models were highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.

  12. Network models in economics and finance

    CERN Document Server

    Pardalos, Panos; Rassias, Themistocles

    2014-01-01

    Using network models to investigate the interconnectivity in modern economic systems allows researchers to better understand and explain some economic phenomena. This volume presents contributions by known experts and active researchers in economic and financial network modeling. Readers are provided with an understanding of the latest advances in network analysis as applied to economics, finance, corporate governance, and investments. Moreover, recent advances in market network analysis  that focus on influential techniques for market graph analysis are also examined. Young researchers will find this volume particularly useful in facilitating their introduction to this new and fascinating field. Professionals in economics, financial management, various technologies, and network analysis, will find the network models presented in this book beneficial in analyzing the interconnectivity in modern economic systems.

  13. Modelling the structure of complex networks

    DEFF Research Database (Denmark)

    Herlau, Tue

    A complex network is a system in which a discrete set of units interact in a quantifiable manner. Representing systems as complex networks has become increasingly popular in a variety of scientific fields including biology, social sciences and economics. Parallel to this development, complex networks have been independently studied as mathematical objects in their own right. As such, there has been both an increased demand for statistical methods for complex networks as well as a quickly growing mathematical literature on the subject. In this dissertation we explore aspects of modelling complex networks. The next chapters will treat some of the various symmetries, representer theorems and probabilistic structures often deployed in the modelling of complex networks, the construction of sampling methods and various network models. The introductory chapters will serve to provide context for the included written...

  14. A Network Formation Model Based on Subgraphs

    CERN Document Server

    Chandrasekhar, Arun

    2016-01-01

    We develop a new class of random-graph models for the statistical estimation of network formation that allow for substantial correlation in links. Various subgraphs (e.g., links, triangles, cliques, stars) are generated and their union results in a network. We provide estimation techniques for recovering the rates at which the underlying subgraphs were formed. We illustrate the models via a series of applications including testing for incentives to form cross-caste relationships in rural India, testing to see whether network structure is used to enforce risk-sharing, testing as to whether networks change in response to a community's exposure to microcredit, and show that these models significantly outperform stochastic block models in matching observed network characteristics. We also establish asymptotic properties of the models and various estimators, which requires proving a new Central Limit Theorem for correlated random variables.
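The generative idea above, a network formed as the union of independently generated subgraphs, can be sketched for the simplest two subgraph types, links and triangles. The uniform sampling probabilities below are illustrative choices and the sketch covers only generation, not the paper's estimation techniques:

```python
import random

def subgraph_union_graph(n, p_link, p_triangle, rng=random):
    """Sample a network as the union of independently formed subgraphs.

    Each node pair forms a link with probability p_link, and each node
    triple forms a triangle with probability p_triangle; the network is
    the union of all edges generated either way, which induces
    correlation in links that a plain Erdos-Renyi model lacks."""
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_link:
                edges.add((i, j))
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if rng.random() < p_triangle:
                    edges |= {(i, j), (j, k), (i, k)}
    return edges
```

Even a small p_triangle produces far more closed triads than a link-only model with the same edge count, which is the clustering behaviour this model class is designed to capture.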

  15. Validating Farmers' Indigenous Social Networks for Local Seed Supply in Central Rift Valley of Ethiopia.

    Science.gov (United States)

    Seboka, B.; Deressa, A.

    2000-01-01

    Indigenous social networks of Ethiopian farmers participate in seed exchange based on mutual interdependence and trust. A government-imposed extension program must validate the role of local seed systems in developing a national seed industry. (SK)

  16. Validating Farmers' Indigenous Social Networks for Local Seed Supply in Central Rift Valley of Ethiopia

    NARCIS (Netherlands)

    Seboka, B.; Deressa, A.

    2000-01-01

    Indigenous social networks of Ethiopian farmers participate in seed exchange based on mutual interdependence and trust. A government-imposed extension program must validate the role of local seed systems in developing a national seed industry

  17. Feature selection for anomaly–based network intrusion detection using cluster validity indices

    CSIR Research Space (South Africa)

    Naidoo, Tyrone

    2015-09-01

    Full Text Available data, which is rarely available in operational networks. It uses normalized cluster validity indices as an objective function that is optimized over the search space of candidate feature subsets via a genetic algorithm. Feature sets produced...

  18. Modeling and Simulation of Handover Scheme in Integrated EPON-WiMAX Networks

    DEFF Research Database (Denmark)

    Yan, Ying; Dittmann, Lars

    2011-01-01

    In this paper, we tackle the seamless handover problem in integrated optical-wireless networks. Our model applies to the converged network of EPON and WiMAX, and a mobility-aware signaling protocol is proposed. The proposed handover scheme, the Integrated Mobility Management Scheme (IMMS), is assisted by enhancing the traditional MPCP signaling protocol, which cooperatively collects mobility information from the front-end wireless network and makes centralized bandwidth allocation decisions in the backhaul optical network. The integrated network architecture and the joint handover scheme are simulated using OPNET Modeler. Results validate the protocol: the integrated handover scheme achieves better network performance.

  19. Gossip spread in social network Models

    Science.gov (United States)

    Johansson, Tobias

    2017-04-01

    Gossip almost inevitably arises in real social networks. In this article we investigate the relationship between the number of friends of a person and limits on how far gossip about that person can spread in the network. How far gossip travels in a network depends on two sets of factors: (a) factors determining gossip transmission from one person to the next and (b) factors determining network topology. For a simple model where gossip is spread among people who know the victim it is known that a standard scale-free network model produces a non-monotonic relationship between number of friends and expected relative spread of gossip, a pattern that is also observed in real networks (Lind et al., 2007). Here, we study gossip spread in two social network models (Toivonen et al., 2006; Vázquez, 2003) by exploring the parameter space of both models and fitting them to a real Facebook data set. Both models can produce the non-monotonic relationship of real networks more accurately than a standard scale-free model while also exhibiting more realistic variability in gossip spread. Of the two models, the one given in Vázquez (2003) best captures both the expected values and variability of gossip spread.
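The spreading rule studied above, gossip about a victim travelling only among the victim's friends (following Lind et al., 2007), can be sketched as a breadth-first search restricted to the victim's neighbourhood:

```python
from collections import deque

def gossip_spread(adj, victim, originator):
    """Fraction of the victim's friends eventually reached by gossip.

    adj: adjacency lists of an undirected friendship network. Gossip
    starts at `originator` and passes only between friends of `victim`,
    so its reach is a BFS inside the victim's neighbourhood."""
    friends = set(adj[victim])
    if originator not in friends:
        return 0.0
    reached = {originator}
    queue = deque([originator])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in friends and v not in reached:
                reached.add(v)
                queue.append(v)
    return len(reached) / len(friends)
```

Averaging this quantity over victims with a given number of friends yields the spread-versus-degree curve whose non-monotonic shape the record compares across network models.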

  20. Synergistic effects in threshold models on networks

    Science.gov (United States)

    Juul, Jonas S.; Porter, Mason A.

    2018-01-01

    Network structure can have a significant impact on the propagation of diseases, memes, and information on social networks. Different types of spreading processes (and other dynamical processes) are affected by network architecture in different ways, and it is important to develop tractable models of spreading processes on networks to explore such issues. In this paper, we incorporate the idea of synergy into a two-state ("active" or "passive") threshold model of social influence on networks. Our model's update rule is deterministic, and the influence of each meme-carrying (i.e., active) neighbor can—depending on a parameter—either be enhanced or inhibited by an amount that depends on the number of active neighbors of a node. Such a synergistic system models social behavior in which the willingness to adopt either accelerates or saturates in a way that depends on the number of neighbors who have adopted that behavior. We illustrate that our model's synergy parameter has a crucial effect on system dynamics, as it determines whether degree-k nodes are possible or impossible to activate. We simulate synergistic meme spreading on both random-graph models and networks constructed from empirical data. Using a heterogeneous mean-field approximation, which we derive under the assumption that a network is locally tree-like, we are able to determine which synergy-parameter values allow degree-k nodes to be activated for many networks and for a broad family of synergistic models.
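One plausible form of the synergistic update rule described above scales each active neighbour's unit influence by 1 + beta*(m - 1), where m is the number of active neighbours, so beta > 0 enhances and beta < 0 inhibits joint influence. The exact functional form in the paper may differ; this is a sketch of the idea:

```python
def update_nodes(adj, active, beta, threshold):
    """One synchronous step of a deterministic threshold model with synergy.

    A passive node with degree k and m active neighbours receives total
    influence m * (1 + beta * (m - 1)); it activates when that total,
    divided by k, reaches `threshold`. Active nodes stay active."""
    new_active = set(active)
    for node, neigh in adj.items():
        if node in active or not neigh:
            continue
        m = sum(1 for v in neigh if v in active)
        if m and m * (1 + beta * (m - 1)) / len(neigh) >= threshold:
            new_active.add(node)
    return new_active
```

With this form, the sign of beta decides whether additional active neighbours accelerate or saturate adoption, mirroring the record's point that the synergy parameter determines which degree-k nodes can ever be activated.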

  1. Optimized null model for protein structure networks.

    Science.gov (United States)

    Milenković, Tijana; Filippis, Ioannis; Lappe, Michael; Przulj, Natasa

    2009-06-26

    Much attention has recently been given to the statistical significance of topological features observed in biological networks. Here, we consider residue interaction graphs (RIGs) as network representations of protein structures with residues as nodes and inter-residue interactions as edges. Degree-preserving randomized models have been widely used for this purpose in biomolecular networks. However, such a single summary statistic of a network may not be detailed enough to capture the complex topological characteristics of protein structures and their network counterparts. Here, we investigate a variety of topological properties of RIGs to find a well fitting network null model for them. The RIGs are derived from a structurally diverse protein data set at various distance cut-offs and for different groups of interacting atoms. We compare the network structure of RIGs to several random graph models. We show that 3-dimensional geometric random graphs, that model spatial relationships between objects, provide the best fit to RIGs. We investigate the relationship between the strength of the fit and various protein structural features. We show that the fit depends on protein size, structural class, and thermostability, but not on quaternary structure. We apply our model to the identification of significantly over-represented structural building blocks, i.e., network motifs, in protein structure networks. As expected, choosing geometric graphs as a null model results in the most specific identification of motifs. Our geometric random graph model may facilitate further graph-based studies of protein conformation space and have important implications for protein structure comparison and prediction. The choice of a well-fitting null model is crucial for finding structural motifs that play an important role in protein folding, stability and function. To our knowledge, this is the first study that addresses the challenge of finding an optimized null model for RIGs, by
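A 3-dimensional geometric random graph of the kind found above to best fit residue interaction graphs is easy to generate: scatter points uniformly in the unit cube and join pairs closer than a cutoff radius, mirroring the distance cut-offs used to build RIGs. A minimal sketch:

```python
import random

def geometric_random_graph(n, radius, dim=3, rng=random):
    """3-D geometric random graph: n points uniform in the unit cube,
    with an edge joining any two points whose Euclidean distance is
    below `radius`."""
    pts = [[rng.random() for _ in range(dim)] for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(pts[i], pts[j]))
            if d2 < radius ** 2:
                edges.add((i, j))
    return pts, edges
```

Graphs sampled this way would serve as the null ensemble: motif counts in a RIG are compared against counts in many such random geometric graphs of matching size and density.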

  2. Optimized null model for protein structure networks.

    Directory of Open Access Journals (Sweden)

    Tijana Milenković

    Much attention has recently been given to the statistical significance of topological features observed in biological networks. Here, we consider residue interaction graphs (RIGs) as network representations of protein structures, with residues as nodes and inter-residue interactions as edges. Degree-preserving randomized models have been widely used for this purpose in biomolecular networks. However, such a single summary statistic of a network may not be detailed enough to capture the complex topological characteristics of protein structures and their network counterparts. Here, we investigate a variety of topological properties of RIGs to find a well-fitting network null model for them. The RIGs are derived from a structurally diverse protein data set at various distance cut-offs and for different groups of interacting atoms. We compare the network structure of RIGs to several random graph models. We show that 3-dimensional geometric random graphs, which model spatial relationships between objects, provide the best fit to RIGs. We investigate the relationship between the strength of the fit and various protein structural features, and show that the fit depends on protein size, structural class, and thermostability, but not on quaternary structure. We apply our model to the identification of significantly over-represented structural building blocks, i.e., network motifs, in protein structure networks. As expected, choosing geometric graphs as a null model results in the most specific identification of motifs. Our geometric random graph model may facilitate further graph-based studies of protein conformation space and have important implications for protein structure comparison and prediction. The choice of a well-fitting null model is crucial for finding structural motifs that play an important role in protein folding, stability and function. To our knowledge, this is the first study that addresses the challenge of finding an optimized null model for RIGs.
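    As a hedged illustration of the null model described above, a 3-dimensional geometric random graph can be sampled by placing nodes uniformly in the unit cube and connecting every pair within a distance cut-off, analogous to the RIG construction. The node count, radius and seed below are arbitrary choices for the sketch, not values from the study:

```python
import random
import itertools

def geometric_random_graph(n, r, dim=3, seed=42):
    """Sample a geometric random graph: n points uniform in the unit
    cube, with an edge between every pair closer than radius r."""
    rng = random.Random(seed)
    pos = [tuple(rng.random() for _ in range(dim)) for _ in range(n)]
    edges = set()
    for i, j in itertools.combinations(range(n), 2):
        dist2 = sum((a - b) ** 2 for a, b in zip(pos[i], pos[j]))
        if dist2 <= r * r:
            edges.add((i, j))
    return pos, edges

pos, edges = geometric_random_graph(200, 0.2)
degree = [0] * 200
for i, j in edges:
    degree[i] += 1
    degree[j] += 1
mean_degree = sum(degree) / len(degree)
```

The degree sequence of such samples is what one would compare against the degree sequence of a RIG at the matching cut-off.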

  3. Towards Reproducible Descriptions of Neuronal Network Models

    Science.gov (United States)

    Nordlie, Eilen; Gewaltig, Marc-Oliver; Plesser, Hans Ekkehard

    2009-01-01

    Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing—and thinking about—complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain. PMID:19662159

  4. Towards reproducible descriptions of neuronal network models.

    Directory of Open Access Journals (Sweden)

    Eilen Nordlie

    2009-08-01

    Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing—and thinking about—complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain.

  5. An information search model for online social Networks - MOBIRSE

    Directory of Open Access Journals (Sweden)

    J. A. Astaiza

    2015-12-01

    Online Social Networks (OSNs) have been gaining great importance among Internet users in recent years. These are sites where it is possible to meet people and to publish and share content in a way that is both easy and free of charge. As a result, the volume of information contained in these websites has grown exponentially, and web search has consequently become an important tool for users to easily find information relevant to their social networking objectives. Making use of ontologies and user profiles can make these searches more effective. This article presents a model for Information Retrieval in OSNs (MOBIRSE) based on user profiles and ontologies, which aims to improve the relevance of the information retrieved on these websites. The social network Facebook was chosen as the case study and the instance for the proposed model. The model was validated using measures such as precision at k and the Kappa statistic to assess its efficiency.
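    The precision-at-k measure used to validate MOBIRSE can be sketched as follows; the ranked results and relevance judgements are hypothetical, not taken from the study:

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved items that are relevant."""
    top_k = retrieved[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

# Hypothetical example: ranked search results and the set judged relevant.
retrieved = ["p1", "p4", "p2", "p7", "p3"]
relevant = {"p1", "p2", "p3"}
p_at_3 = precision_at_k(retrieved, relevant, 3)  # 2 of the top 3 are relevant
```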

  6. GPM GROUND VALIDATION EARTH NETWORKS TOTAL LIGHTNING NETWORK (ENTLN) MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The Earth Networks Total Lightning Network (ENTLN) is an integrated in-cloud (IC) lightning and cloud-to-ground (CG) detection network deployed on a global basis...

  7. Improving Sample Estimate Reliability and Validity with Linked Ego Networks

    CERN Document Server

    Lu, Xin

    2012-01-01

    Respondent-driven sampling (RDS) is currently widely used in public health, especially for the study of hard-to-access populations such as injecting drug users and men who have sex with men. The method works like a snowball sample but can, given that some assumptions are met, generate unbiased population estimates. However, recent studies have shown that traditional RDS estimators are likely to generate large variance and estimation error. To improve the performance of traditional estimators, we propose a method to generate estimates with ego network data collected by RDS. By simulating RDS processes on an empirical human social network with known population characteristics, we show that the precision of estimates of the composition of network link types is greatly improved with ego network data. The proposed estimator for population characteristics shows a clear advantage over traditional RDS estimators and, most importantly, exhibits strong robustness to the recruitment preference of respondents.

  8. Characterization and Modeling of Network Traffic

    DEFF Research Database (Denmark)

    Shawky, Ahmed; Bergheim, Hans; Ragnarsson, Olafur

    2011-01-01

    This paper attempts to characterize and model backbone network traffic using a small number of statistics, in order to reduce the cost and processing power associated with traffic analysis. The parameters affecting the behaviour of network traffic are investigated, and inter-arrival time, IP addresses, port numbers and transport protocol are chosen as the only parameters necessary to model network traffic behaviour. In order to recreate this behaviour, a complex model is needed which is able to recreate traffic behaviour based on a set of statistics calculated from the parameter values. The model investigates the traffic generation mechanisms and groups traffic into flows and applications.
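    A minimal sketch of one step mentioned above, grouping packets into flows by the usual 5-tuple and extracting per-flow inter-arrival times, is shown below; the packet trace and field names are hypothetical:

```python
from collections import defaultdict

def group_flows(packets):
    """Group packets into flows keyed by the classic 5-tuple:
    (src IP, dst IP, src port, dst port, transport protocol)."""
    flows = defaultdict(list)
    for pkt in packets:
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
        flows[key].append(pkt["ts"])
    return flows

def inter_arrival_times(timestamps):
    """Per-flow inter-arrival times, one of the modelled statistics."""
    ts = sorted(timestamps)
    return [b - a for a, b in zip(ts, ts[1:])]

# Hypothetical packet trace (timestamps in seconds).
trace = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 80, "proto": "tcp", "ts": 0.00},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 80, "proto": "tcp", "ts": 0.05},
    {"src": "10.0.0.3", "dst": "10.0.0.2", "sport": 5555, "dport": 53, "proto": "udp", "ts": 0.01},
]
flows = group_flows(trace)
```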

  9. Modeling, Optimization & Control of Hydraulic Networks

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat

    2014-01-01

    An important issue in water networks is pressure management: by reducing the pressure in the water network, leakage can be reduced significantly, as can the amount of energy consumed. The primary purpose of this work is to develop control algorithms for pressure control in water supply systems. The nonlinear network model is derived based on circuit theory. A suitable projection is used to reduce the state vector and to express the model in standard state-space form. Then, the controllability of nonlinear, non-affine hydraulic networks is studied using a Lie-algebra-based controllability matrix, and methods are developed to solve nonlinear optimal control problems. In the water supply system model, the hydraulic resistance of the valve is estimated from real data and is considered to be a disturbance; the disturbance is updated every 24 hours based on the amount of water used by consumers each day.

  10. Citizen science networks in natural history and the collective validation of biodiversity data.

    Science.gov (United States)

    Turnhout, Esther; Lawrence, Anna; Turnhout, Sander

    2016-06-01

    Biodiversity data are in increasing demand to inform policy and management. A substantial portion of these data is generated in citizen science networks. To ensure the quality of biodiversity data, standards and criteria for validation have been put in place. We used interviews and document analysis from the United Kingdom and The Netherlands to examine how data validation serves as a point of connection between the diverse people and practices in natural history citizen science networks. We found that rather than a unidirectional imposition of standards, validation was performed collectively. Specifically, it was enacted in ongoing circulations of biodiversity records between recorders and validators as they jointly negotiated the biodiversity that was observed and the validity of the records. These collective validation practices contributed to the citizen science character of natural history networks and tied these networks together. However, when biodiversity records were included in biodiversity-information initiatives on different policy levels and scales, the circulation of records diminished. These initiatives took on a more extractive mode of data use. Validation ceased to be collective, with important consequences for the natural history networks involved and citizen science more generally. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.

  11. Methods and procedures for the verification and validation of artificial neural networks

    CERN Document Server

    Taylor, Brian J

    2006-01-01

    Neural networks are members of a class of software that have the potential to enable intelligent computational systems capable of simulating characteristics of biological thinking and learning. This volume introduces some of the methods and techniques used for the verification and validation of neural networks and adaptive systems.

  12. A network model of the interbank market

    Science.gov (United States)

    Li, Shouwei; He, Jianmin; Zhuang, Yaming

    2010-12-01

    This work introduces a network model of an interbank market based on interbank credit lending relationships. It generates some network features identified through empirical analysis. The critical issue in constructing an interbank network is to decide the edges among banks, which is realized in this paper based on the degree of trust between banks. Through simulation analysis of the interbank network model, some typical structural features are identified in our interbank network which are also proved to exist in real interbank networks, namely a low clustering coefficient and a relatively short average path length, community structures, and power-law distributions of both out-degree and in-degree.
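    The first two structural features named above can be computed directly. The following sketch measures the average clustering coefficient and the average shortest-path length of a small undirected toy network; the 5-bank graph is hypothetical, not from the paper:

```python
from collections import deque

def build_adj(edges, n):
    """Adjacency sets for an undirected graph on nodes 0..n-1."""
    adj = [set() for _ in range(n)]
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    return adj

def clustering_coefficient(adj):
    """Average local clustering coefficient over all nodes."""
    total = 0.0
    for nbrs in adj:
        k = len(nbrs)
        if k < 2:
            continue  # nodes with < 2 neighbours contribute 0
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def average_path_length(adj):
    """Mean shortest-path length over connected pairs (BFS from each node)."""
    total, pairs = 0, 0
    for s in range(len(adj)):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != s)
        pairs += len(dist) - 1
    return total / pairs

# Hypothetical 5-bank toy network: a triangle with a two-node tail.
adj = build_adj([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)], 5)
cc = clustering_coefficient(adj)
apl = average_path_length(adj)
```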

  13. Model for Microcirculation Transportation Network Design

    Directory of Open Access Journals (Sweden)

    Qun Chen

    2012-01-01

    The idea of microcirculation transportation was proposed to shunt heavy traffic on arterial roads through branch roads. An optimization model for designing a microcirculation transportation network was developed to pick out branch roads as traffic-shunting channels and determine their required capacity, trying to minimize the total reconstruction expense and land occupancy subject to saturation and reconstruction space constraints, while accounting for the route choice behaviour of network users. Since the microcirculation transportation network design problem includes both discrete and continuous variables, a discretization method was developed to convert the two groups of variables (discrete and continuous) into one group of new discrete variables, transforming the mixed network design problem into a new kind of discrete network design problem with multiple values. A genetic algorithm was proposed to solve this new discrete network design problem. Finally, a numerical example demonstrates the efficiency of the model and algorithm.

  14. Validation of HEDR models. Hanford Environmental Dose Reconstruction Project

    Energy Technology Data Exchange (ETDEWEB)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provide a level of confidence that the HEDR models are valid.

  15. Modelling of virtual production networks

    Directory of Open Access Journals (Sweden)

    2011-03-01

    Nowadays many companies, especially small and medium-sized enterprises (SMEs), specialize in a limited field of production. This requires forming virtual production networks of cooperating enterprises in order to manufacture better, faster and cheaper. Moreover, some production orders cannot be realized because no single company has sufficient production potential; a virtual production network of cooperating companies can realize such orders. These networks have larger production capacity and many different resources, and can therefore realize many more production orders together than each member could separately. Such an organization allows high-quality products to be delivered, while the maintenance costs of production capacity and of the resources used are not so high. In this paper a methodology for the rapid prototyping of virtual production networks is proposed. It allows production orders to be executed on time, taking existing logistic constraints into account.

  16. Modeling Epidemics Spreading on Social Contact Networks.

    Science.gov (United States)

    Zhang, Zhaoyang; Wang, Honggang; Wang, Chonggang; Fang, Hua

    2015-09-01

    Social contact networks and the way people interact with each other are the key factors that impact epidemic spreading. However, it is challenging to model the behavior of epidemics based on social contact networks due to their high dynamics. Traditional models such as the susceptible-infected-recovered (SIR) model ignore the crowding or protection effect and thus rest on some unrealistic assumptions. In this paper, we consider the crowding or protection effect and develop a novel model called the improved SIR model. Then, we use both deterministic and stochastic models to characterize the dynamics of epidemics on social contact networks. Results from both simulations and a real data set show that epidemics are more likely to break out on social contact networks with higher average degree. We also present some potential immunization strategies, such as random set immunization, dominating set immunization, and high-degree set immunization, to further support this conclusion.
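    The abstract does not specify the improved SIR model, so as a baseline sketch a plain discrete-time stochastic SIR process on a contact network is shown below; the ring-with-shortcuts network, the rates and the seed are hypothetical:

```python
import random

def sir_on_network(adj, beta, gamma, seed_node=0, rng_seed=1):
    """Discrete-time stochastic SIR on a contact network: each step,
    every infected node transmits to each susceptible neighbour with
    probability beta, then recovers with probability gamma."""
    rng = random.Random(rng_seed)
    state = {u: "S" for u in adj}
    state[seed_node] = "I"
    while any(s == "I" for s in state.values()):
        newly_infected, recovered = [], []
        for u, s in state.items():
            if s != "I":
                continue
            for v in adj[u]:
                if state[v] == "S" and rng.random() < beta:
                    newly_infected.append(v)
            if rng.random() < gamma:
                recovered.append(u)
        for v in newly_infected:
            state[v] = "I"
        for u in recovered:
            state[u] = "R"
    # Final epidemic size: everyone who was ever infected has recovered.
    return sum(1 for s in state.values() if s == "R")

# Hypothetical contact network: a ring of 50 people plus a few shortcuts.
n = 50
adj = {u: {(u - 1) % n, (u + 1) % n} for u in range(n)}
for a, b in [(0, 25), (10, 40), (5, 30)]:
    adj[a].add(b)
    adj[b].add(a)
final_size = sir_on_network(adj, beta=0.5, gamma=0.2)
```

Raising the average degree (more shortcuts) tends to increase the final size, in line with the paper's conclusion.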

  17. Random graph models for dynamic networks

    Science.gov (United States)

    Zhang, Xiao; Moore, Cristopher; Newman, Mark E. J.

    2017-10-01

    Recent theoretical work on the modeling of network structure has focused primarily on networks that are static and unchanging, but many real-world networks change their structure over time. There exist natural generalizations to the dynamic case of many static network models, including the classic random graph, the configuration model, and the stochastic block model, where one assumes that the appearance and disappearance of edges are governed by continuous-time Markov processes with rate parameters that can depend on properties of the nodes. Here we give an introduction to this class of models, showing for instance how one can compute their equilibrium properties. We also demonstrate their use in data analysis and statistical inference, giving efficient algorithms for fitting them to observed network data using the method of maximum likelihood. This allows us, for example, to estimate the time constants of network evolution or infer community structure from temporal network data using cues embedded both in the probabilities over time that node pairs are connected by edges and in the characteristic dynamics of edge appearance and disappearance. We illustrate these methods with a selection of applications, both to computer-generated test networks and real-world examples.
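    A minimal discrete-time sketch of the edge dynamics described above: each absent edge appears with probability lam per step and each present edge disappears with probability mu, so the equilibrium edge density is lam / (lam + mu). The parameters below are illustrative only, and the discrete-time update is a stand-in for the continuous-time Markov processes in the paper:

```python
import random

def simulate_dynamic_graph(n, lam, mu, steps, seed=7):
    """Simulate independent two-state Markov chains, one per node pair:
    absent -> present with prob. lam, present -> absent with prob. mu."""
    rng = random.Random(seed)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    present = set()
    for _ in range(steps):
        for e in pairs:
            if e in present:
                if rng.random() < mu:
                    present.discard(e)
            elif rng.random() < lam:
                present.add(e)
    return len(present) / len(pairs)

# Empirical edge density should settle near lam / (lam + mu) = 0.2.
density = simulate_dynamic_graph(n=40, lam=0.02, mu=0.08, steps=500)
```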

  18. A proposed best practice model validation framework for banks

    Directory of Open Access Journals (Sweden)

    Pieter J. (Riaan) de Jongh

    2017-06-01

    Background: With the increasing use of complex quantitative models in applications throughout the financial world, model risk has become a major concern. The credit crisis of 2008–2009 provoked added concern about the use of models in finance. Measuring and managing model risk has subsequently come under scrutiny from regulators, supervisors, banks and other financial institutions. Regulatory guidance indicates that meticulous monitoring of all phases of model development and implementation is required to mitigate this risk. Considerable resources must be mobilised for this purpose. The exercise must embrace model development, assembly, implementation, validation and effective governance. Setting: Model validation practices are generally patchy, disparate and sometimes contradictory, and although the Basel Accord and some regulatory authorities have attempted to establish guiding principles, no definite set of global standards exists. Aim: Assessing the available literature for the best validation practices. Methods: This comprehensive literature study provided a background to the complexities of effective model management and focussed on model validation as a component of model risk management. Results: We propose a coherent ‘best practice’ framework for model validation. Scorecard tools are also presented to evaluate if the proposed best practice model validation framework has been adequately assembled and implemented. Conclusion: The proposed best practice model validation framework is designed to assist firms in the construction of an effective, robust and fully compliant model validation programme and comprises three principal elements: model validation governance, policy and process.

  19. Modeling the interdependent network based on two-mode networks

    Science.gov (United States)

    An, Feng; Gao, Xiangyun; Guan, Jianhe; Huang, Shupei; Liu, Qian

    2017-10-01

    Obvious and close interdependent linkages exist among heterogeneous networks. Unlike existing research, which focuses primarily on theoretical models of physical interdependent networks, we propose a two-layer interdependent network model based on two-mode networks to explore interdependent features in reality. Specifically, we construct a two-layer interdependent loan network and develop several indices of dependent features. The model is verified to enable us to capture the loan-dependence features of listed companies based on loan behaviors and shared shareholders. Taking the Chinese debit and credit market as a case study, the main conclusions are: (1) only a few listed companies shoulder the main capital transmission (20% of listed companies account for almost 70% of the dependent degree). (2) Control of these key listed companies will be more effective in avoiding the spreading of financial risks. (3) Identifying the companies with high betweenness centrality and controlling them could help monitor the spreading of financial risk. (4) The capital transmission channels between Chinese financial listed companies and Chinese non-financial listed companies are relatively strong. However, under greater pressure on capital transmission (70% of edges failed), the transmission channel constructed by debit and credit behavior will eventually collapse.
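    Conclusion (1) above is a concentration statistic: the share of the total dependent degree held by the top fraction of companies. It can be sketched as follows; the degree values are hypothetical, not data from the paper:

```python
def top_share(values, fraction):
    """Share of the total held by the top `fraction` of entries."""
    ranked = sorted(values, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return sum(ranked[:k]) / sum(ranked)

# Hypothetical dependent-degree scores for 10 listed companies.
degrees = [95, 80, 12, 10, 9, 8, 7, 6, 5, 4]
share = top_share(degrees, 0.2)  # share held by the top 2 companies
```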

  20. Geochemistry Model Validation Report: Material Degradation and Release Model

    Energy Technology Data Exchange (ETDEWEB)

    H. Stockman

    2001-09-28

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to "Technical Work Plan for: Waste Package Design Description for LA" (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, "Analyses and Models" (Ref. 2), and in accordance with the technical work plan (Ref. 17).

  1. Formal Specification and Validation of a Hybrid Connectivity Restoration Algorithm for Wireless Sensor and Actor Networks

    Directory of Open Access Journals (Sweden)

    Nazir Ahmad Zafar

    2012-08-01

    Maintaining inter-actor connectivity is extremely crucial in mission-critical applications of Wireless Sensor and Actor Networks (WSANs), as actors have to quickly plan optimal coordinated responses to detected events. Failure of a critical actor partitions the inter-actor network into disjoint segments besides leaving a coverage hole, and thus hinders the network operation. This paper presents a Partitioning detection and Connectivity Restoration (PCR) algorithm to tolerate critical actor failure. As part of pre-failure planning, PCR determines critical/non-critical actors based on localized information and designates each critical node with an appropriate backup (preferably non-critical). The pre-designated backup detects the failure of its primary actor and initiates a post-failure recovery process that may involve coordinated multi-actor relocation. To prove the correctness, we construct a formal specification of PCR using Z notation. We model the WSAN topology as a dynamic graph and transform PCR into a corresponding formal specification in Z notation. The formal specification is analyzed and validated using the Z/EVES tool. Moreover, we simulate the specification to quantitatively analyze the efficiency of PCR. Simulation results confirm the effectiveness of PCR and show that it outperforms contemporary schemes found in the literature.

  2. An endogenous model of the credit network

    Science.gov (United States)

    He, Jianmin; Sui, Xin; Li, Shouwei

    2016-01-01

    In this paper, an endogenous credit network model of firm-bank agents is constructed. The model describes the endogenous formation of firm-firm, firm-bank and bank-bank credit relationships. By means of simulations, the model is capable of showing some obvious similarities with empirical evidence found by other scholars: the upper-tail of firm size distribution can be well fitted with a power-law; the bank size distribution can be lognormally distributed with a power-law tail; the bank in-degrees of the interbank credit network as well as the firm-bank credit network fall into two-power-law distributions.

  3. Tensor network models of multiboundary wormholes

    Science.gov (United States)

    Peach, Alex; Ross, Simon F.

    2017-05-01

    We study the entanglement structure of states dual to multiboundary wormhole geometries using tensor network models. Perfect and random tensor networks tiling the hyperbolic plane have been shown to provide good models of the entanglement structure in holography. We extend this by quotienting the plane by discrete isometries to obtain models of the multiboundary states. We show that there are networks where the entanglement structure is purely bipartite, extending results obtained in the large temperature limit. We analyse the entanglement structure in a range of examples.

  4. Stochastic discrete model of karstic networks

    Science.gov (United States)

    Jaquet, O.; Siegel, P.; Klubertanz, G.; Benabderrhamane, H.

    Karst aquifers are characterised by an extreme spatial heterogeneity that strongly influences their hydraulic behaviour and the transport of pollutants. These aquifers are particularly vulnerable to contamination because of their highly permeable networks of conduits. A stochastic model is proposed for the simulation of the geometry of karstic networks at a regional scale. The model integrates the relevant physical processes governing the formation of karstic networks. The discrete simulation of karstic networks is performed with a modified lattice-gas cellular automaton for a representative description of the karstic aquifer geometry. Consequently, more reliable modelling results can be obtained for the management and the protection of karst aquifers. The stochastic model was applied jointly with groundwater modelling techniques to a regional karst aquifer in France for the purpose of resolving surface pollution issues.

  5. Designing Network-based Business Model Ontology

    DEFF Research Database (Denmark)

    Hashemi Nekoo, Ali Reza; Ashourizadeh, Shayegheh; Zarei, Behrouz

    2015-01-01

    Survival on dynamic environment is not achieved without a map. Scanning and monitoring of the market show business models as a fruitful tool. But scholars believe that old-fashioned business models are dead; as they are not included the effect of internet and network in themselves. This paper...... is going to propose e-business model ontology from the network point of view and its application in real world. The suggested ontology for network-based businesses is composed of individuals` characteristics and what kind of resources they own. also, their connections and pre-conceptions of connections...... such as shared-mental model and trust. However, it mostly covers previous business model elements. To confirm the applicability of this ontology, it has been implemented in business angel network and showed how it works....

  6. Queueing Models for Mobile Ad Hoc Networks

    NARCIS (Netherlands)

    de Haan, Roland

    2009-01-01

    This thesis presents models for the performance analysis of a recent communication paradigm: mobile ad hoc networking. The objective of mobile ad hoc networking is to provide wireless connectivity between stations in a highly dynamic environment. These dynamics are driven by the mobility of the stations.

  7. Modelling traffic congestion using queuing networks

    Indian Academy of Sciences (India)

    Traffic flow-density diagrams are obtained using simple Jackson queueing network analysis. Such simple analytical models can be used to capture the effect of non-homogeneous traffic. Keywords: flow-density curves; uninterrupted traffic; Jackson networks.
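    As a hedged illustration of the queueing view of traffic, a single road segment can be modelled as an M/M/1 queue, yielding flow-density points parametrically (flow equals the arrival rate; density is proxied by the mean number of vehicles in the system). This is a generic textbook construction under stated assumptions, not necessarily the authors' exact model:

```python
def mm1_flow_density(mu, arrival_rates):
    """Parametric flow-density points from an M/M/1 queue: for each
    arrival rate lam < mu, utilization rho = lam / mu and the mean
    number in the system is L = rho / (1 - rho)."""
    points = []
    for lam in arrival_rates:
        rho = lam / mu
        if rho >= 1:
            break  # unstable regime: the queue grows without bound
        density = rho / (1 - rho)
        points.append((density, lam))  # (density proxy, flow)
    return points

# Service rate mu = 1.0 (capacity), arrival rates swept up to capacity.
pts = mm1_flow_density(mu=1.0, arrival_rates=[0.1 * i for i in range(1, 10)])
```

Plotting flow against density over these points traces the rising branch of a flow-density curve, with density diverging as the arrival rate approaches capacity.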

  8. Empirical data validation for model building

    Science.gov (United States)

    Kazarian, Aram

    2008-03-01

    Optical Proximity Correction (OPC) has become an integral and critical part of process development for advanced technologies with challenging k1 requirements. OPC solutions in turn require stable, predictive models to be built that can project the behavior of all structures. These structures must comprehend all geometries that can occur in the layout in order to define the optimal corrections by feature, and thus enable a manufacturing process with acceptable margin. The model is built upon two main component blocks. First is knowledge of the process conditions, which includes the optical parameters (e.g. illumination source, wavelength, lens characteristics, etc.) as well as mask definition, resist parameters and process film stack information. Second is the empirical critical dimension (CD) data collected using this process on specific test features, the results of which are used to fit and validate the model and to project resist contours for all allowable feature layouts. The quality of the model therefore is highly dependent on the integrity of the process data collected for this purpose. Since the test pattern suite generally extends to below the resolution limit that the process can support with adequate latitude, the CD measurements collected can often be quite noisy, with marginal signal-to-noise ratios. In order for the model to be reliable and a best representation of the process behavior, it is necessary to scrutinize empirical data to ensure that it is not dominated by measurement noise or flyer/outlier points. The primary approach for generating a clean, smooth and dependable empirical data set should be a replicated measurement sampling that can help to statistically reduce measurement noise by averaging. However, it can often be impractical to collect the amount of data needed to ensure a clean data set by this method. An alternate approach is studied in this paper to further smooth the measured data by means of curve fitting to identify remaining
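    The replicate-averaging approach described above can be sketched with a robust flyer filter. A median-absolute-deviation (MAD) screen is used here because a flyer inflates an ordinary standard-deviation estimate; the measurements and threshold are hypothetical, not from the paper:

```python
def clean_cd_data(replicates, z_cut=3.5):
    """Average replicated CD measurements after discarding flyers via a
    median-absolute-deviation (MAD) filter, which is robust to the
    flyers themselves inflating the spread estimate."""
    xs = sorted(replicates)
    n = len(xs)
    median = (xs[n // 2] + xs[(n - 1) // 2]) / 2
    devs = sorted(abs(x - median) for x in replicates)
    mad = (devs[n // 2] + devs[(n - 1) // 2]) / 2
    if mad == 0:
        kept = list(replicates)
    else:
        # 1.4826 * MAD estimates the standard deviation for normal data.
        kept = [x for x in replicates if abs(x - median) / (1.4826 * mad) <= z_cut]
    return sum(kept) / len(kept)

# Hypothetical repeated CD measurements (nm) with one obvious flyer.
meas = [45.1, 44.9, 45.0, 45.2, 44.8, 52.0]
cd = clean_cd_data(meas)
```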

  9. Lipid Processing Technology: Building a Multilevel Modeling Network

    DEFF Research Database (Denmark)

    Díaz Tovar, Carlos Axel; Mustaffa, Azizul Azri; Mukkerikar, Amol

    2011-01-01

    The aim of this work is to present the development of a computer aided multilevel modeling network for the systematic design and analysis of processes employing lipid technologies. This is achieved by decomposing the problem into four levels of modeling: i) pure component property modeling ... and a lipid-database of collected experimental data from industry and generated data from validated predictive property models, as well as modeling tools for fast adoption-analysis of property prediction models; ii) modeling of phase behavior of relevant lipid mixtures using the UNIFAC-CI model, development ... in design and analysis of unit operations; iv) the information and models developed are used as building blocks in the development of methods and tools for computer-aided synthesis and design of process flowsheets (CAFD). The applicability of this methodology is highlighted in each level of modeling through ...

  10. Influence of rainfall observation network on model calibration and application

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-01-01

    Full Text Available The objective of this study is to investigate the influence of the spatial resolution of the rainfall input on model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany has been selected for this study. First, the semi-distributed HBV model is calibrated with precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of raingauge density. Secondly, the calibrated model is validated using interpolated precipitation from the same raingauge density used for calibration, as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach for filling in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the above described precipitation fields. The simulated hydrographs obtained in the above described three sets of experiments are analyzed through comparisons of the computed Nash-Sutcliffe coefficient and several goodness-of-fit indexes. The results show that a model using different raingauge networks might need re-calibration of the model parameters: specifically, a model calibrated on relatively sparse precipitation information might perform well on dense precipitation information, while a model calibrated on dense precipitation information fails on sparse precipitation information. Also, the model calibrated with the complete set of observed precipitation and run with incomplete observed data supplemented by data estimated using multiple linear regressions, at the locations treated as
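    The Nash-Sutcliffe coefficient used above compares the squared simulation error against the variance of the observations. A minimal sketch with hypothetical discharge values, not the study's data:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 = perfect fit; 0 = no better than
    predicting the mean of the observations; < 0 = worse than the mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = [3.0, 5.0, 9.0, 6.0, 4.0]                 # hypothetical observed discharge
print(nash_sutcliffe(obs, obs))                  # 1.0: perfect simulation
print(nash_sutcliffe(obs, [np.mean(obs)] * 5))   # 0.0: mean-only "model"
```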

  11. Validation of a Global Hydrodynamic Flood Inundation Model

    Science.gov (United States)

    Bates, P. D.; Smith, A.; Sampson, C. C.; Alfieri, L.; Neal, J. C.

    2014-12-01

    In this work we present first validation results for a hyper-resolution global flood inundation model. We use a true hydrodynamic model (LISFLOOD-FP) to simulate flood inundation at 1km resolution globally and then use downscaling algorithms to determine flood extent and depth at 90m spatial resolution. Terrain data are taken from a custom version of the SRTM data set that has been processed specifically for hydrodynamic modelling. Return periods of flood flows along the entire global river network are determined using: (1) empirical relationships between catchment characteristics and index flood magnitude in different hydroclimatic zones derived from global runoff data; and (2) an index flood growth curve, also empirically derived. Bankful return period flow is then used to set channel width and depth, and flood defence impacts are modelled using empirical relationships between GDP, urbanization and defence standard of protection. The results of these simulations are global flood hazard maps for a number of different return period events from 1 in 5 to 1 in 1000 years. We compare these predictions to flood hazard maps developed by national government agencies in the UK and Germany using similar methods but employing detailed local data, and to observed flood extent at a number of sites including St. Louis, USA and Bangkok in Thailand. Results show that global flood hazard models can have considerable skill given careful treatment to overcome errors in the publicly available data that are used as their input.

  12. Mathematical model of highways network optimization

    Science.gov (United States)

    Sakhapov, R. L.; Nikolaeva, R. V.; Gatiyatullin, M. H.; Makhmutov, M. M.

    2017-12-01

    The article deals with the design of highway networks. Studies show that the main requirement road transport places on the road network is to realize all the transport links it serves at the least possible cost. The goal of optimizing the network of highways is to increase the efficiency of transport. A large number of factors must be taken into account, which makes it difficult to quantify and qualify their impact on the road network. In this paper, we propose building an optimal variant for locating the road network on the basis of a mathematical model. The article defines the criteria for optimality and the objective functions that reflect the requirements on the road network. The condition most fully satisfying optimality is the minimization of combined road and transport costs; we adopted this indicator as the criterion of optimality in the economic-mathematical model of a network of highways. Studies have shown that each point served by the optimal road network is connected with all other corresponding points along the directions that provide the least financial cost necessary to move passengers and cargo from this point to the other corresponding points. The article presents general principles for constructing an optimal network of roads.

  13. Modeling trust context in networks

    CERN Document Server

    Adali, Sibel

    2013-01-01

    We make complex decisions every day, requiring trust in many different entities for different reasons. These decisions are not made by combining many isolated trust evaluations. Many interlocking factors play a role, each dynamically impacting the others. In this brief, 'trust context' is defined as the system level description of how the trust evaluation process unfolds. Networks today are part of almost all human activity, supporting and shaping it. Applications increasingly incorporate new interdependencies and new trust contexts. Social networks connect people and organizations throughout

  14. Model-based control of networked systems

    CERN Document Server

    Garcia, Eloy; Montestruque, Luis A

    2014-01-01

    This monograph introduces a class of networked control systems (NCS) called model-based networked control systems (MB-NCS) and presents various architectures and control strategies designed to improve the performance of NCS. The overall performance of NCS considers the appropriate use of network resources, particularly network bandwidth, in conjunction with the desired response of the system being controlled. The book begins with a detailed description of the basic MB-NCS architecture that provides stability conditions in terms of state feedback updates. It also covers typical problems in NCS such as network delays, network scheduling, and data quantization, as well as more general control problems such as output feedback control, nonlinear systems stabilization, and tracking control. Key features and topics include: time-triggered and event-triggered feedback updates; stabilization of uncertain systems subject to time delays, quantization, and extended absence of feedback; optimal control analysis and ...

  15. Complex networks repair strategies: Dynamic models

    Science.gov (United States)

    Fu, Chaoqi; Wang, Ying; Gao, Yangjun; Wang, Xiaoyang

    2017-09-01

    Network repair strategies are tactical methods that restore the efficiency of damaged networks; however, unreasonable repair strategies not only waste resources, they are also ineffective for network recovery. Most extant research on network repair focuses on static networks, but results and findings on static networks cannot be applied to evolutionary dynamic networks because, in dynamic models, complex network repair has completely different characteristics. For instance, repaired nodes face more severe challenges and require strategic repair methods in order to have a significant effect. In this study, we propose the Shell Repair Strategy (SRS) to minimize the risk of secondary node failures due to the cascading effect. Our proposed method includes the identification of a set of vital nodes that have a significant impact on network repair and defense. Our identification of these vital nodes reduces the number of switching nodes that face the risk of secondary failures during the dynamic repair process. This is positively correlated with the size of the average degree 〈k〉 and enhances network invulnerability.

  16. The Pseudo-Self-Similar Traffic Model: Application and Validation

    NARCIS (Netherlands)

    El Abdouni Khayari, Rachid; Haverkort, Boudewijn R.H.M.; Sadre, R.; Ost, Alexander

    2004-01-01

    Since the early 1990s, a variety of studies have shown that network traffic, both for local- and wide-area networks, has self-similar properties. This led to new approaches in network traffic modelling because most traditional traffic approaches result in the underestimation of performance measures

  17. Modeling Network Traffic in Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Sheng Ma

    2004-12-01

    Full Text Available This work discovers that although network traffic has complicated short- and long-range temporal dependence, the corresponding wavelet coefficients are no longer long-range dependent. Therefore, a "short-range" dependent process can be used to model network traffic in the wavelet domain. Both independent and Markov models are investigated. Theoretical analysis shows that the independent wavelet model is sufficiently accurate in terms of the buffer overflow probability for Fractional Gaussian Noise traffic. Any model which captures additional correlations in the wavelet domain only improves the performance marginally. The independent wavelet model is then used as a unified approach to model network traffic including VBR MPEG video and Ethernet data. The computational complexity is O(N) for developing such wavelet models and generating synthesized traffic of length N, which is among the lowest attained.
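    The independent wavelet model can be sketched directly: draw independent Gaussian detail coefficients at each scale and invert a Haar transform, which costs O(N) overall. A rough sketch; the variance scaling and Hurst parameter below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def synth_traffic(levels, hurst=0.8, seed=0):
    """Generate 2**levels samples by inverting a Haar transform whose
    detail coefficients are drawn independently at each scale."""
    rng = np.random.default_rng(seed)
    approx = np.array([0.0])   # single coarsest-scale coefficient
    for j in range(levels, 0, -1):
        # Independent Gaussian details; std grows across scales roughly as
        # 2**(j*(2H-1)/2), mimicking fractional-Gaussian-noise-like traffic.
        d = rng.normal(0.0, 2.0 ** (j * (2 * hurst - 1) / 2), size=approx.size)
        x = np.empty(2 * approx.size)
        x[0::2] = (approx + d) / np.sqrt(2)   # inverse Haar butterfly
        x[1::2] = (approx - d) / np.sqrt(2)
        approx = x
    return approx

trace = synth_traffic(10)
print(trace.size)   # 1024 samples, O(N) work in total
```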

  18. Gene Regulation Networks for Modeling Drosophila Development

    Science.gov (United States)

    Mjolsness, E.

    1999-01-01

    This chapter will very briefly introduce and review some computational experiments in using trainable gene regulation network models to simulate and understand selected episodes in the development of the fruit fly, Drosophila Melanogaster.

  19. Mitigating risk during strategic supply network modeling

    OpenAIRE

    Müssigmann, Nikolaus

    2006-01-01

    Mitigating risk during strategic supply network modeling. - In: Managing risks in supply chains / ed. by Wolfgang Kersten ... - Berlin : Schmidt, 2006. - S. 213-226. - (Operations and technology management ; 1)

  20. Modeling Distillation Column Using ARX Model Structure and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Reza Pirmoradi

    2012-04-01

    Full Text Available Distillation is a complex and highly nonlinear industrial process. In general it is not always possible to obtain accurate first principles models for high-purity distillation columns. On the other hand, the development of first principles models is usually time consuming and expensive. To overcome these problems, empirical models such as neural networks can be used. One major drawback of empirical models is that the prediction is valid only inside the data domain that is sufficiently covered by measurement data. Modeling distillation columns by means of neural networks has been reported in the literature using recursive networks. Recursive networks are suitable for modeling purposes, but such models suffer from high complexity and high computational cost. The objective of this paper is to propose a simple and reliable model for a distillation column. The proposed model uses feedforward neural networks, which results in a simple model with fewer parameters and faster training time. Simulation results demonstrate that predictions of the proposed model in all regions are close to the outputs of the dynamic model and the error is negligible. This implies that the model is reliable in all regions.
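    The feedforward alternative to a recursive network can be sketched as an ARX-style regressor (lagged outputs and inputs) fed into a single-hidden-layer net. The lag orders, layer sizes, and weights below are random placeholders, not a trained column model:

```python
import numpy as np

rng = np.random.default_rng(0)

def arx_regressor(y, u, k, na=2, nb=2):
    # ARX-style regressor: na lagged outputs and nb lagged inputs,
    # most recent first: [y(k-1), y(k-2), u(k-1), u(k-2)]
    return np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])

# One hidden layer of 8 tanh units; weights are illustrative placeholders.
W1 = rng.normal(size=(8, 4)); b1 = np.zeros(8)
W2 = rng.normal(size=(1, 8)); b2 = np.zeros(1)

def predict(x):
    h = np.tanh(W1 @ x + b1)   # hidden layer
    return float(W2 @ h + b2)  # linear output: one-step-ahead prediction

y = rng.normal(size=10)   # hypothetical past product compositions
u = rng.normal(size=10)   # hypothetical past manipulated-variable moves
pred = predict(arx_regressor(y, u, k=5))
print(pred)
```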

  1. Validation of ecological state space models using the Laplace approximation

    DEFF Research Database (Denmark)

    Thygesen, Uffe Høgsbro; Albertsen, Christoffer Moesgaard; Berg, Casper Willestofte

    2017-01-01

    Many statistical models in ecology follow the state space paradigm. For such models, the important step of model validation rarely receives as much attention as estimation or hypothesis testing, perhaps due to lack of available algorithms and software. Model validation is often based on a naive...... for estimation in general mixed effects models. Implementing one-step predictions in the R package Template Model Builder, we demonstrate that it is possible to perform model validation with little effort, even if the ecological model is multivariate, has non-linear dynamics, and whether observations...

  2. Mobi-Sim: An Emulation and Prototyping Platform for Protocols Validation of Mobile Wireless Sensors Networks

    Directory of Open Access Journals (Sweden)

    Omina Mezghani

    2017-01-01

    Full Text Available The objective of this paper is to provide a new simulator framework for mobile WSN that emulates a sensor node on a laptop, i.e. the laptop models and replaces a sensor node within a network. This platform can implement different WSN routing protocols to simulate and validate newly developed protocols in terms of energy consumption, packet loss rate, delivery ratio, mobility support, connectivity and number of exchanged messages in real time. To evaluate the performance of Mobi-Sim, we implement two popular protocols (LEACH-M and LEACH sink-mobile) in it and compare its results to those of TOSSIM. Then, we propose another clustering-based routing protocol, which we compare to LEACH-M.

  3. A Bayesian Network View on Nested Effects Models

    Directory of Open Access Journals (Sweden)

    Fröhlich Holger

    2009-01-01

    Full Text Available Nested effects models (NEMs) are a class of probabilistic models that were designed to reconstruct a hidden signalling structure from a large set of observable effects caused by active interventions into the signalling pathway. We give a more flexible formulation of NEMs in the language of Bayesian networks. Our framework constitutes a natural generalization of the original NEM model, since it explicitly states the assumptions that are tacitly underlying the original version. Our approach gives rise to new learning methods for NEMs, which have been implemented in the R/Bioconductor package nem. We validate these methods in a simulation study and apply them to a synthetic lethality dataset in yeast.

  4. Road maintenance planning using network flow modelling

    OpenAIRE

    Yang, Chao; Remenyte-Prescott, Rasa; Andrews, John

    2015-01-01

    This paper presents a road maintenance planning model that can be used to balance maintenance cost and road user cost, since performing road maintenance at night can be convenient for road users but costly for the highway agency. Based on the platform of network traffic flow modelling, the traffic through the worksite and its adjacent road links is evaluated. Thus, maintenance arrangements at a worksite can be optimized considering the overall network performance. In addition, genetic alg...

  5. Disruption Tolerant Network Technology Flight Validation Report: DINET

    Science.gov (United States)

    Jones, Ross M.

    2009-01-01

    In October and November of 2008, the Jet Propulsion Laboratory installed and tested essential elements of Delay/Disruption Tolerant Networking (DTN) technology on the Deep Impact spacecraft. This experiment, called Deep Impact Network Experiment (DINET), was performed in close cooperation with the EPOXI project which has responsibility for the spacecraft. During DINET some 300 images were transmitted from the JPL nodes to the spacecraft. Then, they were automatically forwarded from the spacecraft back to the JPL nodes, exercising DTN's bundle origination, transmission, acquisition, dynamic route computation, congestion control, prioritization, custody transfer, and automatic retransmission procedures, both on the spacecraft and on the ground, over a period of 27 days. All transmitted bundles were successfully received, without corruption. The DINET experiment demonstrated DTN readiness for operational use in space missions.

  7. Validation of artificial neural networks as a methodology for donor-recipient matching for liver transplantation.

    Science.gov (United States)

    Ayllón, María Dolores; Ciria, Rubén; Cruz-Ramírez, Manuel; Pérez-Ortiz, María; Valente, Roberto; O'Grady, John; de la Mata, Manuel; Hervás-Martínez, César; Heaton, Nigel D; Briceño, Javier

    2017-09-16

    In 2014, we reported a model for Donor-Recipient (D-R) matching in liver transplantation (LT) based on artificial neural networks (ANN) from a Spanish multicentre study (MADR-E: Model for Allocation of Donor and Recipient in España). The aim of this study is to test the ANN-based methodology in a different European healthcare system in order to validate it. An ANN model was designed using a cohort of patients from King's College Hospital (KCH) (N=822). The ANN was trained and tested using KCH pairs for both 3- and 12-month survival models. Endpoints were probability of graft survival (CCR) and non-survival (MS). The final model is a rule-based system for facilitating the decision about the most appropriate D-R matching. Models designed for KCH had excellent prediction capabilities for both 3 months (CCR-AUC=0.94; MS-AUC=0.94) and 12 months (CCR-AUC=0.78; MS-AUC=0.82), almost 15% higher than the best obtained by other known scores such as MELD and BAR. Moreover, these results improve the previously reported ones in the multicentric MADR-E database. The use of ANN for D-R matching in LT in other healthcare systems achieved excellent prediction capabilities, supporting the validation of these tools. It should be considered as the most advanced, objective and useful tool to date for the management of waiting lists. This article is protected by copyright. All rights reserved. © 2017 by the American Association for the Study of Liver Diseases.

  8. Validation of the measure automobile emissions model : a statistical analysis

    Science.gov (United States)

    2000-09-01

    The Mobile Emissions Assessment System for Urban and Regional Evaluation (MEASURE) model provides an external validation capability for the hot stabilized option; the model is one of several new modal emissions models designed to predict hot stabilized e...

  9. Dental models made with an intraoral scanner: A validation study.

    NARCIS (Netherlands)

    Cuperus, A.M.; Harms, M.C.; Rangel, F.A.; Bronkhorst, E.M.; Schols, J.G.J.H.; Breuning, K.H.

    2012-01-01

    INTRODUCTION: Our objectives were to determine the validity and reproducibility of measurements on stereolithographic models and 3-dimensional digital dental models made with an intraoral scanner. METHODS: Ten dry human skulls were scanned; from the scans, stereolithographic models and digital

  10. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  11. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Whitmore, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kaffine, Leah [National Renewable Energy Lab. (NREL), Golden, CO (United States); Blair, Nate [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dobos, Aron P. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.

  12. Design and Development Research: A Model Validation Case

    Science.gov (United States)

    Tracey, Monica W.

    2009-01-01

    This is a report of one case of a design and development research study that aimed to validate an overlay instructional design model incorporating the theory of multiple intelligences into instructional systems design. After design and expert review model validation, the Multiple Intelligence (MI) Design Model, used with an Instructional Systems…

  13. A simple model for studying interacting networks

    Science.gov (United States)

    Liu, Wenjia; Jolad, Shivakumar; Schmittmann, Beate; Zia, R. K. P.

    2011-03-01

    Many specific physical networks (e.g., internet, power grid, interstates), have been characterized in considerable detail, but in isolation from each other. Yet, each of these networks supports the functions of the others, and so far, little is known about how their interactions affect their structure and functionality. To address this issue, we consider two coupled model networks. Each network is relatively simple, with a fixed set of nodes, but a dynamically generated set of links which has a preferred degree, κ. In the stationary state, the degree distribution has exponential tails (far from κ), an attribute which we can explain. Next, we consider two such networks with different κ's, reminiscent of two social groups, e.g., extroverts and introverts. Finally, we let these networks interact by establishing a controllable fraction of cross links. The resulting distribution of links, both within and across the two model networks, is investigated and discussed, along with some potential consequences for real networks. Supported in part by NSF-DMR-0705152 and 1005417.
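    The preferred-degree dynamics described above can be sketched with a simple update rule: a randomly chosen node below κ adds a random link, one above κ cuts a link, and one exactly at κ does nothing. A toy version with illustrative parameters, not the paper's exact protocol:

```python
import random

def evolve(n=100, kappa=5, steps=20_000, seed=1):
    """Toy preferred-degree dynamics on n nodes."""
    random.seed(seed)
    adj = {i: set() for i in range(n)}
    for _ in range(steps):
        i = random.randrange(n)
        deg = len(adj[i])
        if deg < kappa:        # below preferred degree: add a random link
            j = random.choice([k for k in range(n) if k != i and k not in adj[i]])
            adj[i].add(j); adj[j].add(i)
        elif deg > kappa:      # above preferred degree: cut a random link
            j = random.choice(sorted(adj[i]))
            adj[i].remove(j); adj[j].remove(i)
    return adj

adj = evolve()
mean_deg = sum(len(v) for v in adj.values()) / len(adj)
print(mean_deg)   # settles near kappa
```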

  14. Modeling gene regulatory network motifs using Statecharts.

    Science.gov (United States)

    Fioravanti, Fabio; Helmer-Citterich, Manuela; Nardelli, Enrico

    2012-03-28

    Gene regulatory networks are widely used by biologists to describe the interactions among genes, proteins and other components at the intra-cellular level. Recently, a great effort has been devoted to give gene regulatory networks a formal semantics based on existing computational frameworks. For this purpose, we consider Statecharts, which are a modular, hierarchical and executable formal model widely used to represent software systems. We use Statecharts for modeling small and recurring patterns of interactions in gene regulatory networks, called motifs. We present an improved method for modeling gene regulatory network motifs using Statecharts and we describe the successful modeling of several motifs, including those which could not be modeled or whose models could not be distinguished using the method of a previous proposal. We model motifs in an easy and intuitive way by taking advantage of the visual features of Statecharts. Our modeling approach is able to simulate some interesting temporal properties of gene regulatory network motifs: the delay in the activation and the deactivation of the "output" gene in the coherent type-1 feedforward loop, the pulse in the incoherent type-1 feedforward loop, the bistability nature of double positive and double negative feedback loops, the oscillatory behavior of the negative feedback loop, and the "lock-in" effect of positive autoregulation. We present a Statecharts-based approach for the modeling of gene regulatory network motifs in biological systems. The basic motifs used to build more complex networks (that is, simple regulation, reciprocal regulation, feedback loop, feedforward loop, and autoregulation) can be faithfully described and their temporal dynamics can be analyzed.

  15. Neural network approaches for noisy language modeling.

    Science.gov (United States)

    Li, Jun; Ouazzane, Karim; Kazemian, Hassan B; Afzal, Muhammad Sajid

    2013-11-01

    Text entry from people is not only grammatical and distinct, but also noisy. For example, a user's typing stream contains all the information about the user's interaction with a computer using a QWERTY keyboard, which may include the user's typing mistakes as well as specific vocabulary, typing habits, and typing performance. In particular, these features are obvious in disabled users' typing streams. This paper proposes a new concept called noisy language modeling by further developing information theory and applies neural networks to one of its specific applications: the typing stream. This paper experimentally uses a neural network approach to analyze disabled users' typing streams, both in general and specific ways, to identify their typing behaviors and subsequently to make typing predictions and typing corrections. In this paper, a focused time-delay neural network (FTDNN) language model, a time gap model, a prediction model based on time gap, and a probabilistic neural network (PNN) model are developed. A 38% first hitting rate (HR) and a 53% first-three HR in symbol prediction are obtained based on the analysis of a user's typing history through FTDNN language modeling, while the modeling results using the time gap prediction model and the PNN model demonstrate that the correction rates lie predominantly between 65% and 90% with the current testing samples, and that 70% of all test scores are above basic correction rates, respectively. The modeling process demonstrates that a neural network is a suitable and robust language modeling tool for analyzing the noisy language stream. The research also paves the way for practical application development in areas such as informational analysis, text prediction, and error correction by providing a theoretical basis of neural network approaches for noisy language modeling.

  16. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions, and it can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results of Iris data classification and MNIST handwriting recognition show that far fewer neuron resources are required in QPNN to obtain a good result than in the classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  17. Telestroke network business model strategies.

    Science.gov (United States)

    Fanale, Christopher V; Demaerschalk, Bart M

    2012-10-01

    Our objective is to summarize the evidence that supports the reliability of telemedicine for diagnosis and efficacy in acute stroke treatment, identify strategies for funding the development of a telestroke network, and to present issues with respect to economic sustainability, cost effectiveness, and the status of reimbursement for telestroke. Copyright © 2012 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  18. Modal testing for model validation of structures with discrete nonlinearities

    National Research Council Canada - National Science Library

    Ewins, D J; Weekes, B; delli Carri, A

    2015-01-01

    Model validation using data from modal tests is now widely practiced in many industries for advanced structural dynamic design analysis, especially where structural integrity is a primary requirement...

  19. Complex networks under dynamic repair model

    Science.gov (United States)

    Chaoqi, Fu; Ying, Wang; Kun, Zhao; Yangjun, Gao

    2018-01-01

    Invulnerability is not the only factor of importance when considering complex networks' security. It is also critical to have an effective and reasonable repair strategy. Existing research on network repair is confined to the static model. The dynamic model makes better use of the redundant capacity of repaired nodes and repairs the damaged network more efficiently than the static model; however, the dynamic repair model is complex and polytropic. In this paper, we construct a dynamic repair model and systematically describe the energy-transfer relationships between nodes in the repair process of the failure network. Nodes are divided into three types, corresponding to three structures. We find that the strong coupling structure is responsible for secondary failure of the repaired nodes and propose an algorithm that can select the most suitable targets (nodes or links) to repair the failure network with minimal cost. Two types of repair strategies are identified, with different effects under the two energy-transfer rules. The research results enable a more flexible approach to network repair.

  20. Markov State Models of gene regulatory networks.

    Science.gov (United States)

    Chu, Brian K; Tse, Margaret J; Sato, Royce R; Read, Elizabeth L

    2017-02-06

    Gene regulatory networks with dynamics characterized by multiple stable states underlie cell fate decisions. Quantitative models that can link molecular-level knowledge of gene regulation to a global understanding of network dynamics have the potential to guide cell-reprogramming strategies. Networks are often modeled by the stochastic Chemical Master Equation, but methods for systematic identification of key properties of the global dynamics are currently lacking. We address this gap by constructing Markov State Models of network dynamics. The method identifies the number, phenotypes, and lifetimes of long-lived states for a set of common gene regulatory network models. Application of transition path theory to the constructed Markov State Model decomposes global dynamics into a set of dominant transition paths and associated relative probabilities for stochastic state-switching. In this proof-of-concept study, we found that the Markov State Model provides a general framework for analyzing and visualizing stochastic multistability and state-transitions in gene networks. Our results suggest that this framework, adopted from the field of atomistic Molecular Dynamics, can be a useful tool for quantitative Systems Biology at the network scale.
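    The central operation, extracting long-lived states and their lifetimes from a coarse-grained transition matrix, can be sketched in a few lines of numpy; the three-state matrix and lag time below are invented purely for illustration:

```python
import numpy as np

# Hypothetical 3-state coarse-grained transition matrix for a bistable
# gene network (states: A-high, B-high, transition region); rows sum to 1.
T = np.array([
    [0.98, 0.01, 0.01],
    [0.01, 0.98, 0.01],
    [0.10, 0.10, 0.80],
])

# Stationary distribution: left eigenvector of T for eigenvalue 1.
vals, vecs = np.linalg.eig(T.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

# Implied timescales t_i = -tau / ln(lambda_i) from the non-unit eigenvalues;
# a large gap after the slowest timescale signals metastability.
tau = 1.0  # lag time, in arbitrary units
lams = np.sort(np.real(vals))[::-1]
timescales = -tau / np.log(lams[1:])

print(pi)          # long-run occupation of each state
print(timescales)  # lifetimes of the slow state-switching processes
```

    With this toy matrix, the two phenotype-like states carry almost all of the stationary probability, and the slowest implied timescale far exceeds the next one, which is the signature of metastability the abstract describes.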

  1. Performance modeling, stochastic networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi R

    2013-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks, with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important…

  2. Validation of OMI NO2 Data to Enhance EPA Ground Network Data: An RPC Experiment

    Science.gov (United States)

    Kleb, M. M.; Pippin, M. R.; Parker, P. A.; Rhew, R. D.; Szykman, J. J.; Neil, D. O.

    2007-12-01

    We present an RPC validation study to determine the potential use of OMI tropospheric NO2 column data to enhance spatial surface predictions of NO2 as an augmentation to the continuous NO2 ground network data collected by the State and Local Air Monitoring Stations (SLAMS) and National Air Monitoring Stations (NAMS) for the continental United States. Using one year of OMI and SLAMS/NAMS ground-based data from the EPA's Air Quality System (AQS), NO2 values are compared using a variety of statistical techniques including a time series analysis at each EPA ground station in the continental United States, a site-by-site correlation analysis, site-by-site comparison of mean and standard deviation values, and regional (defined by the ten EPA regions) spatial statistics. In addition, a multivariate statistical prediction model with significance testing is developed to determine within a 95% confidence level the impact of concentration, latitude, region, season, environment (urban vs. rural), and pixel size on the correlation of OMI to EPA NO2 data. The robustness of the statistical model is evaluated using statistical methods. Results of this experiment quantify the ability to use OMI-derived NO2 observations to provide predicted surface concentrations to augment the coverage of the existing NO2 ground networks in regions of sparse or non-existent ground monitors. This predictive capability could facilitate a more capable and integrated observing network for NO2 and lead to more informed air quality management decisions at the local, state, and national level.

  3. Applications of spatial statistical network models to stream data

    Science.gov (United States)

    Isaak, Daniel J.; Peterson, Erin E.; Ver Hoef, Jay M.; Wenger, Seth J.; Falke, Jeffrey A.; Torgersen, Christian E.; Sowder, Colin; Steel, E. Ashley; Fortin, Marie-Josée; Jordan, Chris E.; Ruesch, Aaron S.; Som, Nicholas; Monestiez, Pascal

    2014-01-01

    Streams and rivers host a significant portion of Earth's biodiversity and provide important ecosystem services for human populations. Accurate information regarding the status and trends of stream resources is vital for their effective conservation and management. Most statistical techniques applied to data measured on stream networks were developed for terrestrial applications and are not optimized for streams. A new class of spatial statistical model, based on valid covariance structures for stream networks, can be used with many common types of stream data (e.g., water quality attributes, habitat conditions, biological surveys) through application of appropriate distributions (e.g., Gaussian, binomial, Poisson). The spatial statistical network models account for spatial autocorrelation (i.e., nonindependence) among measurements, which allows their application to databases with clustered measurement locations. Large amounts of stream data exist in many areas where spatial statistical analyses could be used to develop novel insights, improve predictions at unsampled sites, and aid in the design of efficient monitoring strategies at relatively low cost. We review the topic of spatial autocorrelation and its effects on statistical inference, demonstrate the use of spatial statistics with stream datasets relevant to common research and management questions, and discuss additional applications and development potential for spatial statistics on stream networks. Free software for implementing the spatial statistical network models has been developed that enables custom applications with many stream databases.

  4. Modeling acquaintance networks based on balance theory

    Directory of Open Access Journals (Sweden)

    Vukašinović Vida

    2014-09-01

    Full Text Available An acquaintance network is a social structure made up of a set of actors and the ties between them. These ties change dynamically as a consequence of incessant interactions between the actors. In this paper we introduce a social network model called the Interaction-Based (IB) model that involves well-known sociological principles. The connections between the actors and the strength of the connections are influenced by the continuous positive and negative interactions between the actors and, vice versa, the future interactions are more likely to happen between the actors that are connected with stronger ties. The model is also inspired by the social behavior of animal species, particularly that of ants in their colony. A model evaluation showed that the IB model turned out to be sparse. The model has a small diameter and an average path length that grows in proportion to the logarithm of the number of vertices. The clustering coefficient is relatively high, and its value stabilizes in larger networks. The degree distributions are slightly right-skewed. In the mature phase of the IB model, i.e., when the number of edges does not change significantly, most of the network properties do not change significantly either. The IB model was found to be the best of all the compared models in simulating the e-mail URV (University Rovira i Virgili of Tarragona) network because the properties of the IB model more closely matched those of the e-mail URV network than the other models.

  5. Specification and Validation of an Edge Router Discovery Protocol for Mobile Ad Hoc Networks

    DEFF Research Database (Denmark)

    Kristensen, Lars Michael; Jensen, Kurt

    2004-01-01

    …core network in assigning network address prefixes to gateways in mobile ad-hoc networks. This paper focuses on how CP-nets and the CPN computer tools have been applied in the development of ERDP. A CPN model has been constructed that constitutes a formal executable specification of ERDP. Simulation...

  6. Flood routing modelling with Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    R. Peters

    2006-01-01

    Full Text Available For the modelling of the flood routing in the lower reaches of the Freiberger Mulde river and its tributaries, the one-dimensional hydrodynamic modelling system HEC-RAS has been applied. Furthermore, this model was used to generate a database to train multilayer feedforward networks. To guarantee numerical stability for the hydrodynamic modelling of some 60 km of stream course, an adequate resolution in space requires very small calculation time steps, which are some two orders of magnitude smaller than the input data resolution. This leads to quite high computation requirements seriously restricting the application – especially when dealing with real time operations such as online flood forecasting. In order to solve this problem we tested the application of Artificial Neural Networks (ANN). First studies show the ability of adequately trained multilayer feedforward networks (MLFN) to reproduce the model performance.

  7. Optimal transportation networks models and theory

    CERN Document Server

    Bernot, Marc; Morel, Jean-Michel

    2009-01-01

    The transportation problem can be formalized as the problem of finding the optimal way to transport a given measure into another with the same mass. In contrast to the Monge-Kantorovitch problem, recent approaches model the branched structure of such supply networks as minima of an energy functional whose essential feature is to favour wide roads. Such a branched structure is observable in ground transportation networks, in draining and irrigation systems, in electrical power supply systems and in natural counterparts such as blood vessels or the branches of trees. These lectures provide mathematical proof of several existence, structure and regularity properties empirically observed in transportation networks. The link with previous discrete physical models of irrigation and erosion models in geomorphology and with discrete telecommunication and transportation models is discussed. It will be mathematically proven that the majority of these networks fit into the simple model sketched in this volume.
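    The "favour wide roads" feature mentioned above comes from a concave cost on the flux. For a network Γ carrying a flux θ(e) along each edge e, the Gilbert-type branched transport energy takes the form

```latex
E_\alpha(\Gamma) \;=\; \sum_{e \in \Gamma} \theta(e)^{\alpha}\,\operatorname{length}(e),
\qquad 0 < \alpha < 1 ,
```

    and since the map θ ↦ θ^α is subadditive for α < 1 (that is, (θ₁+θ₂)^α ≤ θ₁^α + θ₂^α), merging two flows onto a shared edge is never more expensive than keeping them separate, which is what produces the branched, tree-like minimizers observed in road networks, river basins and blood vessels.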

  8. Use of EARLINET climatology for validation of vertical model profiles

    Science.gov (United States)

    Mortier, Augustin; Schulz, Michael

    2017-04-01

    For over a decade, intensive in-situ, ground-based and spaceborne remote-sensing observations have been dedicated to aerosols, a major component of the Earth's atmosphere. These observations are mostly motivated by the high variability of the particles in space and time and by their effect on the climate at the global scale and on air quality at the regional scale. In the meantime, global and regional models provide aerosol concentrations (as projections, reanalyses or in near real time in chemical weather forecasting) for the calculation of radiative effects and the assessment of air quality, respectively. The vertical distribution of the aerosol is a key parameter since it affects its lifetime and reflects physical processes such as wet and dry deposition or chemical reactions. The aerosols present in the low levels of the troposphere directly affect local air quality, while elevated aerosol layers can be transported over long distances and contribute to pollution in remote regions. The evaluation of the aerosol column and of simulated vertical profiles is thus of particular interest for the performance characterisation of air quality models. The Copernicus Atmosphere Monitoring Service (CAMS) delivers daily near-real-time aerosol products over Europe. In the framework of producing a regional a posteriori validation of the CAMS models, we propose, through this study, a validation exercise for the vertical aerosol profiles. This relies on the ACTRIS European Aerosol Research Lidar Network (EARLINET) measurements because of their quality and the opportunity to derive a climatology from long-term measurements. PM10 profiles are given by the models, while mostly backscatter profiles are available from the EARLINET database. After studying the representativeness of the EARLINET data (2006-2014), we present a comparison with the modeled vertical profiles (7 models and the Ensemble) at the location of the measurement stations for the different seasons of the year 2016. The challenge of comparing the measured…

  9. Intrusion-Aware Alert Validation Algorithm for Cooperative Distributed Intrusion Detection Schemes of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Young-Jae Song

    2009-07-01

    Full Text Available Existing anomaly and intrusion detection schemes of wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, an alert or claim will be generated. However, any unidentified malicious nodes in the network could send faulty anomaly and intrusion claims about the legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in the existing cooperative-based distributed anomaly and intrusion detection schemes of wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability, which helps to provide adequate reliability at a modest communication cost. We also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm.

  10. Intrusion-aware alert validation algorithm for cooperative distributed intrusion detection schemes of wireless sensor networks.

    Science.gov (United States)

    Shaikh, Riaz Ahmed; Jameel, Hassan; d'Auriol, Brian J; Lee, Heejo; Lee, Sungyoung; Song, Young-Jae

    2009-01-01

    Existing anomaly and intrusion detection schemes of wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, an alert or claim will be generated. However, any unidentified malicious nodes in the network could send faulty anomaly and intrusion claims about the legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in the existing cooperative-based distributed anomaly and intrusion detection schemes of wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability, which helps to provide adequate reliability at a modest communication cost. We also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm.

  11. A Transfer Learning Approach for Network Modeling

    Science.gov (United States)

    Huang, Shuai; Li, Jing; Chen, Kewei; Wu, Teresa; Ye, Jieping; Wu, Xia; Yao, Li

    2012-01-01

    Network models have been widely used in many domains to characterize the interacting relationships between physical entities. A typical problem is to identify the networks of multiple related tasks that share some similarities. In this case, a transfer learning approach that can leverage the knowledge gained during the modeling of one task to help better model another task is highly desirable. In this paper, we propose a transfer learning approach that adopts a Bayesian hierarchical model framework to characterize task relatedness and additionally uses L1-regularization to ensure robust learning of the networks with limited sample sizes. A method based on the Expectation-Maximization (EM) algorithm is further developed to learn the networks from data. Simulation studies are performed, which demonstrate the superiority of the proposed transfer learning approach over single-task learning that learns the network of each task in isolation. The proposed approach is also applied to the identification of brain connectivity networks of Alzheimer's disease (AD) from functional magnetic resonance imaging (fMRI) data. The findings are consistent with the AD literature. PMID:24526804

  12. Validation of 2D flood models with insurance claims

    Science.gov (United States)

    Zischg, Andreas Paul; Mosimann, Markus; Bernet, Daniel Benjamin; Röthlisberger, Veronika

    2018-02-01

    Flood impact modelling requires reliable models for the simulation of flood processes. In recent years, flood inundation models have been remarkably improved and widely used for flood hazard simulation and flood exposure and loss analyses. In this study, we validate a 2D inundation model for the purpose of flood exposure analysis at the river reach scale. We validate the BASEMENT simulation model against insurance claims using conventional validation metrics. The flood model is established on the basis of available topographic data at high spatial resolution for four test cases. The validation metrics were calculated with two different datasets: a dataset of event documentations reporting flooded areas and a dataset of insurance claims. In three out of four test cases, the model fit relative to insurance claims is slightly lower than the model fit computed on the basis of the observed inundation areas. This comparison between two independent validation datasets suggests that validation metrics based on insurance claims are comparable to conventional validation data, such as the flooded area. However, a validation on the basis of insurance claims might be more conservative in cases where model errors are more pronounced in areas with a high density of values at risk.
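    Conventional validation metrics for binary inundation maps reduce to a contingency table of hits, misses and false alarms; a minimal sketch follows (the function name and the toy data are ours, not the paper's):

```python
import numpy as np

def contingency_scores(predicted, observed):
    """Binary validation metrics for inundation maps.

    predicted, observed: boolean arrays marking flooded cells
    (or buildings with claims, when validating against insurance data).
    """
    p = np.asarray(predicted, dtype=bool)
    o = np.asarray(observed, dtype=bool)
    hits = np.sum(p & o)           # flooded and predicted flooded
    misses = np.sum(~p & o)        # flooded but missed by the model
    false_alarms = np.sum(p & ~o)  # predicted flooded, stayed dry
    return {
        "hit_rate": hits / (hits + misses),
        "false_alarm_ratio": false_alarms / (hits + false_alarms),
        # Critical success index (threat score): penalizes both error types
        "csi": hits / (hits + misses + false_alarms),
    }

# Toy example: 6 buildings, model flags 4 as flooded, claims exist for 3.
scores = contingency_scores([1, 1, 1, 1, 0, 0], [1, 1, 0, 0, 1, 0])
print(scores)  # hit_rate 2/3, false_alarm_ratio 1/2, csi 2/5
```

    A claims-based validation is "more conservative" in the abstract's sense because claims only mark flooded cells that also contain insured values, so the observed set shrinks and the false-alarm count tends to grow.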

  13. Modelling complex networks by random hierarchical graphs

    Directory of Open Access Journals (Sweden)

    M.Wróbel

    2008-06-01

    Full Text Available Numerous complex networks contain special patterns, called network motifs. These are specific subgraphs that occur more often than in randomized networks of the Erdős-Rényi type. We choose one of them, the triangle, and build a family of random hierarchical graphs: Sierpiński gasket-based graphs with random "decorations". We calculate the important characteristics of these graphs: average degree, average shortest path length, and small-world graph family characteristics. They depend on the probability of decorations. We analyze the Ising model on our graphs and describe its critical properties using a renormalization-group technique.

  14. A Network Model of Credit Risk Contagion

    Directory of Open Access Journals (Sweden)

    Ting-Qiang Chen

    2012-01-01

    Full Text Available A network model of credit risk contagion is presented, in which the behaviors of credit risk holders, the financial market regulators, and the network structure are taken into account. By introducing stochastic dominance theory, we discuss the mechanisms by which the degree of individual relationship, individual attitude to credit risk contagion, individual ability to resist credit risk contagion, the monitoring strength of the financial market regulators, and the network structure each affect credit risk contagion. Several derived and proved propositions are then verified through numerical simulations.

  15. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van 't; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.
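    A minimal sketch of the permutation-test idea, assessing whether a model's discriminative performance exceeds chance, might look as follows; the AUC implementation uses the rank-sum identity, the data are synthetic, and the paper's LASSO model-building and double cross-validation steps are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical data: model-predicted NTCP values and observed complications,
# generated so that outcomes genuinely track the predicted probabilities.
ntcp = rng.uniform(0, 1, 200)
outcome = (rng.uniform(0, 1, 200) < ntcp).astype(int)

observed_auc = auc(ntcp, outcome)

# Permutation test: shuffle outcomes to destroy any real association and ask
# how often chance alone matches or beats the observed performance.
null_aucs = np.array([auc(ntcp, rng.permutation(outcome)) for _ in range(1000)])
p_value = (1 + np.sum(null_aucs >= observed_auc)) / (1 + len(null_aucs))
print(observed_auc, p_value)
```

    The null distribution of the AUC centres near 0.5, so a small p-value indicates that the model's apparent performance is unlikely to be a chance artifact, which is the significance statement the abstract recommends obtaining before clinical use.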

  16. Mechanical modeling of interpenetrating polymer network reinforced acrylic elastomer

    Science.gov (United States)

    Schmidt, Arne; Bergamini, Andrea; Kovacs, Gabor; Mazza, Edoardo

    2010-04-01

    Interpenetrating polymer network reinforced acrylic elastomers (IPN) offer outstanding performance in free-standing contractile dielectric elastomer actuators. This work presents the verification of a recently proposed material model for a VHB 4910 based IPN [1]. The 3D large-strain material model was determined from extensive data from multiaxial mechanical experiments and accounts for variations in the material composition of IPN membranes. We applied inflation tests to membranes of different material compositions to study the material's response in a stress state different from the one used to extract the material parameters. By applying the material model in finite element models, we successfully validated it over the range of material compositions typically used for dielectric elastomer actuator applications. In combination with a characterization of electro-mechanical coupling, this 3D large-strain model can be used to model IPN-based dielectric elastomer actuators.

  17. Deep space network software cost estimation model

    Science.gov (United States)

    Tausworthe, R. C.

    1981-01-01

    A parametric software cost estimation model prepared for Jet Propulsion Laboratory (JPL) Deep Space Network (DSN) Data System implementation tasks is described. The resource estimation model modifies and combines a number of existing models. The model calibrates the task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software life-cycle statistics.

  18. Continuum Modeling of Biological Network Formation

    KAUST Repository

    Albi, Giacomo

    2017-04-10

    We present an overview of recent analytical and numerical results for the elliptic–parabolic system of partial differential equations proposed by Hu and Cai, which models the formation of biological transportation networks. The model describes the pressure field using a Darcy type equation and the dynamics of the conductance network under pressure force effects. Randomness in the material structure is represented by a linear diffusion term and conductance relaxation by an algebraic decay term. We first introduce micro- and mesoscopic models and show how they are connected to the macroscopic PDE system. Then, we provide an overview of analytical results for the PDE model, focusing mainly on the existence of weak and mild solutions and analysis of the steady states. The analytical part is complemented by extensive numerical simulations. We propose a discretization based on finite elements and study the qualitative properties of network structures for various parameter values.

  19. Stochastic modeling and analysis of telecoms networks

    CERN Document Server

    Decreusefond, Laurent

    2012-01-01

    This book addresses the stochastic modeling of telecommunication networks, introducing the main mathematical tools for that purpose, such as Markov processes, real and spatial point processes and stochastic recursions, and presenting a wide range of results on the stability, performance and comparison of systems. The authors propose a comprehensive mathematical construction of the foundations of stochastic network theory: Markov chains and continuous-time Markov chains are extensively studied using an original martingale-based approach. A complete presentation of stochastic recursions from an…

  20. Neural networks as models of psychopathology.

    Science.gov (United States)

    Aakerlund, L; Hemmingsen, R

    1998-04-01

    Neural network modeling is situated between neurobiology, cognitive science, and neuropsychology. The structural and functional resemblance with biological computation has made artificial neural networks (ANN) useful for exploring the relationship between neurobiology and computational performance, i.e., cognition and behavior. This review provides an introduction to the theory of ANN and how they have linked theories from neurobiology and psychopathology in schizophrenia, affective disorders, and dementia.

  1. Decomposed Implicit Models of Piecewise - Linear Networks

    Directory of Open Access Journals (Sweden)

    J. Brzobohaty

    1992-05-01

    Full Text Available The general matrix form of the implicit description of a piecewise-linear (PWL network and the symbolic block diagram of the corresponding circuit model are proposed. Their decomposed forms enable us to determine quite separately the existence of the individual breakpoints of the resultant PWL characteristic and their coordinates using independent network parameters. For the two-diode and three-diode cases all the attainable types of the PWL characteristic are introduced.

  2. Validity maintenance in semantic feature modeling

    NARCIS (Netherlands)

    Bidarra de Almeida, A.R.E.

    1999-01-01

    Feature modeling has the ability to associate functional and engineering information with shape information in a product model. Current feature modeling systems, however, are still rather tied to techniques of conventional geometric modeling systems, offering limited facilities for defining feature…

  3. Prospects and problems for standardizing model validation in systems biology.

    Science.gov (United States)

    Gross, Fridolin; MacLeod, Miles

    2017-10-01

    There are currently no widely shared criteria by which to assess the validity of computational models in systems biology. Here we discuss the feasibility and desirability of implementing validation standards for modeling. Having such a standard would facilitate journal review, interdisciplinary collaboration, model exchange, and be especially relevant for applications close to medical practice. However, even though the production of predictively valid models is considered a central goal, in practice modeling in systems biology employs a variety of model structures and model-building practices. These serve a variety of purposes, many of which are heuristic and do not seem to require strict validation criteria and may even be restricted by them. Moreover, given the current situation in systems biology, implementing a validation standard would face serious technical obstacles mostly due to the quality of available empirical data. We advocate a cautious approach to standardization. However even though rigorous standardization seems premature at this point, raising the issue helps us develop better insights into the practices of systems biology and the technical problems modelers face validating models. Further it allows us to identify certain technical validation issues which hold regardless of modeling context and purpose. Informal guidelines could in fact play a role in the field by helping modelers handle these. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Green Network Planning Model for Optical Backbones

    DEFF Research Database (Denmark)

    Gutierrez Lopez, Jose Manuel; Riaz, M. Tahir; Jensen, Michael

    2010-01-01

    …on the environment in general. In network planning there are existing planning models focused on QoS provisioning, investment minimization, or combinations of these and other parameters. But there is a lack of a model for designing green optical backbones. This paper presents novel ideas to be able to define…

  5. Evaluation of EOR Processes Using Network Models

    DEFF Research Database (Denmark)

    Larsen, Jens Kjell; Krogsbøll, Anette

    1998-01-01

    The report consists of the following parts: 1) Studies of the wetting properties of model fluids and fluid mixtures, aimed at an optimal selection of candidates for micromodel experiments. 2) Experimental studies of multiphase transport properties using physical models of porous networks (micromodels…

  6. Statistically validated mobile communication networks: the evolution of motifs in European and Chinese data

    Science.gov (United States)

    Li, Ming-Xia; Palchykov, Vasyl; Jiang, Zhi-Qiang; Kaski, Kimmo; Kertész, János; Miccichè, Salvatore; Tumminello, Michele; Zhou, Wei-Xing; Mantegna, Rosario N.

    2014-08-01

    Big data open up unprecedented opportunities for investigating complex systems, including society. In particular, communication data serve as major sources for computational social sciences, but they have to be cleaned and filtered as they may contain spurious information due to recording errors as well as interactions, like commercial and marketing activities, not directly related to the social network. The network constructed from communication data can only be considered as a proxy for the network of social relationships. Here we apply a systematic method, based on multiple-hypothesis testing, to statistically validate the links and then construct the corresponding Bonferroni network, generalized to the directed case. We study two large datasets of mobile phone records, one from Europe and the other from China. For both datasets we compare the raw data networks with the corresponding Bonferroni networks and point out significant differences in the structures and in the basic network measures. We show evidence that the Bonferroni network provides a better proxy for the network of social interactions than the original one. Using the filtered networks, we investigated the statistics and temporal evolution of small directed 3-motifs and concluded that closed communication triads have a formation time scale, which is quite fast and typically intraday. We also find that open communication triads preferentially evolve into other open triads with a higher fraction of reciprocated calls. These stylized facts were observed for both datasets.
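    The link-validation step can be sketched with a simplified null model: each of a caller's outgoing calls is assumed to pick its receiver in proportion to the receiver's overall share of incoming calls, and a link is kept only if its observed count is binomially improbable after a Bonferroni correction. The published method uses a more careful null model; this toy version, with invented call counts, only illustrates the filtering logic:

```python
import math

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def validated_links(call_counts, alpha=0.01):
    """Keep directed links whose call count is unlikely under a null model in
    which each of caller a's calls lands on receiver b with probability equal
    to b's share of all received calls. Bonferroni: alpha is divided by the
    number of tested links."""
    total = sum(call_counts.values())
    out_calls, in_calls = {}, {}
    for (a, b), w in call_counts.items():
        out_calls[a] = out_calls.get(a, 0) + w
        in_calls[b] = in_calls.get(b, 0) + w
    threshold = alpha / len(call_counts)  # Bonferroni-corrected level
    kept = set()
    for (a, b), w in call_counts.items():
        p_null = in_calls[b] / total  # chance a random call lands on b
        if binom_sf(w, out_calls[a], p_null) < threshold:
            kept.add((a, b))
    return kept

# Toy record: (caller, receiver) -> number of calls
calls = {("a", "b"): 30, ("a", "c"): 1, ("b", "a"): 25,
         ("c", "d"): 1, ("d", "c"): 2, ("b", "d"): 1}
print(validated_links(calls))  # only the heavy reciprocated a-b link survives
```

    Only the statistically over-expressed links remain in the filtered (Bonferroni) network; sporadic contacts, plausible under random mixing, are discarded, which is why the filtered network is a better proxy for social relationships.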

  7. Mashup Model and Verification Using Mashup Processing Network

    Science.gov (United States)

    Zahoor, Ehtesham; Perrin, Olivier; Godart, Claude

    Mashups are defined to be lightweight Web applications aggregating data from different Web services, built using ad-hoc composition and not concerned with long-term stability and robustness. In this paper we present a pattern-based approach called Mashup Processing Network (MPN). The idea is based on Event Processing Networks and is intended to facilitate the creation, modeling and verification of mashups. MPN provides a view of how different actors interact in mashup development, namely the producer, consumer, mashup processing agent and the communication channels. It also supports modeling transformations and validations of data and offers validation of both functional and non-functional requirements, such as reliable messaging and security, which are key issues within the enterprise context. We have enriched the model with a set of processing operations, categorized into data composition, transformation and validation. These processing operations can be seen as a set of patterns facilitating the mashup development process. MPN also paves the way for realizing a Mashup Oriented Architecture, where mashups along with services are used as building blocks for application development.

  8. Network condition simulator for benchmarking sewer deterioration models.

    Science.gov (United States)

    Scheidegger, A; Hug, T; Rieckermann, J; Maurer, M

    2011-10-15

    An accurate description of aging and deterioration of urban drainage systems is necessary for optimal investment and rehabilitation planning. Due to a general lack of suitable datasets, network condition models are rarely validated, and if so with varying levels of success. We therefore propose a novel network condition simulator (NetCoS) that produces a synthetic population of sewer sections with a given condition-class distribution. NetCoS can be used to benchmark deterioration models and guide utilities in the selection of appropriate models and data management strategies. The underlying probabilistic model considers three main processes: a) deterioration, b) replacement policy, and c) expansions of the sewer network. The deterioration model features a semi-Markov chain that uses transition probabilities based on user-defined survival functions. The replacement policy is approximated with a condition-class dependent probability of replacing a sewer pipe. The model then simulates the course of the sewer sections from the installation of the first line to the present, adding new pipes based on the defined replacement and expansion program. We demonstrate the usefulness of NetCoS in two examples where we quantify the influence of incomplete data and inspection frequency on the parameter estimation of a cohort survival model and a Markov deterioration model. Our results show that typical available sewer inventory data with discarded historical data overestimate the average life expectancy by up to 200 years. Although NetCoS cannot prove the validity of a particular deterioration model, it is useful to reveal its possible limitations and shortcomings and quantifies the effects of missing or uncertain data. Future developments should include additional processes, for example to investigate the long-term effect of pipe rehabilitation measures, such as inliners. Copyright © 2011 Elsevier Ltd. All rights reserved.
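The core NetCoS ideas, a semi-Markov deterioration process driven by user-defined survival functions plus a condition-class-dependent replacement probability, can be sketched as follows. This is an illustrative toy, not the published simulator; the condition scale, Weibull parameters, and replacement probabilities are invented for the example.

```python
import random

random.seed(42)

CLASSES = [1, 2, 3, 4]                                 # 1 = good ... 4 = poor (hypothetical)
WEIBULL = {1: (30, 2.0), 2: (20, 2.0), 3: (10, 1.5)}   # (scale, shape) per class
P_REPLACE = {1: 0.0, 2: 0.005, 3: 0.03, 4: 0.2}        # yearly replacement probability

def sojourn(cls):
    """Years spent in a condition class, drawn from its survival function."""
    scale, shape = WEIBULL[cls]
    return random.weibullvariate(scale, shape)

def simulate_pipe(horizon=100):
    """Condition class of one pipe section after `horizon` years of service."""
    cls, age = 1, 0.0
    t_next = sojourn(1)                       # time of the next deterioration step
    while age < horizon:
        age += 1
        if random.random() < P_REPLACE[cls]:  # replacement policy: pipe reset to new
            cls = 1
            t_next = age + sojourn(1)
        elif cls < 4 and age >= t_next:       # deterioration: drop one class
            cls += 1
            if cls < 4:
                t_next = age + sojourn(cls)
    return cls

population = [simulate_pipe() for _ in range(2000)]
dist = {c: population.count(c) / len(population) for c in CLASSES}
print(dist)
```

A synthetic population like `dist` can then be fed to a candidate deterioration model to check whether the known survival functions are recovered.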

  9. Towards a methodology for validation of centrality measures in complex networks.

    Directory of Open Access Journals (Sweden)

    Komal Batool

    Full Text Available BACKGROUND: Living systems are associated with social networks - networks made up of nodes, some of which may be more important in various aspects than others. While different quantitative measures labeled as "centralities" have previously been used in the network analysis community to find influential nodes in a network, it is debatable how valid the centrality measures actually are. In other words, the research question that remains unanswered is: how exactly do these measures perform in the real world? So, as an example, if the centrality of a particular node identifies it as important, is the node actually important? PURPOSE: The goal of this paper is not just to perform a traditional social network analysis but rather to evaluate different centrality measures by conducting an empirical study analyzing exactly how network centralities correlate with data from published multidisciplinary network data sets. METHOD: We take standard published network data sets while using a random network to establish a baseline. These data sets included the Zachary's Karate Club network, the dolphin social network, and a neural network of the nematode Caenorhabditis elegans. Each of the data sets was analyzed in terms of different centrality measures and compared with existing knowledge from associated published articles to review the role of each centrality measure in the determination of influential nodes. RESULTS: Our empirical analysis demonstrates that in the chosen network data sets, nodes which had a high Closeness Centrality also had a high Eccentricity Centrality. Likewise, a high Degree Centrality correlated closely with a high Eigenvector Centrality, whereas Betweenness Centrality varied according to network topology and did not demonstrate any noticeable pattern. In terms of identification of key nodes, we discovered that, compared with other centrality measures, Eigenvector and Eccentricity Centralities were better able to identify important nodes.

  10. Towards a methodology for validation of centrality measures in complex networks.

    Science.gov (United States)

    Batool, Komal; Niazi, Muaz A

    2014-01-01

    Living systems are associated with social networks - networks made up of nodes, some of which may be more important in various aspects than others. While different quantitative measures labeled as "centralities" have previously been used in the network analysis community to find influential nodes in a network, it is debatable how valid the centrality measures actually are. In other words, the research question that remains unanswered is: how exactly do these measures perform in the real world? So, as an example, if the centrality of a particular node identifies it as important, is the node actually important? The goal of this paper is not just to perform a traditional social network analysis but rather to evaluate different centrality measures by conducting an empirical study analyzing exactly how network centralities correlate with data from published multidisciplinary network data sets. We take standard published network data sets while using a random network to establish a baseline. These data sets included the Zachary's Karate Club network, the dolphin social network, and a neural network of the nematode Caenorhabditis elegans. Each of the data sets was analyzed in terms of different centrality measures and compared with existing knowledge from associated published articles to review the role of each centrality measure in the determination of influential nodes. Our empirical analysis demonstrates that in the chosen network data sets, nodes which had a high Closeness Centrality also had a high Eccentricity Centrality. Likewise, a high Degree Centrality correlated closely with a high Eigenvector Centrality, whereas Betweenness Centrality varied according to network topology and did not demonstrate any noticeable pattern. In terms of identification of key nodes, we discovered that, compared with other centrality measures, Eigenvector and Eccentricity Centralities were better able to identify important nodes.
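The kind of correlation analysis reported here is easy to reproduce on a small graph. The sketch below uses a toy graph and hand-rolled BFS-based measures rather than a network library or the study's actual data sets; on this example, as in the study, closeness and eccentricity centrality track each other closely.

```python
from collections import deque

# Small undirected toy graph as an adjacency list (nodes are arbitrary).
G = {
    0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4],
    4: [3, 5], 5: [4, 6], 6: [5],
}

def bfs_dists(g, s):
    """Shortest-path distances from s via breadth-first search."""
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for v in g[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

n = len(G)
closeness, eccentricity = {}, {}
for u in G:
    d = bfs_dists(G, u)
    closeness[u] = (n - 1) / sum(d.values())
    eccentricity[u] = 1 / max(d.values())     # reciprocal, so higher = more central

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy)

nodes = sorted(G)
r = pearson([closeness[u] for u in nodes], [eccentricity[u] for u in nodes])
print(round(r, 3))
```

On real data sets such as Zachary's Karate Club, the same loop over measures would simply be repeated per centrality and compared against the published ground truth.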

  11. Phenomenological network models: Lessons for epilepsy surgery.

    Science.gov (United States)

    Hebbink, Jurgen; Meijer, Hil; Huiskamp, Geertjan; van Gils, Stephan; Leijten, Frans

    2017-10-01

    The current opinion in epilepsy surgery is that successful surgery is about removing pathological cortex in the anatomic sense. This contrasts with recent developments in epilepsy research, where epilepsy is seen as a network disease. Computational models offer a framework to investigate the influence of networks, as well as local tissue properties, and to explore alternative resection strategies. Here we study, using such a model, the influence of connections on seizures and how this might change our traditional views of epilepsy surgery. We use a simple network model consisting of four interconnected neuronal populations. One of these populations can be made hyperexcitable, modeling a pathological region of cortex. Using model simulations, the effect of surgery on the seizure rate is studied. We find that removal of the hyperexcitable population is, in most cases, not the best approach to reduce the seizure rate. Removal of normal populations located at a crucial spot in the network, the "driver," is typically more effective in reducing seizure rate. This work strengthens the idea that network structure and connections may be more important than localizing the pathological node. This can explain why lesionectomy may not always be sufficient. © 2017 The Authors. Epilepsia published by Wiley Periodicals, Inc. on behalf of International League Against Epilepsy.
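The qualitative point, that removing a well-connected "driver" can reduce seizures more than removing the hyperexcitable region itself, can be illustrated with a much cruder model than the neural-population model used in the paper. The sketch below uses a probabilistic activation cascade on four nodes; the topology, probabilities, and seizure criterion are all invented for illustration.

```python
import random

EDGES = {0: [1], 1: [2, 3], 2: [1], 3: [1]}   # node 1 is the network "driver"
SPONT = {0: 0.10, 1: 0.01, 2: 0.01, 3: 0.01}  # node 0 is hyperexcitable
P_SPREAD = 0.9                                # activation passes along each edge

def one_trial(removed, rng):
    nodes = [n for n in EDGES if n != removed]
    active = {n for n in nodes if rng.random() < SPONT[n]}
    frontier = set(active)
    while frontier:                           # cascade along directed edges
        nxt = set()
        for u in frontier:
            for v in EDGES[u]:
                if v in nodes and v not in active and rng.random() < P_SPREAD:
                    nxt.add(v)
        active |= nxt
        frontier = nxt
    return len(active) >= 3                   # "seizure": most of the network recruited

def seizure_rate(removed=None, trials=5000, seed=1):
    rng = random.Random(seed)
    return sum(one_trial(removed, rng) for _ in range(trials)) / trials

print(seizure_rate(removed=0), seizure_rate(removed=1))
```

In this toy, removing the driver (node 1) disconnects the spread paths, so its seizure rate drops far below the rate obtained by removing the hyperexcitable node 0, mirroring the paper's conclusion.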

  12. Importance of demand modelling in network water quality models: a review

    Directory of Open Access Journals (Sweden)

    J. C. van Dijk

    2008-09-01

    Full Text Available Today, there is a growing interest in network water quality modelling. The water quality issues of interest relate to both dissolved and particulate substances. For dissolved substances the main interest is in residual chlorine and (microbiological contaminant propagation; for particulate substances it is in sediment leading to discolouration. There is a strong influence of flows and velocities on transport, mixing, production and decay of these substances in the network. This imposes a different approach to demand modelling which is reviewed in this article.

    For the large diameter lines that comprise the transport portion of a typical municipal pipe system, a skeletonised network model with a top-down approach of demand pattern allocation, a hydraulic time step of 1 h, and a pure advection-reaction water quality model will usually suffice. For the smaller diameter lines that comprise the distribution portion of a municipal pipe system, an all-pipes network model with a bottom-up approach of demand pattern allocation, a hydraulic time step of 1 min or less, and a water quality model that considers dispersion and transients may be needed.

    Demand models that provide stochastic residential demands per individual home and on a one-second time scale are available. A stochastic demands based network water quality model needs to be developed and validated with field measurements. Such a model will be probabilistic in nature and will offer a new perspective for assessing water quality in the drinking water distribution system.
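A stochastic residential demand model of the kind mentioned above is often built from Poisson rectangular pulses: tap openings arrive at random, each with a random duration and intensity. A minimal per-second sketch for one home; all rates and ranges are illustrative, not calibrated values.

```python
import random

def demand_series(seconds=3600, rate_per_hr=20, seed=7):
    """Per-second demand (L/s) from a Poisson rectangular pulse model.

    Pulses (tap openings) arrive as a Poisson process; each pulse has a
    random duration and intensity, and overlapping pulses add up.
    """
    rng = random.Random(seed)
    q = [0.0] * seconds
    t = rng.expovariate(rate_per_hr / 3600)
    while t < seconds:
        duration = rng.randint(10, 120)        # pulse length in seconds
        intensity = rng.uniform(0.05, 0.2)     # pulse flow in L/s
        for s in range(int(t), min(seconds, int(t) + duration)):
            q[s] += intensity
        t += rng.expovariate(rate_per_hr / 3600)
    return q

q = demand_series()
print(len(q), round(sum(q) / len(q), 4))       # series length and mean flow
```

Summing such series over all homes on a pipe yields the second-scale nodal demands that a dispersion-aware water quality model would consume.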

  13. Requirements Validation: Execution of UML Models with CPN Tools

    DEFF Research Database (Denmark)

    Machado, Ricardo J.; Lassen, Kristian Bisgaard; Oliveira, Sérgio

    2007-01-01

    with simple unified modelling language (UML) requirements models, it is not easy for the development team to get confidence on the stakeholders' requirements validation. This paper describes an approach, based on the construction of executable interactive prototypes, to support the validation of workflow...

  14. Validation of mentorship model for newly qualified professional ...

    African Journals Online (AJOL)

    Newly qualified professional nurses (NQPNs) allocated to community health care services require the use of validated model to practice independently. Validation was done to adapt and assess if the model is understood and could be implemented by NQPNs and mentors employed in community health care services.

  15. Models of network reliability analysis, combinatorics, and Monte Carlo

    CERN Document Server

    Gertsbakh, Ilya B

    2009-01-01

    Unique in its approach, Models of Network Reliability: Analysis, Combinatorics, and Monte Carlo provides a brief introduction to Monte Carlo methods along with a concise exposition of reliability theory ideas. From there, the text investigates a collection of principal network reliability models, such as terminal connectivity for networks with unreliable edges and/or nodes, network lifetime distribution in the process of its destruction, network stationary behavior for renewable components, importance measures of network elements, reliability gradient, and network optimal reliability synthesis

  16. A comprehensive Network Security Risk Model for process control networks.

    Science.gov (United States)

    Henry, Matthew H; Haimes, Yacov Y

    2009-02-01

    The risk of cyber attacks on process control networks (PCN) is receiving significant attention due to the potentially catastrophic extent to which PCN failures can damage the infrastructures and commodity flows that they support. Risk management addresses the coupled problems of (1) reducing the likelihood that cyber attacks would succeed in disrupting PCN operation and (2) reducing the severity of consequences in the event of PCN failure or manipulation. The Network Security Risk Model (NSRM) developed in this article provides a means of evaluating the efficacy of candidate risk management policies by modeling the baseline risk and assessing expectations of risk after the implementation of candidate measures. Where existing risk models fall short of providing adequate insight into the efficacy of candidate risk management policies due to shortcomings in their structure or formulation, the NSRM provides model structure and an associated modeling methodology that captures the relevant dynamics of cyber attacks on PCN for risk analysis. This article develops the NSRM in detail in the context of an illustrative example.

  17. Bridging groundwater models and decision support with a Bayesian network

    Science.gov (United States)

    Fienen, Michael N.; Masterson, John P.; Plant, Nathaniel G.; Gutierrez, Benjamin T.; Thieler, E. Robert

    2013-01-01

    Resource managers need to make decisions to plan for future environmental conditions, particularly sea level rise, in the face of substantial uncertainty. Many interacting processes factor in to the decisions they face. Advances in process models and the quantification of uncertainty have made models a valuable tool for this purpose. Long-simulation runtimes and, often, numerical instability make linking process models impractical in many cases. A method for emulating the important connections between model input and forecasts, while propagating uncertainty, has the potential to provide a bridge between complicated numerical process models and the efficiency and stability needed for decision making. We explore this using a Bayesian network (BN) to emulate a groundwater flow model. We expand on previous approaches to validating a BN by calculating forecasting skill using cross validation of a groundwater model of Assateague Island in Virginia and Maryland, USA. This BN emulation was shown to capture the important groundwater-flow characteristics and uncertainty of the groundwater system because of its connection to island morphology and sea level. Forecast power metrics associated with the validation of multiple alternative BN designs guided the selection of an optimal level of BN complexity. Assateague island is an ideal test case for exploring a forecasting tool based on current conditions because the unique hydrogeomorphological variability of the island includes a range of settings indicative of past, current, and future conditions. The resulting BN is a valuable tool for exploring the response of groundwater conditions to sea level rise in decision support.

  18. Personalized Learning Network Teaching Model

    Science.gov (United States)

    Feng, Zhou

    Starting from the salient features of adaptive learning systems, this paper argues that personalization is the key to making such systems adapt to individual learners. From the perspective of design theory, it puts forward an individualized design model for adaptive learning systems and, using data mining techniques, establishes an initial model of a personalized adaptive learning system.

  19. The Development and Validation of the Social Networking Experiences Questionnaire: A Measure of Adolescent Cyberbullying and Its Impact.

    Science.gov (United States)

    Dredge, Rebecca; Gleeson, John; Garcia, Xochitl de la Piedad

    2015-01-01

    The measurement of cyberbullying has been marked by several inconsistencies that lead to difficulties in cross-study comparisons of the frequency of occurrence and the impact of cyberbullying. Consequently, the first aim of this study was to develop a measure of experience with and impact of cyberbullying victimization in social networking sites in adolescents. The second aim was to investigate the psychometric properties of a purpose-built measure (Social Networking Experiences Questionnaire [SNEQ]). Exploratory factor analysis on 253 adolescent social networking sites users produced a six-factor model of impact. However, one factor was removed because of low internal consistency. Cronbach's alpha was higher than .76 for the victimization and remaining five impact subscales. Furthermore, correlation coefficients for the Victimization scale and related dimensions showed good construct validity. The utility of the SNEQ for victim support personnel, research, and cyberbullying education/prevention programs is discussed.

  20. Models for Validation of Prior Learning (VPL)

    DEFF Research Database (Denmark)

    Ehlers, Søren

    would have been categorized as utopian can become realpolitik. Validation of Prior Learning (VPL) was in Europe mainly regarded as utopian, while universities in the United States of America (USA) were developing ways to award credit to students who came with experience from working life.

  1. A Unified Access Model for Interconnecting Heterogeneous Wireless Networks

    Science.gov (United States)

    2015-05-01

    validation of the proposed network design for unified network access, and it lays the foundation for implementing a Software-Defined Networking (SDN) ...testing real-world applications. Most importantly, our simulation serves as a template for implementing a unified MAC layer network using SDN. SDN ...is a network program with a programmable, centralized control plane. SDN protocols can be used to mediate access between nodes of an HN. The method

  2. Model Microvascular Networks Can Have Many Equilibria.

    Science.gov (United States)

    Karst, Nathaniel J; Geddes, John B; Carr, Russell T

    2017-03-01

    We show that large microvascular networks with realistic topologies, geometries, boundary conditions, and constitutive laws can exhibit many steady-state flow configurations. This is in direct contrast to most previous studies which have assumed, implicitly or explicitly, that a given network can only possess one equilibrium state. While our techniques are general and can be applied to any network, we focus on two distinct network types that model human tissues: perturbed honeycomb networks and random networks generated from Voronoi diagrams. We demonstrate that the disparity between observed and predicted flow directions reported in previous studies might be attributable to the presence of multiple equilibria. We show that the pathway effect, in which hematocrit is steadily increased along a series of diverging junctions, has important implications for equilibrium discovery, and that our estimates of the number of equilibria supported by these networks are conservative. If a more complete description of the plasma skimming effect that captures red blood cell allocation at junctions with high feed hematocrit were to be obtained empirically, then the number of equilibria found by our approach would at worst remain the same and would in all likelihood increase significantly.
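The dependence of the attained equilibrium on the initial state can be illustrated with a scalar toy, far simpler than a microvascular network: a nonlinear fixed-point map with more than one stable equilibrium, reached by iterating from different starting values. Purely illustrative; the map below is not the paper's constitutive laws.

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=10000):
    """Iterate x <- f(x) until successive values agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy nonlinear balance law: any x with x = tanh(2x) is an equilibrium.
f = lambda x: math.tanh(2 * x)

eq_pos = fixed_point(f, 1.0)    # start from one configuration ...
eq_neg = fixed_point(f, -1.0)   # ... and from another
print(round(eq_pos, 4), round(eq_neg, 4))
```

Two distinct initial conditions converge to two distinct steady states of the same system, which is the phenomenon the paper documents at network scale, where junction-level hematocrit rules multiply the possibilities.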

  3. Validation of the Social Networking Activity Intensity Scale among Junior Middle School Students in China

    Science.gov (United States)

    Li, Jibin; Lau, Joseph T. F.; Mo, Phoenix K. H.; Su, Xuefen; Wu, Anise M. S.; Tang, Jie; Qin, Zuguo

    2016-01-01

    Background Online social networking use has been integrated into adolescents’ daily life and the intensity of online social networking use may have important consequences for adolescents’ well-being. However, there are few validated instruments to measure social networking use intensity. The present study aims to develop the Social Networking Activity Intensity Scale (SNAIS) and validate it among junior middle school students in China. Methods A total of 910 students who were social networking users were recruited from two junior middle schools in Guangzhou, and 114 students were retested after two weeks to examine the test-retest reliability. The psychometrics of the SNAIS were estimated using appropriate statistical methods. Results Two factors, Social Function Use Intensity (SFUI) and Entertainment Function Use Intensity (EFUI), were clearly identified by both exploratory and confirmatory factor analyses. No ceiling or floor effects were observed for the SNAIS and its two subscales. The SNAIS and its two subscales exhibited acceptable reliability (Cronbach’s alpha = 0.89, 0.90, and 0.60, and test-retest intra-class correlation coefficient = 0.85, 0.87, and 0.67 for the overall scale, the SFUI subscale, and the EFUI subscale, respectively). Scores also correlated significantly with social networking addiction, Internet addiction, and other characteristics related to social networking use. Conclusions The SNAIS is an easily self-administered scale with good psychometric properties. It would facilitate more research in this field worldwide and specifically in the Chinese population. PMID:27798699

  4. Validation of the Social Networking Activity Intensity Scale among Junior Middle School Students in China.

    Science.gov (United States)

    Li, Jibin; Lau, Joseph T F; Mo, Phoenix K H; Su, Xuefen; Wu, Anise M S; Tang, Jie; Qin, Zuguo

    2016-01-01

    Online social networking use has been integrated into adolescents' daily life and the intensity of online social networking use may have important consequences for adolescents' well-being. However, there are few validated instruments to measure social networking use intensity. The present study aims to develop the Social Networking Activity Intensity Scale (SNAIS) and validate it among junior middle school students in China. A total of 910 students who were social networking users were recruited from two junior middle schools in Guangzhou, and 114 students were retested after two weeks to examine the test-retest reliability. The psychometrics of the SNAIS were estimated using appropriate statistical methods. Two factors, Social Function Use Intensity (SFUI) and Entertainment Function Use Intensity (EFUI), were clearly identified by both exploratory and confirmatory factor analyses. No ceiling or floor effects were observed for the SNAIS and its two subscales. The SNAIS and its two subscales exhibited acceptable reliability (Cronbach's alpha = 0.89, 0.90, and 0.60, and test-retest intra-class correlation coefficient = 0.85, 0.87, and 0.67 for the overall scale, the SFUI subscale, and the EFUI subscale, respectively). Scores also correlated significantly with social networking addiction, Internet addiction, and other characteristics related to social networking use. The SNAIS is an easily self-administered scale with good psychometric properties. It would facilitate more research in this field worldwide and specifically in the Chinese population.
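Cronbach's alpha, the internal-consistency statistic reported above, is straightforward to compute from raw item scores: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A self-contained sketch with hypothetical Likert responses, not the study's data:

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    """rows: one list of item scores per respondent (same item order)."""
    k = len(rows[0])
    item_vars = [variance([r[i] for r in rows]) for i in range(k)]
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical responses of five respondents to a 4-item subscale (1-5 Likert):
rows = [[4, 5, 4, 4], [2, 2, 3, 2], [5, 5, 5, 4], [3, 3, 2, 3], [1, 2, 1, 2]]
print(round(cronbach_alpha(rows), 2))   # -> 0.96
```

Values near the 0.89-0.90 reported for the SNAIS and SFUI indicate items that rise and fall together; the lower 0.60 of the EFUI subscale reflects weaker inter-item agreement.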

  5. PREDIKSI FOREX MENGGUNAKAN MODEL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2015-11-01

    Full Text Available ABSTRACT: Prediction is one of the most important techniques in the forex business. Prediction decisions matter greatly, because forecasting the value of forex at some future time can reduce the risk of loss. The aim of this research is to predict the forex market using a neural network model on one-minute time-series data, in order to determine the prediction accuracy and thereby reduce the risk of running a forex business. The research method consists of data collection followed by training, learning, and testing with a neural network. After evaluation, the results show that the neural network algorithm is able to predict forex with a prediction accuracy of 0.431 +/- 0.096, which can help reduce the risk of running a forex business. Keywords: prediction, forex, neural network.

  6. Artificial neural network cardiopulmonary modeling and diagnosis

    Science.gov (United States)

    Kangas, Lars J.; Keller, Paul E.

    1997-01-01

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis.

  7. Impacts of Sample Design for Validation Data on the Accuracy of Feedforward Neural Network Classification

    Directory of Open Access Journals (Sweden)

    Giles M. Foody

    2017-08-01

    Full Text Available Validation data are often used to evaluate the performance of a trained neural network and used in the selection of a network deemed optimal for the task at-hand. Optimality is commonly assessed with a measure, such as overall classification accuracy. The latter is often calculated directly from a confusion matrix showing the counts of cases in the validation set with particular labelling properties. The sample design used to form the validation set can, however, influence the estimated magnitude of the accuracy. Commonly, the validation set is formed with a stratified sample to give balanced classes, but also via random sampling, which reflects class abundance. It is suggested that if the ultimate aim is to accurately classify a dataset in which the classes do vary in abundance, a validation set formed via random, rather than stratified, sampling is preferred. This is illustrated with the classification of simulated and remotely-sensed datasets. With both datasets, statistically significant differences in the accuracy with which the data could be classified arose from the use of validation sets formed via random and stratified sampling (z = 2.7 and 1.9 for the simulated and real datasets respectively, for both p < 0.05%. The accuracy of the classifications that used a stratified sample in validation were smaller, a result of cases of an abundant class being commissioned into a rarer class. Simple means to address the issue are suggested.
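The direction of the bias can be seen with a back-of-the-envelope calculation. Assume a trained classifier that is more accurate on the abundant class than on the rare one (all numbers hypothetical): a random validation sample mirrors class prevalence and so estimates population accuracy without bias, while a class-balanced stratified sample weights the rare, harder class too heavily.

```python
# Hypothetical per-class accuracies of a trained classifier, and the
# class prevalences in the population to be mapped.
acc = {"abundant": 0.95, "rare": 0.60}
prevalence = {"abundant": 0.90, "rare": 0.10}

# Accuracy over the full population (the quantity of actual interest):
true_acc = sum(acc[c] * prevalence[c] for c in acc)

# Expected estimate from a random validation sample (mirrors prevalence):
random_est = true_acc

# Expected estimate from a stratified, class-balanced validation sample:
stratified_est = sum(acc.values()) / len(acc)

print(round(true_acc, 3), round(stratified_est, 3))   # -> 0.915 0.775
```

The stratified estimate (0.775) understates the population accuracy (0.915), consistent with the article's finding that the validation sample design materially changes the estimated accuracy and hence which network is deemed optimal.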

  8. Neural-networks-based feedback linearization versus model predictive control of continuous alcoholic fermentation process

    Energy Technology Data Exchange (ETDEWEB)

    Mjalli, F.S.; Al-Asheh, S. [Chemical Engineering Department, Qatar University, Doha (Qatar)

    2005-10-01

    In this work advanced nonlinear neural networks based control system design algorithms are adopted to control a mechanistic model for an ethanol fermentation process. The process model equations for such systems are highly nonlinear. A neural network strategy has been implemented in this work for capturing the dynamics of the mechanistic model for the fermentation process. The neural network achieved has been validated against the mechanistic model. Two neural network based nonlinear control strategies have also been adopted using the model identified. The performance of the feedback linearization technique was compared to neural network model predictive control in terms of stability and set point tracking capabilities. Under servo conditions, the feedback linearization algorithm gave comparable tracking and stability. The feedback linearization controller achieved the control target faster than the model predictive one but with vigorous and sudden controller moves. (Abstract Copyright [2005], Wiley Periodicals, Inc.)

  9. Social networking addiction, attachment style, and validation of the Italian version of the Bergen Social Media Addiction Scale.

    Science.gov (United States)

    Monacis, Lucia; de Palo, Valeria; Griffiths, Mark D; Sinatra, Maria

    2017-06-01

    Aim Research into social networking addiction has greatly increased over the last decade. However, the number of validated instruments assessing addiction to social networking sites (SNSs) remains few, and none have been validated in the Italian language. Consequently, this study tested the psychometric properties of the Italian version of the Bergen Social Media Addiction Scale (BSMAS), as well as providing empirical data concerning the relationship between attachment styles and SNS addiction. Methods A total of 769 participants were recruited to this study. Confirmatory factor analysis (CFA) and multigroup analyses were applied to assess construct validity of the Italian version of the BSMAS. Reliability analyses comprised the average variance extracted, the standard error of measurement, and the factor determinacy coefficient. Results Indices obtained from the CFA showed the Italian version of the BSMAS to have an excellent fit of the model to the data, thus confirming the single-factor structure of the instrument. Measurement invariance was established at configural, metric, and strict invariances across age groups, and at configural and metric levels across gender groups. Internal consistency was supported by several indicators. In addition, the theoretical associations between SNS addiction and attachment styles were generally supported. Conclusion This study provides evidence that the Italian version of the BSMAS is a psychometrically robust tool that can be used in future Italian research into social networking addiction.

  10. Spiking modular neural networks: A neural network modeling approach for hydrological processes

    National Research Council Canada - National Science Library

    Kamban Parasuraman; Amin Elshorbagy; Sean K. Carey

    2006-01-01

    .... In this study, a novel neural network model called the spiking modular neural networks (SMNNs) is proposed. An SMNN consists of an input layer, a spiking layer, and an associator neural network layer...

  11. PROJECT ACTIVITY ANALYSIS WITHOUT THE NETWORK MODEL

    Directory of Open Access Journals (Sweden)

    S. Munapo

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: This paper presents a new procedure for analysing and managing activity sequences in projects. The new procedure determines critical activities, critical path, start times, free floats, crash limits, and other useful information without the use of the network model. Even though network models have been successfully used in project management so far, there are weaknesses associated with the use. A network is not easy to generate, and dummies that are usually associated with it make the network diagram complex – and dummy activities have no meaning in the original project management problem. The network model for projects can be avoided while still obtaining all the useful information that is required for project management. What are required are the activities, their accurate durations, and their predecessors.

    AFRIKAANS SUMMARY: The research describes a new method for analysing and managing the sequential activities of projects. The proposed method determines critical activities, the critical path, start times, floats, crashing limits, and other quantities without the use of a network model. The method performs satisfactorily in practice, and avoids the administrative burden of the traditional network models.
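The claim that critical activities, the critical path, start times, and floats can be obtained directly from activities, durations, and predecessors, with no network diagram and no dummy activities, is easy to demonstrate. A generic critical-path calculation (a standard CPM sweep, not the authors' specific procedure):

```python
def cpm(activities):
    """Critical-path analysis straight from activities, durations, and
    predecessors - no network diagram, no dummy activities.

    activities: dict name -> (duration, [predecessor names]).
    Returns earliest starts, total floats, and the critical activities.
    """
    order, done = [], set()
    while len(order) < len(activities):          # topological ordering
        for a, (_, preds) in activities.items():
            if a not in done and all(p in done for p in preds):
                order.append(a)
                done.add(a)
    es = {}                                      # earliest start times
    for a in order:
        _, preds = activities[a]
        es[a] = max((es[p] + activities[p][0] for p in preds), default=0)
    finish = max(es[a] + activities[a][0] for a in activities)
    ls = {}                                      # latest start times
    for a in reversed(order):
        succs = [b for b, (_, ps) in activities.items() if a in ps]
        lf = min((ls[b] for b in succs), default=finish)
        ls[a] = lf - activities[a][0]
    floats = {a: ls[a] - es[a] for a in activities}
    critical = [a for a in order if floats[a] == 0]
    return es, floats, critical

acts = {"A": (3, []), "B": (2, []), "C": (4, ["A"]), "D": (1, ["A", "B"]),
        "E": (2, ["C", "D"])}
es, fl, crit = cpm(acts)
print(crit)   # -> ['A', 'C', 'E']
```

As the abstract states, the only inputs required are the activities, their durations, and their predecessors.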

  12. Algebraic Statistics for Network Models

    Science.gov (United States)

    2014-02-19

    use algebra, combinatorics, and Markov bases to give a constructive way of answering this question for ERGMs of interest. Question 2: How do we model...for every function. 06/06/13 Petrović. Manuscripts 8, 10. Invited lecture at the Scientific Session on Commutative Algebra and Combinatorics at the

  13. Network Modeling and Simulation (NEMSE)

    Science.gov (United States)

    2013-07-01

    Prioritized Packet Fragmentation", IEEE Trans. Multimedia, Oct. 2012. [13 SYSENG]. Defense Acquisition Guidebook, Chapter 4 System Engineering, and...2012 IEEE High Performance Extreme Computing Conference (HPEC) poster session [1 Ross]. Motivation: Air Force Research Lab needs: capability...is virtual. These eight virtualizations were: System-in-the-Loop (SITL) using OPNET Modeler, COPE, Field Programmable Gate Array (FPGA) Physical

  14. Security Modeling on the Supply Chain Networks

    Directory of Open Access Journals (Sweden)

    Marn-Ling Shing

    2007-10-01

    Full Text Available In order to keep the price down, a purchaser sends out a request for quotation to a group of suppliers in a supply chain network. The purchaser will then choose the supplier with the best combination of price and quality. A potential supplier will try to collect related information about other suppliers so that he/she can offer the best bid to the purchaser. Therefore, confidentiality becomes an important consideration in the design of a supply chain network. Chen et al. have proposed the application of the Bell-LaPadula model in the design of a secured supply chain network. In the Bell-LaPadula model, a subject can hold one of several security clearances and an object can hold one of several security classifications. All the possible combinations of (Security Clearance, Classification) pairs in the Bell-LaPadula model can be thought of as different states in a Markov chain model. This paper extends the work done by Chen et al., provides more details on the Markov chain model, and illustrates how to use it to monitor security state transitions in the supply chain network.
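    The monitoring idea can be illustrated in a few lines of code. The states and transition probabilities below are invented for illustration and are not taken from Chen et al.; each (clearance, classification) pair is a Markov state and the state distribution is iterated forward:

```python
# Hypothetical (clearance, classification) states and transition matrix.
states = ["(secret, secret)", "(secret, public)", "(public, public)"]
P = [  # P[i][j]: probability of moving from state i to state j (illustrative)
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
    [0.0, 0.3, 0.7],
]

def step(dist, P):
    """One Markov transition: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]   # start fully in the first security state
for _ in range(50):      # iterate toward the stationary distribution
    dist = step(dist, P)
```

    Monitoring then amounts to comparing the observed state sequence against what this chain predicts.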

  15. An evolving model of online bipartite networks

    Science.gov (United States)

    Zhang, Chu-Xu; Zhang, Zi-Ke; Liu, Chuang

    2013-12-01

    Understanding the structure and evolution of online bipartite networks is a significant task, since they play a crucial role in various e-commerce services nowadays. Recently, various models have been proposed, resulting in either power-law or exponential degree distributions. However, many empirical results show that the user degree distribution actually follows a shifted power-law distribution, the so-called Mandelbrot’s law, which cannot be fully described by previous models. In this paper, we propose an evolving model that considers two different user behaviors: random and preferential attachment. Extensive empirical results on two real bipartite networks, Delicious and CiteULike, show that the theoretical model can well characterize the structure of real networks for both user and object degree distributions. In addition, we introduce a structural parameter p to demonstrate that the hybrid user behavior leads to the shifted power-law degree distribution, and that the region of the power-law tail increases with p. The proposed model might shed some light on the underlying laws governing the structure of real online bipartite networks.
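    A rough simulation of the hybrid attachment rule reads as follows. This is our own sketch, not the authors' code: the parameter names and the reduction to a single degree sequence are assumptions, with p mixing preferential and uniform random attachment as in the abstract:

```python
import random

def grow(n_users, edges_per_step, p, seed=0):
    """Grow a degree sequence: with prob. p attach preferentially by degree,
    otherwise attach to a uniformly random existing user (illustrative)."""
    rng = random.Random(seed)
    degree = [1]                      # start with one user of degree 1
    for _ in range(n_users - 1):
        degree.append(1)              # new user arrives with one object link
        for _ in range(edges_per_step):
            if rng.random() < p:      # preferential attachment
                total = sum(degree)
                r = rng.uniform(0, total)
                acc, i = 0.0, 0
                while acc + degree[i] < r and i < len(degree) - 1:
                    acc += degree[i]
                    i += 1
            else:                     # uniform random attachment
                i = rng.randrange(len(degree))
            degree[i] += 1
    return degree
```

    Plotting the resulting degree histogram for large n and varying p would show the interpolation between exponential-like and power-law-like tails that the paper formalizes.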

  16. Malware Propagation and Prevention Model for Time-Varying Community Networks within Software Defined Networks

    OpenAIRE

    Lan Liu; Ryan K. L. Ko; Guangming Ren; Xiaoping Xu

    2017-01-01

    As the adoption of Software Defined Networks (SDNs) grows, the security of SDN still has several unaddressed limitations. A key network security research area is in the study of malware propagation across the SDN-enabled networks. To analyze the spreading processes of network malware (e.g., viruses) in SDN, we propose a dynamic model with a time-varying community network, inspired by research models on the spread of epidemics in complex networks across communities. We assume subnets of the ne...

  17. Development of Artificial Neural Network Model of Crude Oil Distillation Column

    Directory of Open Access Journals (Sweden)

    Ali Hussein Khalaf

    2016-02-01

    Full Text Available Artificial neural network in MATLAB simulator is used to model the Baiji crude oil distillation unit based on data generated from the Aspen HYSYS simulator. Thirteen inputs, six outputs and over 1487 data sets are used to model the actual unit. A nonlinear autoregressive network with exogenous inputs (NARX) and the back-propagation algorithm are used for training. Seventy percent of the data are used for training the network, while the remaining thirty percent are used for testing and validating the network to determine its prediction accuracy. One hidden layer and 34 hidden neurons are used for the proposed network, with an MSE of 0.25 obtained. The number of neurons is selected on the basis of the lowest MSE for the network. The model was found to predict the optimal operating conditions for different objective functions within the training limits, since ANN models are poor extrapolators: they are usually only reliable within the range of data on which they have been trained.
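    The reported topology (13 inputs, one hidden layer of 34 neurons, 6 outputs) can be sketched as a plain feedforward pass. The weights below are random stand-ins, not a trained model, and a true NARX network additionally feeds delayed outputs back as inputs, which is omitted here:

```python
import math, random

n_in, n_hid, n_out = 13, 34, 6            # topology quoted in the abstract
rng = random.Random(1)
W1 = [[rng.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_hid)]
W2 = [[rng.gauss(0, 0.1) for _ in range(n_hid)] for _ in range(n_out)]

def forward(x):
    """One feedforward pass: tanh hidden layer, linear output layer."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return [sum(w * hi for w, hi in zip(row, h)) for row in W2]

def mse(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

# 70/30 split of the 1487 records, mirroring the abstract's protocol
cut = int(0.7 * 1487)
```

    Training would then minimize `mse` over the first 70 % of records and report it on the held-out 30 %.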

  18. Development of Artificial Neural Network Model of Crude Oil Distillation Column

    Directory of Open Access Journals (Sweden)

    Duraid F. Ahmed

    2016-02-01

    Full Text Available Artificial neural network in MATLAB simulator is used to model the Baiji crude oil distillation unit based on data generated from the Aspen HYSYS simulator. Thirteen inputs, six outputs and over 1487 data sets are used to model the actual unit. A nonlinear autoregressive network with exogenous inputs (NARX) and the back-propagation algorithm are used for training. Seventy percent of the data are used for training the network, while the remaining thirty percent are used for testing and validating the network to determine its prediction accuracy. One hidden layer and 34 hidden neurons are used for the proposed network, with an MSE of 0.25 obtained. The number of neurons is selected on the basis of the lowest MSE for the network. The model was found to predict the optimal operating conditions for different objective functions within the training limits, since ANN models are poor extrapolators: they are usually only reliable within the range of data on which they have been trained.

  19. Validity-Guided Fuzzy Clustering Evaluation for Neural Network-Based Time-Frequency Reassignment

    Directory of Open Access Journals (Sweden)

    Ahmad Khan Adnan

    2010-01-01

    Full Text Available Abstract This paper describes validity-guided fuzzy clustering evaluation for optimal training of localized neural networks (LNNs) used for reassigning time-frequency representations (TFRs). Our experiments show that the validity-guided fuzzy approach alleviates the difficulty of choosing the correct number of clusters and, in conjunction with a neural network-based processing technique utilizing a hybrid approach, can effectively reduce the blur in the spectrograms. In every partitioning problem the number of subsets must be specified before the calculation, but it is rarely known a priori; in that case it must also be determined using validity measures. Experimental results demonstrate the effectiveness of the approach.

  20. An autocatalytic network model for stock markets

    Science.gov (United States)

    Caetano, Marco Antonio Leonel; Yoneyama, Takashi

    2015-02-01

    The stock prices of companies with closely related businesses within a specific sector of the economy might exhibit movement patterns and correlations in their dynamics. The idea in this work is to use the concept of an autocatalytic network to model such correlations and patterns in the trends exhibited by the expected returns. The trends are expressed in terms of positive or negative returns within each fixed time interval. The time series derived from these trends is then used to represent the movement patterns by a probabilistic Boolean network with transitions modeled as an autocatalytic network. The proposed method might be of value in short-term forecasting and identification of dependencies. The method is illustrated with a case study based on four stocks of companies in the fields of natural resources and technology.
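    The trend-encoding step can be sketched as follows. This is our illustration, not the authors' code: returns are mapped to up/down symbols per interval, and empirical transition probabilities between joint states of several stocks, the raw material for a probabilistic Boolean network, are estimated:

```python
from collections import Counter, defaultdict

def trends(prices):
    """Map a price series to 1 (positive return) / 0 (non-positive) symbols."""
    return tuple(1 if b > a else 0 for a, b in zip(prices, prices[1:]))

def transition_probs(series):
    """series: one trend tuple per stock, all of the same length."""
    states = list(zip(*series))            # joint market state per interval
    counts = defaultdict(Counter)
    for s, t in zip(states, states[1:]):
        counts[s][t] += 1
    return {s: {t: c / sum(cnt.values()) for t, c in cnt.items()}
            for s, cnt in counts.items()}
```

    The estimated table gives, for each observed joint state, the empirical distribution of the next joint state.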

  1. Artificial neural network modeling of p-cresol photodegradation.

    Science.gov (United States)

    Abdollahi, Yadollah; Zakaria, Azmi; Abbasiyannejad, Mina; Masoumi, Hamid Reza Fard; Moghaddam, Mansour Ghaffari; Matori, Khamirul Amin; Jahangirian, Hossein; Keshavarzi, Ashkan

    2013-06-03

    The complexity of the reactions and kinetics is a central problem in photodegradation processes. Recently, artificial neural networks have been widely used to address such problems because of their reliable, robust, and salient ability to capture the non-linear relationships between variables in complex systems. In this study, an artificial neural network was applied to model p-cresol photodegradation. To optimize the network, the independent variables, including irradiation time, pH, photocatalyst amount and concentration of p-cresol, were used as the input parameters, while the photodegradation% was selected as the output. The photodegradation% was obtained from the performance of the experimental design of the variables under UV irradiation. The network was trained by Quick propagation (QP) and three other algorithms. To determine the number of hidden-layer nodes in the model, the root mean squared error of the testing set was minimized. After minimizing the error, the topologies of the algorithms were compared by coefficient of determination and absolute average deviation. The comparison indicated that the Quick propagation algorithm had the minimum root mean squared error, 1.3995, absolute average deviation, 3.0478, and maximum coefficient of determination, 0.9752, for the testing data set. The validation test results of the artificial neural network based on QP indicated that the root mean squared error was 4.11, the absolute average deviation was 8.071 and the maximum coefficient of determination was 0.97. The artificial neural network based on the Quick propagation algorithm with topology 4-10-1 gave the best performance in this study.
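    The three reported metrics can be computed as below. This is a hedged sketch: the paper's exact definition of absolute average deviation may differ from the mean-relative-error form used here:

```python
import math

def rmse(y, yhat):
    """Root mean squared error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def aad(y, yhat):
    """Absolute average deviation, here as mean relative error (one common form)."""
    return sum(abs(a - b) / abs(a) for a, b in zip(y, yhat)) / len(y)

def r2(y, yhat):
    """Coefficient of determination."""
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot
```

    Comparing candidate training algorithms then reduces to picking the one with the lowest `rmse` and `aad` and the highest `r2` on the held-out testing set, as the abstract describes.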

  2. Using virtual reality to validate system models

    Energy Technology Data Exchange (ETDEWEB)

    Winter, V.L.; Caudell, T.P.

    1999-12-09

    To date, most validation techniques are highly biased towards calculations involving symbolic representations of problems. These calculations are either formal (in the case of consistency and completeness checks) or informal (in the case of code inspections). The authors believe that an essential type of evidence for the correctness of the formalization process must be provided by (i.e., must originate from) human-based calculation. They further believe that human calculation can be significantly amplified by shifting from symbolic representations to graphical representations. This paper describes their preliminary efforts in realizing such a representational shift.

  3. Alternative Models of Collegiate Business Education: Their Validity and Implications.

    Science.gov (United States)

    Van Auken, Stuart; And Others

    1996-01-01

    Two models of management education are examined: the academic model, which treats the field of business as a science; and the professional model, which is responsive to the perceived needs of the business community. A study investigated the models' validity within the context of existing programs by surveying 268 program deans about their beliefs…

  4. Keystone Business Models for Network Security Processors

    Directory of Open Access Journals (Sweden)

    Arthur Low

    2013-07-01

    Full Text Available Network security processors are critical components of high-performance systems built for cybersecurity. Development of a network security processor requires multi-domain experience in semiconductors and complex software security applications, and multiple iterations of both software and hardware implementations. Limited by the business models in use today, such an arduous task can be undertaken only by large incumbent companies and government organizations. Neither the “fabless semiconductor” model nor the silicon intellectual-property licensing (“IP-licensing”) model allows small technology companies to compete successfully. This article describes an alternative approach that produces an ongoing stream of novel network security processors for niche markets through continuous innovation by both large and small companies. This approach, referred to here as the “business ecosystem model for network security processors”, includes a flexible and reconfigurable technology platform, a “keystone” business model for the company that maintains the platform architecture, and an extended ecosystem of companies that both contribute to and share in the value created by innovation. New opportunities for business model innovation by participating companies are made possible by the ecosystem model. This ecosystem model builds on: (i) the lessons learned from the experience of the first author as a senior integrated circuit architect for providers of public-key cryptography solutions and as the owner of a semiconductor startup, and (ii) the latest scholarly research on technology entrepreneurship, business models, platforms, and business ecosystems. This article will be of interest to all technology entrepreneurs, but it will be of particular interest to owners of small companies that provide security solutions and to specialized security professionals seeking to launch their own companies.

  5. A Model of Mental State Transition Network

    Science.gov (United States)

    Xiang, Hua; Jiang, Peilin; Xiao, Shuang; Ren, Fuji; Kuroiwa, Shingo

    Emotion is one of the most essential and basic attributes of human intelligence. Current AI (Artificial Intelligence) research concentrates on the physical components of emotion; it is rarely carried out directly from the viewpoint of psychology (1). Study of a model of artificial psychology is the first step in the development of human-computer interaction. As affective computing remains unpredictable, creating a reasonable mental model becomes the primary task for building a hybrid system. A pragmatic mental model is also the foundation of key topics such as recognition and synthesis of emotions. In this paper a Mental State Transition Network Model (2) is proposed to detect human emotions. Through a series of psychological experiments, we present a new way to predict a person's forthcoming emotions from the various current emotional states under various stimuli. In addition, people of different genders and characters are taken into consideration in our investigation. From the psychological experiment data derived from 200 questionnaires, a Mental State Transition Network Model is derived that describes the transitions in distribution among the emotions and the relationships between internal mental situations and external stimuli. Furthermore, the coefficients of the mental state transition network model were obtained. Across seven comparative evaluation experiments, an average precision of 0.843 is achieved using a set of samples for the proposed model.
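    A toy version of such a transition network can predict the most likely next state and score prediction precision. The emotion labels and probabilities below are invented for illustration and are not the paper's fitted coefficients:

```python
# Hypothetical emotion transition table (rows need not list every target).
T = {
    "happy": {"happy": 0.6, "calm": 0.3, "sad": 0.1},
    "sad":   {"sad": 0.5, "calm": 0.3, "angry": 0.2},
    "angry": {"calm": 0.5, "angry": 0.4, "sad": 0.1},
    "calm":  {"calm": 0.7, "happy": 0.2, "sad": 0.1},
}

def predict(state):
    """Predict the next emotion as the argmax of the current row."""
    return max(T[state], key=T[state].get)

def precision(observed):
    """Fraction of observed transitions whose target matches the prediction."""
    hits = sum(predict(s) == t for s, t in zip(observed, observed[1:]))
    return hits / (len(observed) - 1)
```

    Fitting the row probabilities from questionnaire data and evaluating `precision` on held-out sequences mirrors the evaluation reported in the abstract.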

  6. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    Massive numbers of small unmanned aerial vehicles are envisioned to operate in the near future. While many research problems need to be addressed before dense operations can happen, trajectory modeling remains one of the keys to understanding and developing policies, regulations, and requirements for safe and efficient unmanned aerial vehicle operations. The fidelity requirement of a small unmanned vehicle trajectory model is high because these vehicles are sensitive to winds due to their small size and low operational altitude. Both vehicle control systems and dynamic models are needed for trajectory modeling, which makes the modeling a great challenge, especially considering that manufacturers are not willing to share their control systems. This work proposes a neural network approach for modeling a small unmanned vehicle's trajectory without knowing its control system, bypassing exhaustive efforts for aerodynamic parameter identification. As a proof of concept, instead of collecting data from flight tests, this work used trajectory data generated by a mathematical vehicle model for training and testing the neural network. The results showed great promise: the trained neural network can predict 4D trajectories accurately, with prediction errors of less than 2.0 meters in both temporal and spatial dimensions.

  7. Propagation models for computing biochemical reaction networks

    OpenAIRE

    Henzinger, Thomas A; Mateescu, Maria

    2011-01-01

    We introduce propagation models, a formalism designed to support general and efficient data structures for the transient analysis of biochemical reaction networks. We give two use cases for propagation abstract data types: the uniformization method and numerical integration. We also sketch an implementation of a propagation abstract data type, which uses abstraction to approximate states.

  8. Modelling crime linkage with Bayesian networks

    NARCIS (Netherlands)

    de Zoete, J.; Sjerps, M.; Lagnado, D.; Fenton, N.

    2015-01-01

    When two or more crimes show specific similarities, such as a very distinct modus operandi, the probability that they were committed by the same offender becomes of interest. This probability depends on the degree of similarity and distinctiveness. We show how Bayesian networks can be used to model

  9. Lagrangian modeling of switching electrical networks

    NARCIS (Netherlands)

    Scherpen, Jacquelien M.A.; Jeltsema, Dimitri; Klaassens, J. Ben

    2003-01-01

    In this paper, a general and systematic method is presented to model topologically complete electrical networks, with or without multiple or single switches, within the Euler–Lagrange framework. Apart from the physical insight that can be obtained in this way, the framework has proven to be useful

  10. Computational Modeling of Complex Protein Activity Networks

    NARCIS (Netherlands)

    Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude

    2017-01-01

    Because of the numerous entities interacting, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a

  11. Modeling Network Transition Constraints with Hypergraphs

    DEFF Research Database (Denmark)

    Harrod, Steven

    2011-01-01

    values. A directed hypergraph formulation is derived to address railway network sequencing constraints, and an experimental problem sample solved to estimate the magnitude of objective inflation when interaction effects are ignored. The model is used to demonstrate the value of advance scheduling...

  12. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm.

    Science.gov (United States)

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. The Recurrent Neural Network is one of the most popular yet simple approaches to model network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, it has underperformed for large-scale genetic networks. Here, a new methodology has been proposed in which a hybrid Cuckoo Search-Flower Pollination Algorithm is implemented with a Recurrent Neural Network. Cuckoo Search is used to search for the best combination of regulators. Moreover, the Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method sacrifices computational time in both cases due to the hybrid optimization process.
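    The Recurrent Neural Network formalism referred to here is typically the sigmoidal update x_i(t+1) = sigmoid(sum_j w_ij x_j(t) + b_i), where positive weights encode activation and negative weights repression. The weights below are illustrative, and the Cuckoo Search-Flower Pollination optimization itself is not reproduced:

```python
import math

def rnn_step(x, W, b):
    """One step of the GRN-RNN dynamics:
    x_i(t+1) = 1 / (1 + exp(-(sum_j W[i][j] * x_j(t) + b[i])))."""
    return [1.0 / (1.0 + math.exp(-(sum(w * xj for w, xj in zip(row, x)) + bi)))
            for row, bi in zip(W, b)]

W = [[2.0, -1.0],       # positive weight: activation; negative: repression
     [0.0, 1.5]]
b = [-0.5, -0.5]
x = [0.1, 0.9]          # initial expression levels
traj = [x]
for _ in range(10):     # iterate the expression dynamics forward in time
    x = rnn_step(x, W, b)
    traj.append(x)
```

    Inference then consists of choosing `W` and `b` so that `traj` matches the measured time-series expression data; that search is what the hybrid metaheuristic performs.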

  13. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm

    Directory of Open Access Journals (Sweden)

    Sudip Mandal

    2016-01-01

    Full Text Available The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. The Recurrent Neural Network is one of the most popular yet simple approaches to model network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, it has underperformed for large-scale genetic networks. Here, a new methodology has been proposed in which a hybrid Cuckoo Search-Flower Pollination Algorithm is implemented with a Recurrent Neural Network. Cuckoo Search is used to search for the best combination of regulators. Moreover, the Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method sacrifices computational time in both cases due to the hybrid optimization process.

  14. A neural network model for texture discrimination.

    Science.gov (United States)

    Xing, J; Gerstein, G L

    1993-01-01

    A model of texture discrimination in visual cortex was built using a feedforward network with lateral interactions among relatively realistic spiking neural elements. The elements have various membrane currents, equilibrium potentials and time constants, with action potentials and synapses. The model is derived from the modified programs of MacGregor (1987). Gabor-like filters are applied to overlapping regions in the original image; the neural network with lateral excitatory and inhibitory interactions then compares and adjusts the Gabor amplitudes in order to produce the actual texture discrimination. Finally, a combination layer selects and groups various representations in the output of the network to form the final transformed image material. We show that both texture segmentation and detection of texture boundaries can be represented in the firing activity of such a network for a wide variety of synthetic and natural images. Performance details depend most strongly on the global balance of strengths of the excitatory and inhibitory lateral interconnections. The spatial distribution of lateral connective strengths has relatively little effect. Detailed temporal firing activities of single elements in the laterally connected network were examined under various stimulus conditions. Results show (as in area 17 of cortex) that a single element's response to image features local to its receptive field can be altered by changes in the global context.
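    The Gabor-like front end mentioned above can be sketched as sampling a 2-D Gabor kernel, an oriented sinusoid under a Gaussian envelope, to be convolved with overlapping image regions. The parameter choices here are illustrative, not those of the model:

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Sample a size x size Gabor kernel: Gaussian envelope times an
    oriented cosine carrier (even/symmetric phase)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)   # rotated coordinate
            env = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            row.append(env * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel
```

    A bank of such kernels at several orientations and wavelengths yields the local Gabor amplitudes that the lateral excitatory/inhibitory stage then compares and adjusts.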

  15. Communicating systems with UML 2 modeling and analysis of network protocols

    CERN Document Server

    Barrera, David Garduno

    2013-01-01

    This book gives a practical approach to modeling and analyzing communication protocols using UML 2. Network protocols are always presented with a point of view focusing on partial mechanisms and starting models. This book aims at giving the basis needed for anybody to model and validate their own protocols. It follows a practical approach and gives many examples for the description and analysis of well known basic network mechanisms for protocols.The book firstly shows how to describe and validate the main protocol issues (such as synchronization problems, client-server interactions, layer

  16. Propagating semantic information in biochemical network models

    Directory of Open Access Journals (Sweden)

    Schulz Marvin

    2012-01-01

    Full Text Available Abstract Background To enable automatic searches, alignments, and model combination, the elements of systems biology models need to be compared and matched across models. Elements can be identified by machine-readable biological annotations, but assigning such annotations and matching non-annotated elements is tedious work and calls for automation. Results A new method called "semantic propagation" allows the comparison of model elements based not only on their own annotations, but also on annotations of surrounding elements in the network. One may either propagate feature vectors, describing the annotations of individual elements, or quantitative similarities between elements from different models. Based on semantic propagation, we align partially annotated models and find annotations for non-annotated model elements. Conclusions Semantic propagation and model alignment are included in the open-source library semanticSBML, available on sourceforge. Online services for model alignment and for annotation prediction can be used at http://www.semanticsbml.org.

  17. Model Validation and Verification of Data Mining from the ...

    African Journals Online (AJOL)

    Michael Horsfall

    In this paper, we seek to present a hybrid method for Model Validation and Verification of Data Mining .... obtained, program design and development procedures .... instrument. Cronbach's alpha was used to calculate the questionnaire's reliability and validity. For analyzing the present research's information, both descriptive ...

  18. Validation study of the mine fire simulation model

    Energy Technology Data Exchange (ETDEWEB)

    Wala, A.M.; Dziurzynski, W.; Tracz, J.; Wooton, D.

    1995-12-31

    The purpose of this paper is to present validation studies of the mine-fire simulator using data gathered from an actual mine fire which occurred in November 1991 at the Pattiki Mine. This study evaluates the suitability of the computer software package for modeling underground fires. The paper also discusses the importance of the on-line monitoring system for the validation process.

  19. Adolescent Personality: A Five-Factor Model Construct Validation

    Science.gov (United States)

    Baker, Spencer T.; Victor, James B.; Chambers, Anthony L.; Halverson, Jr., Charles F.

    2004-01-01

    The purpose of this study was to investigate convergent and discriminant validity of the five-factor model of adolescent personality in a school setting using three different raters (methods): self-ratings, peer ratings, and teacher ratings. The authors investigated validity through a multitrait-multimethod matrix and a confirmatory factor…

  20. Experimental Validation of Flow Force Models for Fast Switching Valves

    DEFF Research Database (Denmark)

    Bender, Niels Christian; Pedersen, Henrik Clemmensen; Nørgård, Christian

    2017-01-01

    to compare and validate different models, where an effort is directed towards capturing the fluid squeeze effect just before material on material contact. The test data is compared with simulation data relying solely on analytic formulations. The general dynamics of the plunger is validated...

  1. A Practical Approach to Validating a PD Model

    NARCIS (Netherlands)

    Medema, L.; Koning, de R.; Lensink, B.W.

    2009-01-01

    The capital adequacy framework Basel II aims to promote the adoption of stronger risk management practices by the banking industry. The implementation makes validation of credit risk models more important. Lenders therefore need a validation methodology to convince their supervisors that their

  2. A practical approach to validating a PD model

    NARCIS (Netherlands)

    Medema, Lydian; Koning, Ruud H.; Lensink, Robert; Medema, M.

    The capital adequacy framework Basel II aims to promote the adoption of stronger risk management practices by the banking industry. The implementation makes validation of credit risk models more important. Lenders therefore need a validation methodology to convince their supervisors that their

  3. Distributed Bayesian Networks for User Modeling

    DEFF Research Database (Denmark)

    Tedesco, Roberto; Dolog, Peter; Nejdl, Wolfgang

    2006-01-01

    The World Wide Web is a popular platform for providing eLearning applications to a wide spectrum of users. However – as users differ in their preferences, background, requirements, and goals – applications should provide personalization mechanisms. In the Web context, user models used...... of Web-based eLearning platforms. The scenario we are tackling assumes learners who use several systems over time, which are able to create partial Bayesian Networks for user models based on the local system context. In particular, we focus on how to merge these partial user models. Our merge mechanism...... efficiently combines distributed learner models without the need to exchange internal structure of local Bayesian networks, nor local evidence between the involved platforms....

  4. Network traffic model using GIPP and GIBP

    Science.gov (United States)

    Lee, Yong Duk; Van de Liefvoort, Appie; Wallace, Victor L.

    1998-10-01

    In telecommunication networks, the correlated nature of teletraffic patterns can have a significant impact on queueing measures such as queue length, blocking and delay. There is, however, not yet a good general analytical description that can easily incorporate the correlation effects of the traffic while maintaining ease of modeling. The authors have shown elsewhere that the covariance structures of the generalized Interrupted Poisson Process (GIPP) and the generalized Interrupted Bernoulli Process (GIBP) have an invariance property which makes them reasonably general, yet algebraically manageable, models for representing correlated network traffic. The GIPP and GIBP have surprisingly rich sets of parameters, yet these invariance properties enable us to easily incorporate the covariance function as well as the interarrival time distribution into the model to better match observations. In this paper, we show an application of the GIPP and GIBP for matching an analytical model to observed or experimental data.
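    A two-state special case of the Interrupted Bernoulli Process is easy to simulate; the generalized versions discussed in the paper have richer parameter sets. The parameter values below are illustrative: arrivals occur with probability alpha per slot only while the modulating source is in the ON state, which produces the bursty, correlated arrival pattern:

```python
import random

def ibp(n_slots, p_stay_on, p_stay_off, alpha, seed=0):
    """Simulate a two-state Interrupted Bernoulli Process: an ON/OFF
    Markov chain gates per-slot Bernoulli(alpha) arrivals."""
    rng = random.Random(seed)
    on, arrivals = True, []
    for _ in range(n_slots):
        arrivals.append(1 if on and rng.random() < alpha else 0)
        stay = p_stay_on if on else p_stay_off
        if rng.random() >= stay:      # leave the current modulating state
            on = not on
    return arrivals
```

    High `p_stay_on`/`p_stay_off` values give long ON and OFF runs, i.e. strong positive autocorrelation in the arrival stream, which is exactly the effect the covariance-matching in the paper is designed to capture.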

  5. Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks

    Science.gov (United States)

    Zhelavskaya, Irina S.; Shprits, Yuri Y.; Spasojević, Maria

    2017-11-01

    We present the PINE (Plasma density in the Inner magnetosphere Neural network-based Empirical) model - a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of 1 October 2012 to 1 July 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2≤L≤6 and all local times. We validate and test the model by measuring its performance on independent data sets withheld from the training set and by comparing the model-predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). The optimal model is based on the 96 h time history of Kp, AE, SYM-H, and F10.7 indices. The model successfully reproduces erosion of the plasmasphere on the nightside and plume formation and evolution. We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in situ observations by using machine learning techniques.
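    The model's input construction, a trailing time history of each driver, can be sketched as follows. The function and variable names are ours, not the PINE code; the idea is simply to stack the last `lags` samples of every index into one feature vector per prediction time, mimicking the 96 h history of Kp, AE, SYM-H and F10.7:

```python
def lagged_features(series, lags):
    """series: {index name: [hourly values]} -> list of feature vectors.

    Each vector concatenates the trailing `lags` samples of every index,
    so a network can map the recent driver history to plasma density."""
    names = sorted(series)                 # fixed, reproducible column order
    length = len(series[names[0]])
    feats = []
    for t in range(lags, length):
        row = []
        for name in names:
            row.extend(series[name][t - lags:t])
        feats.append(row)
    return feats
```

    For the actual model, `lags` would be 96 (hours) and the target vector would hold the NURD-derived densities at the corresponding times.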

  6. Development of a pore network simulation model to study nonaqueous phase liquid dissolution

    Science.gov (United States)

    Dillard, Leslie A.; Blunt, Martin J.

    2000-01-01

    A pore network simulation model was developed to investigate the fundamental physics of nonequilibrium nonaqueous phase liquid (NAPL) dissolution. The network model is a lattice of cubic chambers and rectangular tubes that represent pore bodies and pore throats, respectively. Experimental data obtained by Powers [1992] were used to develop and validate the model. To ensure the network model was representative of a real porous medium, the pore size distribution of the network was calibrated by matching simulated and experimental drainage and imbibition capillary pressure-saturation curves. The predicted network residual styrene blob-size distribution was nearly identical to the observed distribution. The network model reproduced the observed hydraulic conductivity and produced relative permeability curves that were representative of a poorly consolidated sand. Aqueous-phase transport was represented by applying the equation for solute flux to the network tubes and solving for solute concentrations in the network chambers. Complete mixing was found to be an appropriate approximation for calculation of chamber concentrations. Mass transfer from NAPL blobs was represented using a corner diffusion model. Predicted results of solute concentration versus Peclet number and of modified Sherwood number versus Peclet number for the network model compare favorably with experimental data for the case in which NAPL blob dissolution was negligible. Predicted results of normalized effluent concentration versus pore volume for the network were similar to the experimental data for the case in which NAPL blob dissolution occurred with time.

  7. 3D hybrid modelling of vascular network formation.

    Science.gov (United States)

    Perfahl, Holger; Hughes, Barry D; Alarcón, Tomás; Maini, Philip K; Lloyd, Mark C; Reuss, Matthias; Byrne, Helen M

    2017-02-07

    sprouting probability. Glyphs that simultaneously depict several network properties are introduced to show how these and other network quantities change over time and also as model parameters vary. We also show how equivalent glyphs constructed from in vivo data could be used to discriminate between normal and tumour vasculature and, in the longer term, for model validation. We conclude that our biomechanical hybrid model can generate vascular networks that are qualitatively similar to those generated from in vitro and in vivo experiments. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik

    2016-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world's population and changing climate conditions. How a sewer network is structured, monitored and cont...... benchmark model. Due to the inherent constraints the applied approach is based on Model Predictive Control.

  9. Modeling Multistandard Wireless Networks in OPNET

    DEFF Research Database (Denmark)

    Zakrzewska, Anna; Berger, Michael Stübert; Ruepp, Sarah Renée

    2011-01-01

    Future wireless communication is emerging towards one heterogeneous platform. In this new environment wireless access will be provided by multiple radio technologies that are cooperating and complementing one another. The paper investigates the possibilities of developing such a multistandard...... system using OPNET Modeler. A network model consisting of LTE interworking with WLAN and WiMAX is considered from the radio resource management perspective. In particular, implementing a joint packet scheduler across multiple systems is discussed in more detail....

  10. Modelling dendritic ecological networks in space: an integrated network perspective

    Science.gov (United States)

    Peterson, Erin E.; Ver Hoef, Jay M.; Isaak, Dan J.; Falke, Jeffrey A.; Fortin, Marie-Josée; Jordon, Chris E.; McNyset, Kristina; Monestiez, Pascal; Ruesch, Aaron S.; Sengupta, Aritra; Som, Nicholas; Steel, E. Ashley; Theobald, David M.; Torgersen, Christian E.; Wenger, Seth J.

    2013-01-01

    Dendritic ecological networks (DENs) are a unique form of ecological networks that exhibit a dendritic network topology (e.g. stream and cave networks or plant architecture). DENs have a dual spatial representation: as points within the network and as points in geographical space. Consequently, some analytical methods used to quantify relationships in other types of ecological networks, or in 2-D space, may be inadequate for studying the influence of structure and connectivity on ecological processes within DENs. We propose a conceptual taxonomy of network analysis methods that account for DEN characteristics to varying degrees and provide a synthesis of the different approaches within

  11. Ion channel model development and validation

    Science.gov (United States)

    Nelson, Peter Hugo

    2010-03-01

    The structure of the KcsA ion channel selectivity filter is used to develop three simple models of ion channel permeation. The quantitative predictions of the knock-on model are tested by comparison with experimental data from single-channel recordings of the KcsA channel. By comparison with experiment, students discover that the knock-on model can't explain saturation of ion channel current as the concentrations of the bathing solutions are increased. By inverting the energy diagram, students derive the association-dissociation model of ion channel permeation. This model predicts non-linear Michaelis-Menten saturating behavior that requires students to perform non-linear least-squares fits to the experimental data. This is done using Excel's solver feature. Students discover that this simple model does an excellent job of explaining the qualitative features of ion channel permeation but cannot account for changes in voltage sensitivity. The model is then extended to include an electrical dissociation distance. This rapid translocation model is then compared with experimental data from a wide variety of ion channels and students discover that this model also has its limitations. Support from NSF DUE 0836833 is gratefully acknowledged.
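The association-dissociation model's Michaelis-Menten saturation can be fitted without Excel's Solver; the sketch below performs an equivalent brute-force least-squares fit in plain Python. The concentrations, currents, and parameter search ranges are invented for illustration, not data from the KcsA recordings described above.

```python
def mm_current(c, i_max, km):
    """Association-dissociation saturation: I = I_max * c / (c + K_m)."""
    return i_max * c / (c + km)

def fit_mm(conc, curr, i_max_range, km_range, steps=200):
    """Brute-force least-squares over a parameter grid -- a simple
    stand-in for the Solver-based fit mentioned in the abstract."""
    best = (float("inf"), None, None)
    for a in range(steps + 1):
        i_max = i_max_range[0] + (i_max_range[1] - i_max_range[0]) * a / steps
        for b in range(steps + 1):
            km = km_range[0] + (km_range[1] - km_range[0]) * b / steps
            sse = sum((y - mm_current(c, i_max, km)) ** 2
                      for c, y in zip(conc, curr))
            if sse < best[0]:
                best = (sse, i_max, km)
    return best

# Hypothetical single-channel data: K+ concentration (mM) vs current (pA).
conc = [20, 50, 100, 200, 400, 800]
curr = [3.8, 8.0, 12.1, 16.0, 18.9, 20.7]
sse, i_max, km = fit_mm(conc, curr, (10.0, 30.0), (50.0, 300.0))
```

The fitted curve saturates toward `i_max` as concentration grows, reproducing the qualitative behavior students observe when the knock-on model fails.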

  12. Spatial Models and Networks of Living Systems

    DEFF Research Database (Denmark)

    Juul, Jeppe Søgaard

    When studying the dynamics of living systems, insight can often be gained by developing a mathematical model that can predict future behaviour of the system or help classify system characteristics. However, in living cells, organisms, and especially groups of interacting individuals, a large number....... Such systems are known to be stabilized by spatial structure. Finally, I analyse data from a large mobile phone network and show that people who are topologically close in the network have similar communication patterns. This main part of the thesis is based on six different articles, which I have co...

  13. A validated physical model of greenhouse climate.

    NARCIS (Netherlands)

    Bot, G.P.A.

    1989-01-01

    In the greenhouse model the momentaneous environmental crop growth factors are calculated as output, together with the physical behaviour of the crop. The boundary conditions for this model are the outside weather conditions; other inputs are the physical characteristics of the crop, of the

  14. Validity of microgravity simulation models on earth

    DEFF Research Database (Denmark)

    Regnard, J; Heer, M; Drummer, C

    2001-01-01

    Many studies have used water immersion and head-down bed rest as experimental models to simulate responses to microgravity. However, some data collected during space missions are at variance or in contrast with observations collected from experimental models. These discrepancies could reflect inc...

  15. Neural Network Model of memory retrieval

    Directory of Open Access Journals (Sweden)

    Stefano eRecanatesi

    2015-12-01

    Full Text Available Human memory can store large amounts of information. Nevertheless, recalling is often a challenging task. In a classical free recall paradigm, where participants are asked to repeat a briefly presented list of words, people make mistakes for lists as short as 5 words. We present a model for memory retrieval based on a Hopfield neural network where transitions between items are determined by similarities in their long-term memory representations. Mean-field analysis of the model reveals stable states of the network corresponding to (1) single memory representations and (2) intersections between memory representations. We show that oscillating feedback inhibition in the presence of noise induces transitions between these states, triggering the retrieval of different memories. The network dynamics qualitatively predicts the distribution of time intervals required to recall new memory items observed in experiments. It shows that items having a larger number of neurons in their representation are statistically easier to recall, and reveals possible bottlenecks in our ability to retrieve memories. Overall, we propose a neural network model of information retrieval broadly compatible with experimental observations and consistent with our recent graphical model (Romani et al., 2013).
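The core retrieval mechanism described, transitions among patterns stored in a Hopfield network, can be illustrated with a minimal sketch: Hebbian storage of two toy patterns and asynchronous recall from a corrupted cue. The patterns and network size are invented assumptions, and the published model's oscillating inhibition and noise terms are omitted.

```python
import random

def train_hopfield(patterns):
    """Hebbian weight matrix for +/-1 patterns, zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, steps=400, seed=1):
    """Asynchronous sign updates starting from a (possibly corrupted) cue."""
    rng = random.Random(seed)
    s = list(cue)
    for _ in range(steps):
        i = rng.randrange(len(s))
        field = sum(w[i][j] * s[j] for j in range(len(s)))
        s[i] = 1 if field >= 0 else -1
    return s

p1 = [1, 1, 1, 1, -1, -1, -1, -1]      # toy stored "memory" 1
p2 = [1, -1, 1, -1, 1, -1, 1, -1]      # toy stored "memory" 2, orthogonal to p1
w = train_hopfield([p1, p2])
noisy = [1, 1, 1, -1, -1, -1, -1, -1]  # p1 with one bit flipped
recovered = recall(w, noisy)           # settles into the nearest attractor
```

Because the two stored patterns are orthogonal, the corrupted cue falls back into the `p1` attractor, the "single memory representation" fixed point of the abstract's mean-field picture.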

  16. Validation of the Social Networking Activity Intensity Scale among Junior Middle School Students in China.

    Directory of Open Access Journals (Sweden)

    Jibin Li

    Full Text Available Online social networking use has been integrated into adolescents' daily life and the intensity of online social networking use may have important consequences on adolescents' well-being. However, there are few validated instruments to measure social networking use intensity. The present study aims to develop the Social Networking Activity Intensity Scale (SNAIS) and validate it among junior middle school students in China. A total of 910 students who were social networking users were recruited from two junior middle schools in Guangzhou, and 114 students were retested after two weeks to examine the test-retest reliability. The psychometrics of the SNAIS were estimated using appropriate statistical methods. Two factors, Social Function Use Intensity (SFUI) and Entertainment Function Use Intensity (EFUI), were clearly identified by both exploratory and confirmatory factor analyses. No ceiling or floor effects were observed for the SNAIS and its two subscales. The SNAIS and its two subscales exhibited acceptable reliability (Cronbach's alpha = 0.89, 0.90 and 0.60, and test-retest intra-class correlation coefficient = 0.85, 0.87 and 0.67 for the overall scale, SFUI and EFUI subscales, respectively, p<0.001). As expected, the SNAIS and its subscale scores were correlated significantly with emotional connection to social networking, social networking addiction, Internet addiction, and characteristics related to social networking use. The SNAIS is an easily self-administered scale with good psychometric properties. It would facilitate more research in this field worldwide and specifically in the Chinese population.
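The internal-consistency statistic quoted above (Cronbach's alpha) can be computed for any item-level data in a few lines; below is a minimal sketch with invented response data, not the SNAIS data set.

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-level scores.

    `items` is a list of per-item score lists, aligned across respondents
    (items[i][r] = score of respondent r on item i).
    """
    k = len(items)
    def var(xs):                      # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_var_sum = sum(var(it) for it in items)
    totals = [sum(it[r] for it in items) for r in range(len(items[0]))]
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Invented 3-item, 5-respondent data for illustration only.
items = [[3, 4, 5, 2, 4],
         [2, 4, 5, 3, 4],
         [3, 5, 4, 2, 5]]
alpha = cronbach_alpha(items)
```

As a sanity check, perfectly duplicated items yield alpha = 1, the statistic's upper bound.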

  17. Biological Dosimetry by the Triage Dicentric Chromosome Assay – Further validation of International Networking

    Science.gov (United States)

    Wilkins, Ruth C.; Romm, Horst; Oestreicher, Ursula; Marro, Leonora; Yoshida, Mitsuaki A.; Suto, Y.; Prasanna, Pataje G.S.

    2011-01-01

    Biological dosimetry is an essential tool for estimating radiation doses received by personnel when physical dosimetry is not available or inadequate. The current preferred biodosimetry method is based on the measurement of radiation-specific dicentric chromosomes in exposed individuals' peripheral blood lymphocytes. However, this method is labour-, time- and expertise-demanding. Consequently, for mass casualty applications, strategies have been developed to increase its throughput. One such strategy is to develop validated cytogenetic biodosimetry laboratory networks, both national and international. In a previous study, the dicentric chromosome assay (DCA) was validated in our cytogenetic biodosimetry network involving five geographically dispersed laboratories. A complementary strategy to further enhance the throughput of the DCA among inter-laboratory networks is to use a triage DCA, where dose assessments are made by truncating the labour-demanding and time-consuming metaphase-spread analysis to 20 to 50 metaphase spreads instead of the routine 500 to 1000. Our laboratory network also validated this triage DCA; however, these dose estimates were made using calibration curves generated in each laboratory from blood samples irradiated in a single laboratory. In an emergency situation, dose estimates would be made using pre-existing calibration curves, which may vary according to radiation type and dose rate and therefore influence the assessed dose. Here, we analyze the effect of using a pre-existing calibration curve on assessed dose among our network laboratories. The dose estimates were made by analyzing 1000 metaphase spreads as well as by triage quality scoring, and compared to the actual physical doses applied to the samples for validation. The dose estimates in the laboratory partners were in good agreement with the applied physical doses and determined to be adequate for guidance in the treatment of acute radiation syndrome. PMID:21949482
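Dicentric dose estimation typically inverts a linear-quadratic calibration curve, Y = c + alpha*D + beta*D^2, where Y is the observed dicentric yield per cell. A minimal sketch follows; the coefficient values are illustrative orders of magnitude only, not a validated laboratory curve of the kind the network compares.

```python
import math

def dose_from_yield(y, c, alpha, beta):
    """Invert the linear-quadratic calibration curve
    Y = c + alpha*D + beta*D**2 for the absorbed dose D (Gy),
    given an observed dicentric yield y (dicentrics per cell).
    Takes the positive root of the quadratic."""
    disc = alpha ** 2 + 4 * beta * (y - c)
    return (-alpha + math.sqrt(disc)) / (2 * beta)

# Illustrative coefficients only -- NOT a validated laboratory curve:
# background c = 0.001, alpha = 0.02 /Gy, beta = 0.06 /Gy^2.
d = dose_from_yield(y=0.281, c=0.001, alpha=0.02, beta=0.06)
```

Because each laboratory's (c, alpha, beta) differ with radiation type and dose rate, the same observed yield maps to different doses under different curves, which is precisely the inter-laboratory effect the study quantifies.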

  18. A model of traffic signs recognition with convolutional neural network

    Science.gov (United States)

    Hu, Haihe; Li, Yujian; Zhang, Ting; Huo, Yi; Kuang, Wenqing

    2016-10-01

    In real traffic scenes, the quality of captured images is generally low due to factors such as lighting conditions and occlusion. All of these factors are challenging for automated recognition algorithms for traffic signs. Deep learning has recently provided a new way to solve this kind of problem. A deep network can automatically learn features from a large number of data samples and obtain excellent recognition performance. We therefore approach the task of traffic sign recognition as a general vision problem, with few assumptions specific to road signs. We propose a Convolutional Neural Network (CNN) model and apply it to the task of traffic sign recognition. The proposed model adopts a deep CNN as the supervised learning model, directly takes the collected traffic sign images as input, alternates convolutional and subsampling layers, and automatically extracts the features for recognizing the traffic sign images. The proposed model includes an input layer, three convolutional layers, three subsampling layers, a fully-connected layer, and an output layer. To validate the proposed model, experiments are implemented using the public dataset of the China competition of fuzzy image processing. Experimental results show that the proposed model achieves a recognition accuracy of 99.01% on the training dataset and 92% on the preliminary contest, placing among the four best.
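The convolution/subsampling alternation the model describes can be sketched in a few lines. Below is a pure-Python "valid" convolution followed by 2x2 max-pooling on a toy 8x8 image; the image and the fixed edge kernel are illustrative assumptions (a real CNN learns its kernels by backpropagation).

```python
def conv2d_valid(img, kernel):
    """'Valid' 2-D convolution (cross-correlation, as used in CNN layers)."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def maxpool2(fmap):
    """2x2 max-pooling subsampling layer (stride 2)."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

img = [[float((i + j) % 5) for j in range(8)] for i in range(8)]  # toy 8x8 "image"
edge = [[1.0, 0.0, -1.0] for _ in range(3)]  # fixed vertical-edge kernel
fmap = maxpool2(conv2d_valid(img, edge))     # 8x8 -> 6x6 -> 3x3
```

Stacking three such conv/pool stages and a fully-connected classifier on top gives the layer layout the abstract enumerates.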

  19. Model-Driven Approach for Body Area Network Application Development

    Science.gov (United States)

    Venčkauskas, Algimantas; Štuikys, Vytautas; Jusas, Nerijus; Burbaitė, Renata

    2016-01-01

    This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This enables an adequate measure of QoS to be obtained efficiently through the interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application. PMID:27187394

  20. Model-Driven Approach for Body Area Network Application Development.

    Science.gov (United States)

    Venčkauskas, Algimantas; Štuikys, Vytautas; Jusas, Nerijus; Burbaitė, Renata

    2016-05-12

    This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This enables an adequate measure of QoS to be obtained efficiently through the interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application.

  1. Fuzzy stochastic neural network model for structural system identification

    Science.gov (United States)

    Jiang, Xiaomo; Mahadevan, Sankaran; Yuan, Yong

    2017-01-01

    This paper presents a dynamic fuzzy stochastic neural network model for nonparametric system identification using ambient vibration data. The model is developed to handle two types of imprecision in the sensed data: fuzzy information and measurement uncertainties. The dimension of the input vector is determined by using the false nearest neighbor approach. A Bayesian information criterion is applied to obtain the optimum number of stochastic neurons in the model. A fuzzy C-means clustering algorithm is employed as a data mining tool to divide the sensed data into clusters with common features. The fuzzy stochastic model is created by combining the fuzzy clusters of input vectors with the radial basis activation functions in the stochastic neural network. A natural gradient method is developed based on the Kullback-Leibler distance criterion for quick convergence of the model training. The model is validated using a power density pseudospectrum approach and a Bayesian hypothesis testing-based metric. The proposed methodology is investigated with numerically simulated data from a Markov Chain model and a two-story planar frame, and experimentally sensed data from ambient vibration data of a benchmark structure.

  2. Model-Driven Approach for Body Area Network Application Development

    Directory of Open Access Journals (Sweden)

    Algimantas Venčkauskas

    2016-05-01

    Full Text Available This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This enables an adequate measure of QoS to be obtained efficiently through the interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application.

  3. Model-driven description and validation of composite learning content

    OpenAIRE

    Melia, Mark; Pahl, Claus

    2010-01-01

    Authoring of learning content for courseware systems is a complex activity requiring the combination of a range of design and validation techniques. We introduce the CAVIAr courseware models allowing for learning content description and validation. Model-based representation and analysis of different concerns such as the subject domain, learning context, resources and instructional design used are key contributors to this integrated solution. Personalised learning is particularly difficult to...

  4. HEDR model validation plan. Hanford Environmental Dose Reconstruction Project

    Energy Technology Data Exchange (ETDEWEB)

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model.

  5. VERIFICATION AND VALIDATION OF THE SPARC MODEL

    Science.gov (United States)

    Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values--that is, the physical and chemical constants that govern reactivity. Although empirical structure-activity relationships that allow estimation of some ...

  6. Development and validation of model for sand

    Directory of Open Access Journals (Sweden)

    Church P.

    2015-01-01

    Full Text Available There is a growing requirement within QinetiQ to develop models for assessments when there is very little experimental data. A theoretical approach to developing equations of state for geological materials has been developed using Quantitative Structure Property Modelling based on the Porter-Gould model approach. This has been applied to well-controlled sand with different moisture contents and particle shapes. The Porter-Gould model describes an elastic response and gives good agreement with experiment at high impact pressures, indicating that the response under these conditions is dominated by the molecular response. However, at lower pressures the compaction behaviour is dominated by a micro-mechanical response, which drives the need for additional theoretical tools and experiments to separate the volumetric and shear compaction behaviour. The constitutive response is fitted to existing triaxial cell data and Quasi-Static (QS) compaction data. This data is then used to construct a model in the hydrocode. The model shows great promise in predicting plate impact, Hopkinson bar, fragment penetration and residual velocity of fragments through a finite thickness of sand.

  7. Contact Modelling in Resistance Welding, Part II: Experimental Validation

    DEFF Research Database (Denmark)

    Song, Quanfeng; Zhang, Wenqi; Bay, Niels

    2006-01-01

    Contact algorithms in resistance welding presented in the previous paper are experimentally validated in the present paper. In order to verify the mechanical contact algorithm, two types of experiments, i.e. sandwich upsetting of circular, cylindrical specimens and compression tests of discs...... with a solid ring projection towards a flat ring, are carried out at room temperature. The complete algorithm, involving not only the mechanical model but also the thermal and electrical models, is validated by projection welding experiments. The experimental results are in satisfactory agreement...... with the simulation prediction, showing the validity of the algorithm....

  8. Control of uncertain systems by feedback linearization with neural networks augmentation. Part II. Controller validation by numerical simulation

    Directory of Open Access Journals (Sweden)

    Adrian TOADER

    2010-09-01

    Full Text Available The paper was conceived in two parts. Part I, previously published in this journal, highlighted the main steps of adaptive output feedback control for non-affine uncertain systems having a known relative degree. The main paradigm of this approach was feedback linearization (dynamic inversion) with neural network augmentation. Meanwhile, based on new contributions of the authors, a new paradigm, that of the robust servomechanism problem solution, has been added to the controller architecture. The current Part II of the paper presents the validation of the controller hereby obtained by using the longitudinal channel of a hovering VTOL-type aircraft as a mathematical model.

  9. Validation of Model Forecasts of the Ambient Solar Wind

    Science.gov (United States)

    Macneice, P. J.; Hesse, M.; Kuznetsova, M. M.; Rastaetter, L.; Taktakishvili, A.

    2009-01-01

    Independent and automated validation is a vital step in the progression of models from the research community into operational forecasting use. In this paper we describe a program in development at the CCMC to provide just such a comprehensive validation for models of the ambient solar wind in the inner heliosphere. We have built upon previous efforts published in the community, sharpened their definitions, and completed a baseline study. We also provide first results from this program of the comparative performance of the MHD models available at the CCMC against that of the Wang-Sheeley-Arge (WSA) model. An important goal of this effort is to provide a consistent validation to all available models. Clearly exposing the relative strengths and weaknesses of the different models will enable forecasters to craft more reliable ensemble forecasting strategies. Models of the ambient solar wind are developing rapidly as a result of improvements in data supply, numerical techniques, and computing resources. It is anticipated that in the next five to ten years, the MHD based models will supplant semi-empirical potential based models such as the WSA model, as the best available forecast models. We anticipate that this validation effort will track this evolution and so assist policy makers in gauging the value of past and future investment in modeling support.
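A common way to compare one forecast model against a reference model (such as WSA) is a skill score relative to observations. The sketch below uses an MSE-based skill score on invented solar-wind speed series; the numbers are illustrative only and not CCMC results.

```python
import math

def rmse(pred, obs):
    """Root-mean-square error of a forecast against observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def skill_score(model_pred, ref_pred, obs):
    """MSE-based skill score: 1 = perfect, 0 = no better than the
    reference model, negative = worse than the reference."""
    return 1.0 - (rmse(model_pred, obs) / rmse(ref_pred, obs)) ** 2

# Invented hourly solar-wind speed series (km/s) -- illustrative only.
obs = [420, 450, 500, 610, 580, 520, 470, 440]
mhd = [430, 460, 520, 580, 560, 540, 480, 450]   # hypothetical MHD forecast
wsa = [400, 430, 470, 550, 600, 560, 500, 460]   # hypothetical WSA forecast
s = skill_score(mhd, wsa, obs)                   # > 0: beats the reference here
```

Reporting a single relative number like this, rather than raw errors, is what makes rankings across models and intervals directly comparable in an automated validation program.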

  10. Building the Bridge between Operations and Outcomes : Modelling and Evaluation of Health Service Provider Networks

    NARCIS (Netherlands)

    M. Mahdavi (Mahdi)

    2015-01-01

    The PhD research has two objectives: - To develop generally applicable operational models which allow developing the evidence base for health service operations in provider networks. - To contribute to the evidence base by validating the model through

  11. Performance modeling, loss networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi

    2009-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of understanding the phenomenon of statistical multiplexing. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in performance measures. Also presented are recent ideas of large buffer, and many sources asymptotics that play an important role in understanding statistical multiplexing. I

  12. Network evolution model for supply chain with manufactures as the core

    Science.gov (United States)

    Jiang, Dali; Fang, Ling; Yang, Jian; Li, Wu; Zhao, Jing

    2018-01-01

    Building an evolution model of supply chain networks could be helpful for understanding their development laws. However, the specific characteristics and attributes of real supply chains are often neglected in existing evolution models. This work proposes a new evolution model of a supply chain with manufactures as the core, based on external market demand and internal competition-cooperation. The evolution model assumes that the external market environment is relatively stable, and considers several factors, including the specific topology of the supply chain, external market demand, ecological growth and flow conservation. The simulation results suggest that the networks evolved by our model have structures similar to real supply chains. Meanwhile, the influences of external market demand and internal competition-cooperation on network evolution are analyzed. Additionally, 38 benchmark data sets are applied to validate the rationality of our evolution model, of which nine manufacturing supply chains match the features of the networks constructed by our model. PMID:29370201
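A degree-based (preferential-attachment) growth rule is one simple way to sketch how hubs form around core nodes in an evolving network; the rule, node counts, and seed below are an illustrative stand-in, not the paper's actual demand and competition-cooperation dynamics.

```python
import random

def grow_supply_network(n_manufacturers=5, n_total=200, seed=42):
    """Grow a network around manufacturer cores: each new firm links to
    an existing one with probability proportional to its degree, so the
    early core nodes tend to become hubs."""
    rng = random.Random(seed)
    degree = {i: 1 for i in range(n_manufacturers)}  # seed core nodes
    edges = []
    for new in range(n_manufacturers, n_total):
        # Roulette-wheel choice proportional to current degree.
        total = sum(degree.values())
        r = rng.uniform(0, total)
        acc = 0.0
        for node, d in degree.items():
            acc += d
            if acc >= r:
                target = node
                break
        edges.append((new, target))
        degree[new] = 1
        degree[target] += 1
    return degree, edges

degree, edges = grow_supply_network()
hubs = sorted(degree, key=degree.get, reverse=True)[:5]  # the emergent cores
```

Comparing degree distributions of such grown networks against real supply chain data is the kind of structural validation the 38 benchmark data sets support.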

  13. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and 88% of the output results have absolute errors less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.

  14. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    A large number of small Unmanned Aerial Vehicles (sUAVs) are projected to operate in the near future. Potential sUAV applications include, but are not limited to, search and rescue, inspection and surveillance, aerial photography and video, precision agriculture, and parcel delivery. sUAVs are expected to operate in the uncontrolled Class G airspace, at or below 500 feet above ground level (AGL), where many static and dynamic constraints exist, such as ground properties and terrain, restricted areas, various winds, manned helicopters, and conflict avoidance among sUAVs. How to enable safe, efficient, and massive sUAV operations in low-altitude airspace remains a great challenge. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative works on establishing infrastructure and developing policies, requirements, and rules to enable safe and efficient sUAV operations. To achieve this goal, it is important to gain insight into future UTM traffic operations through simulations, where an accurate trajectory model plays an extremely important role. Moreover, as in current aviation development, trajectory modeling should also serve as the foundation for any advanced concepts and tools in UTM. Accurate models of sUAV dynamics and control systems are very important considering the requirement of meter-level precision in UTM operations. The vehicle dynamics are relatively easy to derive and model; however, vehicle control systems remain unknown, as they are usually kept by manufacturers as part of their intellectual property. That brings challenges to trajectory modeling for sUAVs: how can a vehicle's trajectory be modeled with an unknown control system? This work proposes to use a neural network to model a vehicle's trajectory. The neural network is first trained to learn the vehicle's responses at numerous conditions. Once fully trained, given current vehicle states, winds, and desired future trajectory, the neural

  15. Quantitative system validation in model driven design

    DEFF Research Database (Denmark)

    Hermanns, Hilger; Larsen, Kim Guldstrand; Raskin, Jean-Francois

    2010-01-01

    The European STREP project Quasimodo1 develops theory, techniques and tool components for handling quantitative constraints in model-driven development of real-time embedded systems, covering in particular real-time, hybrid and stochastic aspects. This tutorial highlights the advances made, focus...

  16. Distinguishing Valid from Invalid Causal Indicator Models

    Science.gov (United States)

    Cadogan, John W.; Lee, Nick

    2016-01-01

    In this commentary from Issue 14, n3, authors John Cadogan and Nick Lee applaud the paper by Aguirre-Urreta, Rönkkö, and Marakas "Measurement: Interdisciplinary Research and Perspectives", 14(3), 75-97 (2016), since their explanations and simulations work toward demystifying causal indicator models, which are often used by scholars…

  17. Kinematic Structural Modelling in Bayesian Networks

    Science.gov (United States)

    Schaaf, Alexander; de la Varga, Miguel; Florian Wellmann, J.

    2017-04-01

    We commonly capture our knowledge about the spatial distribution of distinct geological lithologies in the form of 3-D geological models. Several methods exist to create these models, each with its own strengths and limitations. We present here an approach to combine the functionalities of two modeling approaches - implicit interpolation and kinematic modelling methods - into one framework, while explicitly considering parameter uncertainties and thus model uncertainty. In recent work, we proposed an approach to implement implicit modelling algorithms into Bayesian networks. This was done to address the issues of input data uncertainty and integration of geological information from varying sources in the form of geological likelihood functions. However, one general shortcoming of implicit methods is that they usually do not take any physical constraints into consideration, which can result in unrealistic model outcomes and artifacts. On the other hand, kinematic structural modelling intends to reconstruct the history of a geological system based on physically driven kinematic events. This type of modelling incorporates simplified, physical laws into the model, at the cost of a substantial increment of usable uncertain parameters. In the work presented here, we show an integration of these two different modelling methodologies, taking advantage of the strengths of both of them. First, we treat the two types of models separately, capturing the information contained in the kinematic models and their specific parameters in the form of likelihood functions, in order to use them in the implicit modelling scheme. We then go further and combine the two modelling approaches into one single Bayesian network. This enables the direct flow of information between the parameters of the kinematic modelling step and the implicit modelling step and links the exclusive input data and likelihoods of the two different modelling algorithms into one probabilistic inference framework. 

  18. Systems biology of plant molecular networks: from networks to models

    NARCIS (Netherlands)

    Valentim, F.L.

    2015-01-01

    Developmental processes are controlled by gene regulatory networks (GRNs), which are tightly coordinated networks of transcription factors (TFs) that activate and repress gene expression within a spatial and temporal context. In Arabidopsis thaliana, the key components and network structures of the GRNs

  19. Modeling and optimization of Quality of Service routing in Mobile Ad hoc Networks

    Directory of Open Access Journals (Sweden)

    Rafsanjani Marjan Kuchaki

    2016-01-01

    Full Text Available Mobile ad hoc networks (MANETs) are groups of mobile nodes connected without a fixed infrastructure. In these networks, nodes communicate with each other by forming single-hop or multi-hop paths. To design effective mobile ad hoc networks, it is important to evaluate the performance of multi-hop paths. In this paper, we present a mathematical model of a routing protocol in terms of the energy consumption and packet delivery ratio of multi-hop paths. In this model, we use geometric random graphs rather than random graphs. Our proposed model finds effective paths that minimize energy consumption and maximize the packet delivery ratio of the network. Validation of the mathematical model is performed through simulation.
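The use of geometric random graphs for multi-hop path evaluation can be illustrated with a small sketch: nodes are placed uniformly in the unit square and connected whenever they lie within an assumed radio range, and breadth-first search then gives the hop count of a multi-hop path. The node count and range are arbitrary choices, not the paper's parameters:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)
n, radius = 50, 0.25                 # assumed node count and radio range
pts = rng.uniform(0, 1, (n, 2))      # node positions in the unit square

# Geometric random graph: nodes connect iff within radio range
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
adj = (d < radius) & ~np.eye(n, dtype=bool)

def hop_count(src, dst):
    """BFS multi-hop path length, or None if dst is unreachable."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return dist[u]
        for v in np.flatnonzero(adj[u]):
            v = int(v)
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None

h = hop_count(0, n - 1)
print("hops from node 0 to node 49:", h)
```

Unlike an Erdős-Rényi random graph, edges here depend on spatial proximity, which is what makes the geometric variant a better proxy for radio connectivity.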

  20. A generalized and parameterized interference model for cognitive radio networks

    KAUST Repository

    Mahmood, Nurul Huda

    2011-06-01

    For meaningful co-existence of cognitive radios with a primary system, it is imperative that the cognitive radio system be aware of how much interference it generates at the primary receivers. This can be done through statistical modeling of the interference as perceived at the primary receivers. In this work, we propose a generalized model for the interference generated by a cognitive radio network, in the presence of small- and large-scale fading, at a primary receiver located at the origin. We then demonstrate how this model can be used to estimate the impact of cognitive radio transmission on the primary receiver in terms of different outage probabilities. Finally, our analytical findings are validated through selected computer-based simulations. © 2011 IEEE.
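A Monte Carlo sketch of the kind of aggregate-interference and outage computation the record describes. The transmitter layout, path-loss exponent, fading distributions, and interference threshold are all illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
n_tx, trials = 20, 20000

# Cognitive transmitters scattered around a primary receiver at the origin;
# distances, fading models, and the threshold below are assumed for illustration
r = rng.uniform(0.5, 5.0, (trials, n_tx))          # transmitter distances
fading = rng.exponential(1.0, (trials, n_tx))      # small-scale (Rayleigh power)
shadow = rng.lognormal(0.0, 0.5, (trials, n_tx))   # large-scale (shadowing)
path_loss_exp = 3.5

# Aggregate interference power at the primary receiver, one value per trial
I = (fading * shadow * r ** (-path_loss_exp)).sum(axis=1)

threshold = 2.0                                    # assumed tolerable level
outage = float(np.mean(I > threshold))             # P(interference > threshold)
print(f"estimated outage probability: {outage:.3f}")
```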

  1. A Bayesian Network View on Nested Effects Models

    Directory of Open Access Journals (Sweden)

    Achim Tresch

    2009-01-01

    Full Text Available Nested effects models (NEMs are a class of probabilistic models that were designed to reconstruct a hidden signalling structure from a large set of observable effects caused by active interventions into the signalling pathway. We give a more flexible formulation of NEMs in the language of Bayesian networks. Our framework constitutes a natural generalization of the original NEM model, since it explicitly states the assumptions that are tacitly underlying the original version. Our approach gives rise to new learning methods for NEMs, which have been implemented in the R/Bioconductor package nem. We validate these methods in a simulation study and apply them to a synthetic lethality dataset in yeast.

  2. Advances in dynamic network modeling in complex transportation systems

    CERN Document Server

    Ukkusuri, Satish V

    2013-01-01

    This book focuses on the latest in dynamic network modeling, including route guidance and traffic control in transportation systems and other complex infrastructure networks. Covers dynamic traffic assignment, flow modeling, mobile sensor deployment and more.

  3. A NEURAL OSCILLATOR-NETWORK MODEL OF TEMPORAL PATTERN GENERATION

    NARCIS (Netherlands)

    Schomaker, Lambert

    Most contemporary neural network models deal with essentially static, perceptual problems of classification and transformation. Models such as multi-layer feedforward perceptrons generally do not incorporate time as an essential dimension, whereas biological neural networks are inherently temporal

  4. Gene regulatory network inference and validation using relative change ratio analysis and time-delayed dynamic Bayesian network.

    Science.gov (United States)

    Li, Peng; Gong, Ping; Li, Haoni; Perkins, Edward J; Wang, Nan; Zhang, Chaoyang

    2014-12-01

    The Dialogue for Reverse Engineering Assessments and Methods (DREAM) project was initiated in 2006 as a community-wide effort to develop network inference challenges for rigorous assessment of reverse engineering methods for biological networks. We participated in the in silico network inference challenge of DREAM3 in 2008. Here we report the details of our approach and its performance on the synthetic challenge datasets. In our methodology, we first developed a model called the relative change ratio (RCR), which took advantage of the heterozygous knockdown data and null-mutant knockout data provided by the challenge in order to identify potential regulators for the genes. With this information, a time-delayed dynamic Bayesian network (TDBN) approach was then used to infer gene regulatory networks from time-series trajectory datasets. Our approach considerably reduced the search space of the TDBN and hence achieved much higher efficiency and accuracy. The networks predicted using our approach were evaluated comparatively, along with 29 other submissions, by two metrics (area under the ROC curve and area under the precision-recall curve). The overall performance of our approach ranked second among all participating teams.
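The relative-change-ratio idea, flagging candidate regulators from knockout data, can be sketched roughly as follows. The expression values, the simplified RCR formula, and the cutoff are illustrative assumptions, not the authors' exact definitions:

```python
import numpy as np

rng = np.random.default_rng(6)
n_genes = 6
wild_type = rng.uniform(1.0, 10.0, n_genes)        # steady-state expression

# Hypothetical null-mutant data: row k holds expression of every gene after
# knocking out gene k. Here knocking out gene 0 perturbs genes 2 and 4.
knockout = np.tile(wild_type, (n_genes, 1))
knockout[0, 2] *= 0.2
knockout[0, 4] *= 3.0

# A simple reading of the relative change ratio: how much gene j moves,
# relative to wild type, when gene k is deleted
rcr = np.abs(knockout - wild_type) / wild_type

threshold = 0.5                                    # assumed cutoff
regulators = [(k, j) for k in range(n_genes) for j in range(n_genes)
              if rcr[k, j] > threshold]
print("candidate regulator -> target pairs:", regulators)
```

Restricting the Bayesian network search to such candidate pairs is what shrinks the TDBN search space in the described pipeline.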

  5. Disease gene prioritization by integrating tissue-specific molecular networks using a robust multi-network model.

    Science.gov (United States)

    Ni, Jingchao; Koyuturk, Mehmet; Tong, Hanghang; Haines, Jonathan; Xu, Rong; Zhang, Xiang

    2016-11-10

    recover true associations more accurately than other methods in terms of AUC values, and the performance differences are significant (paired t-test p-values less than 0.05). This validates the importance of integrating tissue-specific molecular networks for studying disease gene prioritization and shows the superiority of our network models and ranking algorithms for this purpose. The source code and datasets are available at http://nijingchao.github.io/CRstar/ .

  6. Experiments for foam model development and validation.

    Energy Technology Data Exchange (ETDEWEB)

    Bourdon, Christopher Jay; Cote, Raymond O.; Moffat, Harry K.; Grillet, Anne Mary; Mahoney, James F. (Honeywell Federal Manufacturing and Technologies, Kansas City Plant, Kansas City, MO); Russick, Edward Mark; Adolf, Douglas Brian; Rao, Rekha Ranjana; Thompson, Kyle Richard; Kraynik, Andrew Michael; Castaneda, Jaime N.; Brotherton, Christopher M.; Mondy, Lisa Ann; Gorby, Allen D.

    2008-09-01

    A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum level finite element model of the blowing process.

  7. Finite element model validation of bridge based on structural health monitoring—Part II: Uncertainty propagation and model validation

    Directory of Open Access Journals (Sweden)

    Xiaosong Lin

    2015-08-01

    Full Text Available Because of uncertainties involved in modeling, construction, and measurement systems, assessment of FE model validity must be conducted based on stochastic measurements to provide designers with confidence for further applications. In this study, based on a model updated using response surface methodology, a practical model validation methodology via uncertainty propagation is presented. Several criteria of testing/analysis correlation are introduced, and the sources of model and testing uncertainties are discussed. A Monte Carlo stochastic finite element (FE) method is then employed to perform the uncertainty quantification and propagation. The proposed methodology is illustrated by examining the validity of a large-span prestressed concrete continuous rigid-frame bridge monitored under operational conditions. The calculated frequencies and vibration modes of the updated FE model of Xiabaishi Bridge are consistent with the measured ones: the relative errors of each frequency are all less than 3.7%, the overlap ratio indexes of each frequency are all more than 75%, and the MAC values of each calculated vibration mode are all more than 90%. The model of Xiabaishi Bridge is valid in the whole operation space, including the experimental design space, with a confidence level higher than 95%. The validated FE model can reflect the current condition of Xiabaishi Bridge and can serve as a basis for bridge health monitoring, damage identification, and safety assessment.
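One of the correlation criteria mentioned, the MAC (Modal Assurance Criterion) between calculated and measured vibration modes, is easy to state in code. The mode shapes below are synthetic stand-ins, not data from Xiabaishi Bridge:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors, in [0, 1]."""
    num = abs(phi_a @ phi_b) ** 2
    return num / ((phi_a @ phi_a) * (phi_b @ phi_b))

rng = np.random.default_rng(2)
measured = rng.normal(size=8)                        # hypothetical measured shape
calculated = measured + 0.05 * rng.normal(size=8)    # FE prediction, small error
unrelated = rng.normal(size=8)                       # an uncorrelated shape

m_good = mac(measured, calculated)
m_bad = mac(measured, unrelated)
print(f"MAC(measured, calculated) = {m_good:.3f}")
print(f"MAC(measured, unrelated)  = {m_bad:.3f}")
```

A MAC near 1 indicates the two shapes are essentially the same mode, which is the sense in which the record's ">90%" values support model validity.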

  8. A soil moisture and temperature network for SMOS validation in Western Denmark

    Directory of Open Access Journals (Sweden)

    S. Bircher

    2012-05-01

    Full Text Available The Soil Moisture and Ocean Salinity (SMOS) mission acquires surface soil moisture data with global coverage every three days. Product validation for a range of climate and environmental conditions across continents is a crucial step. For this purpose, a soil moisture and soil temperature sensor network was established in the Skjern River Catchment, Denmark. The objectives of this article are to describe a method to implement a network suited for SMOS validation, and to present sample data collected by the network to verify the approach. The design phase included (1) selection of a single SMOS pixel (44 × 44 km) which is representative of the land surface conditions of the catchment and has minimal impact from open water, (2) arrangement of three network clusters along the precipitation gradient, and (3) distribution of the stations according to the respective fractions of classes representing the prevailing environmental conditions. Overall, measured moisture and temperature patterns could be related to the respective land cover and soil conditions. Texture-dependency of the 0–5 cm soil moisture measurements was demonstrated. Regional differences in 0–5 cm soil moisture, temperature and precipitation between the north-east and south-west were found to be small. A first comparison between the 0–5 cm network averages and the SMOS soil moisture (level 2) product is in range with worldwide validation results, showing comparable trends for SMOS retrieved soil moisture (R² of 0.49) as well as for the initial soil moisture and temperature from ECMWF used in the retrieval algorithm (R² of 0.67 and 0.97, respectively). While the retrieved/initial SMOS soil moisture indicates significant under-/overestimation of the network data (biases of −0.092/0.057 m³ m⁻³), the initial temperature is in good agreement (bias of −0.2 °C). Based on these findings, the network performs according to expectations and proves to be
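The bias and R² statistics used in such in-situ versus satellite comparisons are straightforward to compute; a sketch with synthetic stand-in data (not the Skjern network measurements) might look like:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100
# Hypothetical in-situ network averages of 0-5 cm soil moisture (m3/m3)
network = rng.uniform(0.05, 0.40, n)
# Hypothetical retrieval: correlated with the network but drier on average
retrieved = 0.8 * network - 0.05 + rng.normal(0, 0.03, n)

bias = float(np.mean(retrieved - network))          # retrieval minus ground truth
r = float(np.corrcoef(network, retrieved)[0, 1])    # Pearson correlation
print(f"bias = {bias:.3f} m3/m3, R^2 = {r ** 2:.2f}")
```

A negative bias with a decent R², as constructed here, mirrors the record's pattern of comparable trends but systematic underestimation.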

  9. Model of Opinion Spreading in Social Networks

    CERN Document Server

    Kanovsky, Igor

    2011-01-01

    We propose a new model that captures the main difference between information and opinion spreading. In information spreading, additional exposure to certain information has a small effect. In contrast, when an actor is exposed to two opinioned actors, the probability of adopting the opinion is significantly higher than in the case of contact with one such actor (called by J. Kleinberg "the 0-1-2 effect"). In each time step, if an actor does not have an opinion, we randomly choose two of his network neighbors. If one of them has an opinion, the actor adopts the opinion with some low probability; if two, with a higher probability. Opinion spreading was simulated on different real-world social networks and on similar random scale-free networks. The results show that the small-world structure has a crucial impact on the tipping-point time. The "0-1-2" effect causes a significant difference in the ability of actors to start opinion spreading: an actor is an influencer according to his topological position in the network.
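A minimal simulation of the described "0-1-2 effect" on a random graph; the graph model, adoption probabilities, and seeding are assumptions for illustration only:

```python
import random

random.seed(3)

# Assumed toy network: an Erdos-Renyi-style random graph
n, p_edge = 200, 0.04
nbrs = {i: [] for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p_edge:
            nbrs[i].append(j)
            nbrs[j].append(i)

# "0-1-2 effect": adoption probability jumps when both sampled
# neighbours already hold the opinion (both values are assumptions)
P1, P2 = 0.05, 0.5
opinion = [False] * n
for seed_node in range(5):          # a few initial adopters
    opinion[seed_node] = True

for step in range(50):
    new = opinion[:]
    for i in range(n):
        if opinion[i] or len(nbrs[i]) < 2:
            continue
        a, b = random.sample(nbrs[i], 2)   # two random neighbours
        k = opinion[a] + opinion[b]        # 0, 1 or 2 opinioned neighbours
        if (k == 1 and random.random() < P1) or (k == 2 and random.random() < P2):
            new[i] = True
    opinion = new

print("adopters after 50 steps:", sum(opinion))
```

Re-running with P2 set equal to P1 removes the 0-1-2 effect and typically slows the spread, which is the contrast the model is built around.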

  10. Validity of skin cancer malignancy reporting to the Organ Procurement Transplant Network: A cohort study.

    Science.gov (United States)

    Garrett, Giorgia L; Yuan, Joyce T; Shin, Thuzar M; Arron, Sarah T

    2018-02-01

    The Organ Procurement Transplant Network (OPTN) registry collects data on posttransplant malignancies in solid organ transplant recipients. Complete and accurate registry data on skin cancer is critical for research on epidemiology and interventions. The study goal was to determine the validity of Organ Procurement Transplant Network skin cancer data. This cohort study compared reporting of posttransplant squamous cell carcinoma (SCC) and malignant melanoma (MM) in OPTN to medical-record review-derived data from the Transplant Skin Cancer Network (TSCN) database. In total, 4934 organ transplant recipients from the TSCN database were linked to patient-level OPTN malignancy data. We calculated sensitivity, specificity, correct classification (CC), positive predictive value (PPV), and negative predictive value (NPV) for SCC and MM reporting in the OPTN database. OPTN reporting for SCC (population prevalence 11%) had sensitivity 41%, specificity 99%, PPV 88%, NPV 93%, and CC 93%. OPTN reporting for MM (population prevalence 1%) had sensitivity 22%, specificity 100%, PPV 73%, NPV 99%, and CC 99%. Only a subset of patients in the TSCN cohort had matched United Network for Organ Sharing cancer registry data for comparison. OPTN reporting had poor sensitivity but excellent specificity for SCC and MM. Dermatologists and transplant physicians are encouraged to improve the validity of OPTN skin cancer data through improved communication and reporting. Published by Elsevier Inc.
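The validity measures reported here (sensitivity, specificity, PPV, NPV, and correct classification) all derive from a 2×2 confusion table. The counts below are illustrative only, chosen to roughly reproduce the reported SCC percentages; they are not the study's published table:

```python
def validity_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and correct classification
    from a 2x2 confusion table, as used to audit registry reporting."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "correct_classification": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for a 4934-patient cohort, tuned to match the
# reported SCC figures (sens 41%, spec 99%, PPV 88%, NPV 93%, CC 93%)
m = validity_metrics(tp=222, fp=30, fn=320, tn=4362)
print({k: round(v, 2) for k, v in m.items()})
```

The pattern visible here, low sensitivity despite high specificity and CC, is exactly why a rare outcome can look "99% correct" while most true cases go unreported.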

  11. A Comparison of Geographic Information Systems, Complex Networks, and Other Models for Analyzing Transportation Network Topologies

    Science.gov (United States)

    Alexandrov, Natalia (Technical Monitor); Kuby, Michael; Tierney, Sean; Roberts, Tyler; Upchurch, Christopher

    2005-01-01

    This report reviews six classes of models that are used for studying transportation network topologies. The report is motivated by two main questions. First, what can the "new science" of complex networks (scale-free, small-world networks) contribute to our understanding of transport network structure, compared to more traditional methods? Second, how can geographic information systems (GIS) contribute to studying transport networks? The report defines terms that can be used to classify different kinds of models by their function, composition, mechanism, spatial and temporal dimensions, certainty, linearity, and resolution. Six broad classes of models for analyzing transport network topologies are then explored: GIS; static graph theory; complex networks; mathematical programming; simulation; and agent-based modeling. Each class of models is defined and classified according to the attributes introduced earlier. The paper identifies some typical types of research questions about network structure that have been addressed by each class of model in the literature.

  12. Towards a model-based development approach for wireless sensor-actuator network protocols

    DEFF Research Database (Denmark)

    Kumar S., A. Ajith; Simonsen, Kent Inge

    2014-01-01

    Model-Driven Software Engineering (MDSE) is a promising approach for the development of applications and has been well adopted in the embedded applications domain in recent years. Wireless sensor-actuator networks consisting of resource-constrained hardware and platform-specific operating system… induced due to manual translations. With the use of formal semantics in the modeling approach, we can further ensure the correctness of the source model by means of verification. Also, with the use of network simulators and formal modeling tools, we obtain a verified and validated model to be used…

  13. Mathematical model for spreading dynamics of social network worms

    Science.gov (United States)

    Sun, Xin; Liu, Yan-Heng; Li, Bin; Li, Jin; Han, Jia-Wei; Liu, Xue-Jie

    2012-04-01

    In this paper, a mathematical model for social network worm spreading is presented from the viewpoint of social engineering. This model consists of two submodels. Firstly, a human behavior model based on game theory is suggested for modeling and predicting the expected behaviors of a network user encountering malicious messages. The game situation models the actions of a user under the condition that the system may be infected at the time of opening a malicious message. Secondly, a social network accessing model is proposed to characterize the dynamics of network users, by which the number of online susceptible users can be determined at each time step. Several simulation experiments are carried out on artificial social networks. The results show that (1) the proposed mathematical model can well describe the spreading dynamics of social network worms; (2) weighted network topology greatly affects the spread of worms; (3) worms spread even faster on hybrid social networks.

  14. Modeling regulatory networks with weight matrices

    DEFF Research Database (Denmark)

    Weaver, D.C.; Workman, Christopher; Stormo, Gary D.

    1999-01-01

    Systematic gene expression analyses provide comprehensive information about the transcriptional response to different environmental and developmental conditions. With enough gene expression data points, computational biologists may eventually generate predictive computer models of transcription… regulation. Such models will require computational methodologies consistent with the behavior of known biological systems that remain tractable. We represent regulatory relationships between genes as linear coefficients or weights, with the "net" regulation influence on a gene's expression being… the mathematical summation of the independent regulatory inputs. Test regulatory networks generated with this approach display stable and cyclically stable gene expression levels, consistent with known biological systems. We include variables to model the effect of environmental conditions on transcription regulation…
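The weight-matrix idea, net regulation as a weighted sum of regulatory inputs passed through a bounding function, can be sketched as a simple iteration. The weights and the squashing function below are arbitrary choices for illustration, not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(4)
n_genes = 5
# Regulatory influences as a weight matrix (assumed random for illustration):
# positive entries act as activation, negative entries as repression
W = rng.normal(0, 0.8, (n_genes, n_genes))

def squash(u):
    """Sigmoid keeps expression levels bounded in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-u))

x = rng.uniform(0, 1, n_genes)        # initial expression levels
traj = [x]
for t in range(200):
    # net regulation on each gene = weighted sum of all regulatory inputs
    x = squash(W @ x)
    traj.append(x)

drift = float(np.abs(traj[-1] - traj[-2]).max())
print("late-step change in expression:", drift)
```

Depending on W, such an iteration settles to a fixed point or a cycle, matching the "stable and cyclically stable" behavior the record describes.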

  15. Artificial Neural Network Modeling of an Inverse Fluidized Bed ...

    African Journals Online (AJOL)

    The application of neural networks to model a laboratory scale inverse fluidized bed reactor has been studied. A Radial Basis Function neural network has been successfully employed for the modeling of the inverse fluidized bed reactor. In the proposed model, the trained neural network represents the kinetics of biological ...

  16. Modeling social influence through network autocorrelation : constructing the weight matrix

    NARCIS (Netherlands)

    Leenders, Roger Th. A. J.

    Many physical and social phenomena are embedded within networks of interdependencies, the so-called 'context' of these phenomena. In network analysis, this type of process is typically modeled as a network autocorrelation model. Parameter estimates and inferences based on autocorrelation models,
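A network autocorrelation model is commonly written y = ρWy + Xβ + ε, so simulating outcomes means solving y = (I − ρW)⁻¹(Xβ + ε). A sketch with an assumed random weight matrix (the paper's concern, how W is constructed, is exactly the part assumed away here):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
# Row-normalised weight matrix W encoding who influences whom (assumed random)
A = (rng.uniform(size=(n, n)) < 0.2).astype(float)
np.fill_diagonal(A, 0.0)
W = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)

rho, beta = 0.4, 2.0                  # autocorrelation and covariate effect
X = rng.normal(size=(n, 1))
eps = rng.normal(scale=0.1, size=n)

# y = rho*W*y + X*beta + eps  =>  y = (I - rho*W)^(-1) (X*beta + eps)
y = np.linalg.solve(np.eye(n) - rho * W, X[:, 0] * beta + eps)
print("simulated outcomes, first 3:", np.round(y[:3], 3))
```

Because parameter estimates depend on W through the (I − ρW)⁻¹ term, different specifications of the weight matrix can change ρ and β substantially, which is the point the record makes.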

  17. Validation of an Efficient Outdoor Sound Propagation Model Using BEM

    DEFF Research Database (Denmark)

    Quirós-Alpera, S.; Henriquez, Vicente Cutanda; Jacobsen, Finn

    2001-01-01

    An approximate, simple and practical model for prediction of outdoor sound propagation exists based on ray theory, diffraction theory and Fresnel-zone considerations [1]. This model, which can predict sound propagation over non-flat terrain, has been validated for combinations of flat ground, hills...... and barriers, but it still needs to be validated for configurations that involve combinations of valleys and barriers. In order to do this a boundary element model has been implemented in MATLAB to serve as a reliable reference....

  18. Predictive models of safety based on audit findings: Part 2: Measurement of model validity.

    Science.gov (United States)

    Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor

    2013-07-01

    Part 1 of this study sequence developed a human factors/ergonomics (HF/E) based classification system (termed HFACS-MA) for safety audit findings and proved its measurement reliability. In Part 2, we used the human error categories of HFACS-MA as predictors of future safety performance. Audit records and monthly safety incident reports from two airlines, submitted to their regulatory authority, were available for analysis, covering over 6.5 years. Two participants derived consensus classifications of HF/E errors from the audit reports using HFACS-MA. We adopted Neural Network and Poisson regression methods to establish nonlinear and linear prediction models, respectively. These models were tested for their validity in predicting the safety data, and only the Neural Network method yielded substantial, significant predictive ability for each airline. Alternative predictions from counts of audit findings and from the time sequence of the safety data produced some significant results, but of much smaller magnitude than HFACS-MA. The use of HF/E analysis of audit findings provided proactive predictors of future safety performance in the aviation maintenance field. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  19. Reverse electrodialysis : A validated process model for design and optimization

    NARCIS (Netherlands)

    Veerman, J.; Saakes, M.; Metz, S. J.; Harmsen, G. J.

    2011-01-01

    Reverse electrodialysis (RED) is a technology to generate electricity using the entropy of the mixing of sea and river water. A model is made of the RED process and validated experimentally. The model is used to design and optimize the RED process. It predicts very small differences between counter-

  20. Landslide Tsunami Generation Models: Validation and Case Studies

    Science.gov (United States)

    Watts, P.; Grilli, S. T.; Kirby, J. T.; Fryer, G. J.; Tappin, D. R.

    2002-12-01

    There has been a proliferation of landslide tsunami generation and propagation models in recent times, spurred largely by the 1998 Papua New Guinea event. However, few of these models or techniques have been carefully validated, and few have proven capable of integrating the best available geological data and interpretations into convincing case studies. The Tsunami Open and Progressive Initial Conditions System (TOPICS) rapidly provides approximate landslide tsunami sources for tsunami propagation models. We present 3D laboratory experiments and 3D Boundary Element Method simulations that validate the tsunami sources given by TOPICS. Geowave is a combination of TOPICS with the fully nonlinear and dispersive Boussinesq model FUNWAVE, which has been the subject of extensive testing and validation over the course of the last decade. Geowave is currently a tsunami community model made available to all tsunami researchers on the web site www.tsunamicommunity.org. We validate Geowave with case studies of the 1946 Unimak, Alaska, the 1994 Skagway, Alaska, and the 1998 Papua New Guinea events. The benefits of Boussinesq wave propagation over traditional shallow water wave models are very apparent for these relatively steep and nonlinear waves. For the first time, a tsunami community model appears sufficiently powerful to reproduce all observations and records with the first numerical simulation. This can only be accomplished by first assembling geological data and interpretations into a reasonable tsunami source.

  1. Validating Finite Element Models of Assembled Shell Structures

    Science.gov (United States)

    Hoff, Claus

    2006-01-01

    The validation of finite element models of assembled shell elements is presented. The topics include: 1) Problems with membrane rotations in assembled shell models; 2) Penalty stiffness for membrane rotations; 3) Physical stiffness for membrane rotations using shell elements with 6 dof per node; and 4) Connections avoiding rotations.

  2. Child human model development: a hybrid validation approach

    NARCIS (Netherlands)

    Forbes, P.A.; Rooij, L. van; Rodarius, C.; Crandall, J.

    2008-01-01

    The current study presents a development and validation approach of a child human body model that will help understand child impact injuries and improve the biofidelity of child anthropometric test devices. Due to the lack of fundamental child biomechanical data needed to fully develop such models a

  3. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    Science.gov (United States)

    Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.

  4. Specification and Estimation of Network Formation and Network Interaction Models with the Exponential Probability Distribution

    OpenAIRE

    Hsieh, Chih-Sheng; Lee, Lung fei

    2017-01-01

    In this paper, we model network formation and network interactions under a unified framework. The key feature of our model is to allow individuals to respond to incentives stemming from interaction benefits on certain activities when they choose friends (network links), while capturing homophily in terms of unobserved characteristic variables in network formation and activities. There are two advantages of this modeling approach: first, one can evaluate whether incentives from certain interac...

  5. A joint model of regulatory and metabolic networks

    Directory of Open Access Journals (Sweden)

    Vingron Martin

    2006-07-01

    Full Text Available Abstract Background Gene regulation and metabolic reactions are two primary activities of life. Although many works have been dedicated to studying each system, the coupling between them is less well understood. To bridge this gap, we propose a joint model of gene regulation and metabolic reactions. Results We integrate regulatory and metabolic networks by adding links specifying the feedback control from the substrates of metabolic reactions to enzyme gene expressions. We adopt two alternative approaches to build those links: inferring the links between metabolites and transcription factors to fit the data, or explicitly encoding the general hypotheses of feedback control as links between metabolites and enzyme expressions. Perturbation data are explained by paths in the joint network if the predicted response along the paths is consistent with the observed response. The consistency requirement for explaining the perturbation data imposes constraints on the attributes in the network, such as the functions of links and the activities of paths. We build a probabilistic graphical model over the attributes to specify these constraints, and apply an inference algorithm to identify the attribute values which optimally explain the data. The inferred models allow us to (1) identify the feedback links between metabolites and regulators and their functions, (2) identify the active paths responsible for relaying perturbation effects, (3) computationally test the general hypotheses pertaining to the feedback control of enzyme expressions, and (4) evaluate the advantage of an integrated model over separate systems. Conclusion The modeling results provide insight about the mechanisms of the coupling between the two systems and possible "design rules" pertaining to enzyme gene regulation. The model can be used to investigate the less well-probed systems and generate consistent hypotheses and predictions for further validation.

  6. On the development and validation of QSAR models.

    Science.gov (United States)

    Gramatica, Paola

    2013-01-01

    The fundamental and more critical steps that are necessary for the development and validation of QSAR models are presented in this chapter as best practices in the field. These procedures are discussed in the context of predictive QSAR modelling that is focused on achieving models of the highest statistical quality and with external predictive power. The most important and most used statistical parameters needed to verify the real performances of QSAR models (of both linear regression and classification) are presented. Special emphasis is placed on the validation of models, both internally and externally, as well as on the need to define model applicability domains, which should be done when models are employed for the prediction of new external compounds.

  7. Challenges on Probabilistic Modeling for Evolving Networks

    OpenAIRE

    Ding, Jianguo; Bouvry, Pascal

    2013-01-01

    With the emerging of new networks, such as wireless sensor networks, vehicle networks, P2P networks, cloud computing, mobile Internet, or social networks, the network dynamics and complexity expands from system design, hardware, software, protocols, structures, integration, evolution, application, even to business goals. Thus the dynamics and uncertainty are unavoidable characteristics, which come from the regular network evolution and unexpected hardware defects, unavoidable software errors,...

  8. Impact of Loss Synchronization on Reliable High Speed Networks: A Model Based Simulation

    Directory of Open Access Journals (Sweden)

    Suman Kumar

    2014-01-01

    Full Text Available The contemporary nature of network evolution demands simulation models which are flexible, scalable, and easily implementable. In this paper, we propose a fluid based model for performance analysis of reliable high speed networks. In particular, this paper aims to study the dynamic relationship between congestion control algorithms and queue management schemes, in order to develop a better understanding of the causal linkages between the two. We propose a loss synchronization module which is user configurable. We validate our model through simulations under controlled settings. Also, we present a performance analysis to provide insights into two important issues concerning 10 Gbps high speed networks: (i) the impact of bottleneck buffer size on the performance of a 10 Gbps high speed network and (ii) the impact of the level of loss synchronization on link utilization-fairness tradeoffs. The practical impact of the proposed work is to provide design guidelines along with a powerful simulation tool to protocol designers and network developers.
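
Fluid models of this kind describe the aggregate congestion window with an ODE rather than per-packet events. A minimal Euler-integration sketch in the style of the classic Misra–Gong–Towsley TCP fluid model is shown below; all constants (RTT, capacity, buffer, loss probability) are illustrative and are not the paper's parameters, and the synchronized-loss rule is reduced to "all flows see loss once the buffer is full".

```python
# Euler sketch of a TCP-style fluid model with synchronized losses:
#   dW/dt = 1/RTT - (W/2) * lambda(t),  lambda = p * sending_rate when q >= B
RTT, C, B = 0.1, 1000.0, 100.0   # RTT (s), link capacity (pkt/s), buffer (pkt)
p = 0.01                          # loss probability once the buffer is full
dt, steps = 0.001, 30000          # 30 simulated seconds
W, q = 1.0, 0.0                   # congestion window (pkt), queue (pkt)
Ws = []
for _ in range(steps):
    rate = W / RTT                         # aggregate sending rate
    lam = p * rate if q >= B else 0.0      # synchronized loss indications
    W += (1.0 / RTT - 0.5 * W * lam) * dt  # additive increase, loss decrease
    q = max(0.0, q + (rate - C) * dt)      # queue fills when rate exceeds C
    Ws.append(W)
print(round(min(Ws[20000:]), 1), round(max(Ws), 1))
```

After the initial ramp-up the window settles into the familiar sawtooth around the bandwidth-delay point (W ≈ C·RTT = 100 packets here), which is the qualitative behavior such fluid models are built to reproduce cheaply.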

  9. Capacity Model and Constraints Analysis for Integrated Remote Wireless Sensor and Satellite Network in Emergency Scenarios.

    Science.gov (United States)

    Zhang, Wei; Zhang, Gengxin; Dong, Feihong; Xie, Zhidong; Bian, Dongming

    2015-11-17

    This article investigates the capacity problem of an integrated remote wireless sensor and satellite network (IWSSN) in emergency scenarios. We formulate a general model to evaluate the remote sensor and satellite network capacity. Compared to most existing works for ground networks, the proposed model is time varying and space oriented. To capture the characteristics of a practical network, we sift through major capacity-impacting constraints and analyze the influence of these constraints. Specifically, we combine the geometric satellite orbit model and satellite tool kit (STK) engineering software to quantify the trends of the capacity constraints. Our objective in analyzing these trends is to provide insights and design guidelines for optimizing the integrated remote wireless sensor and satellite network schedules. Simulation results validate the theoretical analysis of capacity trends and show the optimization opportunities of the IWSSN.

  10. Capacity Model and Constraints Analysis for Integrated Remote Wireless Sensor and Satellite Network in Emergency Scenarios

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2015-11-01

    Full Text Available This article investigates the capacity problem of an integrated remote wireless sensor and satellite network (IWSSN) in emergency scenarios. We formulate a general model to evaluate the remote sensor and satellite network capacity. Compared to most existing works for ground networks, the proposed model is time varying and space oriented. To capture the characteristics of a practical network, we sift through major capacity-impacting constraints and analyze the influence of these constraints. Specifically, we combine the geometric satellite orbit model and satellite tool kit (STK) engineering software to quantify the trends of the capacity constraints. Our objective in analyzing these trends is to provide insights and design guidelines for optimizing the integrated remote wireless sensor and satellite network schedules. Simulation results validate the theoretical analysis of capacity trends and show the optimization opportunities of the IWSSN.

  11. Aeronautical telecommunications network advances, challenges, and modeling

    CERN Document Server

    Musa, Sarhan M

    2015-01-01

    Addresses the Challenges of Modern-Day Air Traffic Air traffic control (ATC) directs aircraft in the sky and on the ground to safety, while the Aeronautical Telecommunications Network (ATN) comprises all systems and phases that assist in aircraft departure and landing. The Aeronautical Telecommunications Network: Advances, Challenges, and Modeling focuses on the development of ATN and examines the role of the various systems that link aircraft with the ground. The book places special emphasis on ATC, introducing the modern ATC system from the perspective of the user and the developer, and provides a thorough understanding of the operating mechanism of the ATC system. It discusses the evolution of ATC, explaining its structure and how it works; includes design examples; and describes all subsystems of the ATC system. In addition, the book covers relevant tools, techniques, protocols, and architectures in ATN, including MIPv6, air traffic control (ATC), security of air traffic management (ATM), very-high-frequenc...

  12. Neural Network Program Package for Prosody Modeling

    Directory of Open Access Journals (Sweden)

    J. Santarius

    2004-04-01

    Full Text Available This contribution describes the programme for one part of the automatic Text-to-Speech (TTS) synthesis. Some experiments (for example [14]) documented the considerable improvement of the naturalness of synthetic speech, but this approach requires completing the input feature values by hand. This completion takes a lot of time for big files. We need to improve the prosody by other approaches which use only automatically classified features (input parameters). The artificial neural network (ANN) approach is used for the modeling of prosody parameters. The program package contains all modules necessary for the text and speech signal pre-processing, neural network training, sensitivity analysis, result processing and a module for the creation of the input data protocol for the Czech speech synthesizer ARTIC [1].

  13. Towards an evolutionary model of transcription networks.

    Directory of Open Access Journals (Sweden)

    Dan Xie

    2011-06-01

    Full Text Available DNA evolution models made invaluable contributions to comparative genomics, although it seemed formidable to include non-genomic features into these models. In order to build an evolutionary model of transcription networks (TNs), we had to forfeit the substitution model used in DNA evolution and to start from modeling the evolution of the regulatory relationships. We present a quantitative evolutionary model of TNs, subjecting the phylogenetic distance and the evolutionary changes of cis-regulatory sequence, gene expression and network structure to one probabilistic framework. Using the genome sequences and gene expression data from multiple species, this model can predict regulatory relationships between a transcription factor (TF) and its target genes in all species, and thus identify TN re-wiring events. Applying this model to analyze the pre-implantation development of three mammalian species, we identified the conserved and re-wired components of the TNs downstream to a set of TFs including Oct4, Gata3/4/6, cMyc and nMyc. Evolutionary events on the DNA sequence that led to turnover of TF binding sites were identified, including the birth of an Oct4 binding site by a 2nt deletion. In contrast to recent reports of large interspecies differences of TF binding sites and gene expression patterns, the interspecies difference in TF-target relationship is much smaller. The data showed increasing conservation levels from genomic sequences to TF-DNA interaction, gene expression, TN, and finally to morphology, suggesting that evolutionary changes are larger at molecular levels and smaller at functional levels. The data also showed that evolutionarily older TFs are more likely to have conserved target genes, whereas younger TFs tend to have larger re-wiring rates.

  14. Contributions and challenges for network models in cognitive neuroscience.

    Science.gov (United States)

    Sporns, Olaf

    2014-05-01

    The confluence of new approaches in recording patterns of brain connectivity and quantitative analytic tools from network science has opened new avenues toward understanding the organization and function of brain networks. Descriptive network models of brain structural and functional connectivity have made several important contributions; for example, in the mapping of putative network hubs and network communities. Building on the importance of anatomical and functional interactions, network models have provided insight into the basic structures and mechanisms that enable integrative neural processes. Network models have also been instrumental in understanding the role of structural brain networks in generating spatially and temporally organized brain activity. Despite these contributions, network models are subject to limitations in methodology and interpretation, and they face many challenges as brain connectivity data sets continue to increase in detail and complexity.

  15. Modeling of regional warehouse network generation

    Directory of Open Access Journals (Sweden)

    Popov Pavel Vladimirovich

    2016-08-01

    Full Text Available One of the factors that has a significant impact on the socio-economic development of the Russian Federation's regions is the logistics infrastructure, which provides integrated transportation and distribution service of material flows. One of the main elements of logistics infrastructure is the storage infrastructure, which includes distribution centers and distribution-and-sorting and sorting warehouses. It is most expedient to place a distribution center in the vicinity of the regional center. One of the tasks of creating a distribution network within the regions of the Russian Federation is to determine the location, capacity and number of stores. When determining the locations of general-purpose warehouses in a regional network, methodological approaches to solving location problems for production and non-production facilities can be used, which depend on various economic factors. The mathematical models for solving such problems are facility-location models. However, the existing models treat storage capacity as dimensionless. The purpose of the given work is to develop a model to determine the optimal location of general-purpose warehouses in the regions of the Russian Federation. At the first stage of the work, the authors assess the main economic indicators influencing the choice of the location of general-purpose warehouses. An algorithm for solving the first stage, based on ABC, discriminant and cluster analysis, was proposed by the authors in earlier papers. At the second stage, the specific locations of general-purpose warehouses and their capacities are chosen to minimize the cost of constructing and subsequently maintaining the warehouses and of transporting heterogeneous products. To solve this problem, the authors developed a mathematical model that takes into account the possibility of delivering heterogeneous goods from suppliers and manufacturers to distribution and sorting warehouses with a specified set of capacities. The model allows

  16. Bayesian Recurrent Neural Network for Language Modeling.

    Science.gov (United States)

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) is calculated as the probability of a word sequence that provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful to learn the large-span dynamics of a word sequence in the continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and apply it for continuous speech recognition. We aim to penalize the too complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to a Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
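
The core regularization idea in this record — a Gaussian prior on the parameters turning the cross-entropy training criterion into a penalized one — can be shown without an RNN. The sketch below uses a toy softmax "next-word" classifier in place of the paper's recurrent model; the vocabulary size, feature dimension, prior precision `lam` and learning rate are all invented, and the Hessian-based hyperparameter estimation of the paper is not attempted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: N contexts with D features each, predicting one of V "words".
V, D, N = 4, 3, 100
X = rng.normal(size=(N, D))            # context features (synthetic)
y = rng.integers(0, V, size=N)         # next-word indices (synthetic)
W = np.zeros((D, V))
lam = 0.1                              # precision of the Gaussian prior

def loss_and_grad(W):
    # MAP objective: mean cross-entropy + (lam/2)*||W||^2 from the prior
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    ce = -np.log(probs[np.arange(N), y]).mean()
    onehot = np.zeros((N, V)); onehot[np.arange(N), y] = 1.0
    grad = X.T @ (probs - onehot) / N + lam * W      # prior adds lam*W
    return ce + 0.5 * lam * (W**2).sum(), grad

losses = []
for _ in range(200):
    obj, g = loss_and_grad(W)
    losses.append(obj)
    W -= 0.5 * g                       # plain gradient descent on the MAP objective
print(losses[0], losses[-1])
```

The `lam * W` term in the gradient is exactly where the Gaussian prior enters; in the paper the analogous penalty regularizes the far larger RNN-LM parameter space.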

  17. Optimizing neural network models: motivation and case studies

    OpenAIRE

    Harp, S A; T. Samad

    2012-01-01

    Practical successes have been achieved with neural network models in a variety of domains, including energy-related industry. The large, complex design space presented by neural networks is only minimally explored in current practice. The satisfactory results that nevertheless have been obtained testify that neural networks are a robust modeling technology; at the same time, however, the lack of a systematic design approach implies that the best neural network models generally rem...

  18. A Complex Network Approach to Distributional Semantic Models.

    Directory of Open Access Journals (Sweden)

    Akira Utsumi

    Full Text Available A number of studies on network analysis have focused on language networks based on free word association, which reflects human lexical knowledge, and have demonstrated the small-world and scale-free properties in the word association network. Nevertheless, there have been very few attempts at applying network analysis to distributional semantic models, despite the fact that these models have been studied extensively as computational or cognitive models of human lexical knowledge. In this paper, we analyze three network properties, namely, small-world, scale-free, and hierarchical properties, of semantic networks created by distributional semantic models. We demonstrate that the created networks generally exhibit the same properties as word association networks. In particular, we show that the distribution of the number of connections in these networks follows the truncated power law, which is also observed in an association network. This indicates that distributional semantic models can provide a plausible model of lexical knowledge. Additionally, the observed differences in the network properties of various implementations of distributional semantic models are consistently explained or predicted by considering the intrinsic semantic features of a word-context matrix and the functions of matrix weighting and smoothing. Furthermore, to simulate a semantic network with the observed network properties, we propose a new growing network model based on the model of Steyvers and Tenenbaum. The idea underlying the proposed model is that both preferential and random attachments are required to reflect different types of semantic relations in network growth process. We demonstrate that this model provides a better explanation of network behaviors generated by distributional semantic models.

  19. Network modeling reveals prevalent negative regulatory relationships between signaling sectors in Arabidopsis immune signaling.

    Directory of Open Access Journals (Sweden)

    Masanao Sato

    Full Text Available Biological signaling processes may be mediated by complex networks in which network components and network sectors interact with each other in complex ways. Studies of complex networks benefit from approaches in which the roles of individual components are considered in the context of the network. The plant immune signaling network, which controls inducible responses to pathogen attack, is such a complex network. We studied the Arabidopsis immune signaling network upon challenge with a strain of the bacterial pathogen Pseudomonas syringae expressing the effector protein AvrRpt2 (Pto DC3000 AvrRpt2). This bacterial strain feeds multiple inputs into the signaling network, allowing many parts of the network to be activated at once. mRNA profiles for 571 immune response genes of 22 Arabidopsis immunity mutants and wild type were collected 6 hours after inoculation with Pto DC3000 AvrRpt2. The mRNA profiles were analyzed as detailed descriptions of changes in the network state resulting from the genetic perturbations. Regulatory relationships among the genes corresponding to the mutations were inferred by recursively applying a non-linear dimensionality reduction procedure to the mRNA profile data. The resulting static network model accurately predicted 23 of 25 regulatory relationships reported in the literature, suggesting that predictions of novel regulatory relationships are also accurate. The network model revealed two striking features: (i) the components of the network are highly interconnected; and (ii) negative regulatory relationships are common between signaling sectors. Complex regulatory relationships, including a novel negative regulatory relationship between the early microbe-associated molecular pattern-triggered signaling sectors and the salicylic acid sector, were further validated. We propose that prevalent negative regulatory relationships among the signaling sectors make the plant immune signaling network a "sector

  20. Neural network model to control an experimental chaotic pendulum

    NARCIS (Netherlands)

    Bakker, R; Schouten, JC; Takens, F; vandenBleek, CM

    1996-01-01

    A feedforward neural network was trained to predict the motion of an experimental, driven, and damped pendulum operating in a chaotic regime. The network learned the behavior of the pendulum from a time series of the pendulum's angle, the single measured variable. The validity of the neural

  1. Inferring gene regression networks with model trees

    Directory of Open Access Journals (Sweden)

    Aguilar-Ruiz Jesus S

    2010-10-01

    Full Text Available Abstract Background Novel strategies are required in order to handle the huge amount of data produced by microarray technologies. To infer gene regulatory networks, the first step is to find direct regulatory relationships between genes by building the so-called gene co-expression networks. They are typically generated using correlation statistics as pairwise similarity measures. Correlation-based methods are very useful in order to determine whether two genes have a strong global similarity but do not detect local similarities. Results We propose model trees as a method to identify gene interaction networks. While correlation-based methods analyze each pair of genes, in our approach we generate a single regression tree for each gene from the remaining genes. Finally, a graph of all the relationships among output and input genes is built, taking into account whether the pair of genes is statistically significant. For this reason we apply a statistical procedure to control the false discovery rate. The performance of our approach, named REGNET, is experimentally tested on two well-known data sets: the Saccharomyces cerevisiae and E. coli data sets. First, the biological coherence of the results is tested. Second, the E. coli transcriptional network (in the Regulon database) is used as a control to compare the results to those of a correlation-based method. This experiment shows that REGNET performs more accurately at detecting true gene associations than the Pearson and Spearman zeroth and first-order correlation-based methods. Conclusions REGNET generates gene association networks from gene expression data, and differs from correlation-based methods in that the relationship between one gene and others is calculated simultaneously. Model trees are very useful techniques to estimate the numerical values for the target genes by linear regression functions. 
They are very often more precise than linear regression models because they can add just different linear
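
The one-tree-per-gene idea can be sketched with depth-1 regression trees (stumps) standing in for REGNET's full model trees; the expression matrix below is synthetic, the variance-gain threshold is an arbitrary stand-in for the paper's false-discovery-rate control, and gene 3 is deliberately made a threshold-like (nonlinear) function of gene 0 that a plain correlation can understate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic expression matrix: 5 genes x 200 samples.
n_genes, n_samples = 5, 200
X = rng.normal(size=(n_genes, n_samples))
X[3] = np.where(X[0] > 0, 1.0, -1.0) + 0.1 * rng.normal(size=n_samples)

def best_stump(target, predictors):
    """Depth-1 regression tree: best (predictor, split) by SSE reduction."""
    base = np.var(target) * len(target)
    best = (None, 0.0)
    for j, x in enumerate(predictors):
        for t in np.quantile(x, [0.25, 0.5, 0.75]):
            left, right = target[x <= t], target[x > t]
            sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
            if base - sse > best[1]:
                best = (j, base - sse)
    return best

# One tree per target gene over all remaining genes; keep strong edges only.
edges = []
for g in range(n_genes):
    others = [i for i in range(n_genes) if i != g]
    j, gain = best_stump(X[g], X[others])
    if j is not None and gain > 0.5 * np.var(X[g]) * n_samples:
        edges.append((others[j], g))   # predictor -> target
print(edges)
```

The recovered edge from gene 0 to gene 3 illustrates the record's point: fitting a tree per target gene captures local (piecewise) dependencies that pairwise global correlation measures can miss.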

  2. The turbulent viscosity models and their experimental validation; Les modeles de viscosite turbulente et leur validation experimentale

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This workshop on turbulent viscosity models and their experimental validation was organized by the "convection" section of the French society of thermal engineers. Of the 9 papers presented at this workshop, 8 deal with the modeling of turbulent flows inside combustion chambers, turbo-machineries or in other energy-related applications, and have been selected for ETDE. (J.S.)

  3. An Introduction to Network Psychometrics: Relating Ising Network Models to Item Response Theory Models.

    Science.gov (United States)

    Marsman, M; Borsboom, D; Kruis, J; Epskamp, S; van Bork, R; Waldorp, L J; Maas, H L J van der; Maris, G

    2017-11-07

    In recent years, network models have been proposed as an alternative representation of psychometric constructs such as depression. In such models, the covariance between observables (e.g., symptoms like depressed mood, feelings of worthlessness, and guilt) is explained in terms of a pattern of causal interactions between these observables, which contrasts with classical interpretations in which the observables are conceptualized as the effects of a reflective latent variable. However, few investigations have been directed at the question how these different models relate to each other. To shed light on this issue, the current paper explores the relation between one of the most important network models-the Ising model from physics-and one of the most important latent variable models-the Item Response Theory (IRT) model from psychometrics. The Ising model describes the interaction between states of particles that are connected in a network, whereas the IRT model describes the probability distribution associated with item responses in a psychometric test as a function of a latent variable. Despite the divergent backgrounds of the models, we show a broad equivalence between them and also illustrate several opportunities that arise from this connection.
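
One concrete face of the Ising/IRT equivalence discussed in this record is that, in an Ising network, the conditional probability of a node being "on" given its neighbours is a logistic function of the local field — the same functional form as an item response curve. The sketch below checks this against a brute-force computation from the joint distribution; the couplings `J` and fields `h` are made up for illustration.

```python
import numpy as np

# Ising network over binary {0,1} nodes with symmetric couplings J and fields h.
J = np.array([[0.0, 0.8, 0.2],
              [0.8, 0.0, 0.5],
              [0.2, 0.5, 0.0]])
h = np.array([0.1, -0.3, 0.2])

def p_conditional(i, x):
    """P(x_i = 1 | x_-i): logistic in the local field (IRT-like curve)."""
    field = h[i] + J[i] @ x        # J[i][i] = 0, so x_i does not contribute
    return 1.0 / (1.0 + np.exp(-field))

def joint_unnorm(x):
    # unnormalized joint Ising weight exp(0.5 x'Jx + h'x)
    return np.exp(0.5 * x @ J @ x + h @ x)

x = np.array([1.0, 0.0, 1.0])
# Brute-force conditional for node 0 from the joint distribution:
num = joint_unnorm(np.array([1.0, x[1], x[2]]))
den = num + joint_unnorm(np.array([0.0, x[1], x[2]]))
print(abs(p_conditional(0, x) - num / den) < 1e-12)   # → True
```

Reading the local field `h[i] + J[i] @ x` as "discrimination times latent trait minus difficulty" is, loosely, the bridge to the IRT parameterization that the paper develops rigorously.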

  4. Social network models predict movement and connectivity in ecological landscapes

    Science.gov (United States)

    Fletcher, Robert J.; Acevedo, M.A.; Reichert, Brian E.; Pias, Kyle E.; Kitchens, Wiley M.

    2011-01-01

    Network analysis is on the rise across scientific disciplines because of its ability to reveal complex, and often emergent, patterns and dynamics. Nonetheless, a growing concern in network analysis is the use of limited data for constructing networks. This concern is strikingly relevant to ecology and conservation biology, where network analysis is used to infer connectivity across landscapes. In this context, movement among patches is the crucial parameter for interpreting connectivity but because of the difficulty of collecting reliable movement data, most network analysis proceeds with only indirect information on movement across landscapes rather than using observed movement to construct networks. Statistical models developed for social networks provide promising alternatives for landscape network construction because they can leverage limited movement information to predict linkages. Using two mark-recapture datasets on individual movement and connectivity across landscapes, we test whether commonly used network constructions for interpreting connectivity can predict actual linkages and network structure, and we contrast these approaches to social network models. We find that currently applied network constructions for assessing connectivity consistently, and substantially, overpredict actual connectivity, resulting in considerable overestimation of metapopulation lifetime. Furthermore, social network models provide accurate predictions of network structure, and can do so with remarkably limited data on movement. Social network models offer a flexible and powerful way for not only understanding the factors influencing connectivity but also for providing more reliable estimates of connectivity and metapopulation persistence in the face of limited data.

  5. Neural Networks For Electrohydrodynamic Effect Modelling

    Directory of Open Access Journals (Sweden)

    Wiesław Wajs

    2004-01-01

    Full Text Available This paper presents currently achieved results concerning methods of the electrohydrodynamic effect used in geophysics, simulated with feedforward networks trained with the backpropagation algorithm, radial basis function networks and generalized regression networks.

  6. Validated QSAR prediction of OH tropospheric degradation of VOCs: splitting into training-test sets and consensus modeling.

    Science.gov (United States)

    Gramatica, Paola; Pilutti, Pamela; Papa, Ester

    2004-01-01

    The rate constant for hydroxyl radical tropospheric degradation of 460 heterogeneous organic compounds is predicted by QSAR modeling. The applied Multiple Linear Regression is based on a variety of theoretical molecular descriptors, selected by the Genetic Algorithms-Variable Subset Selection (GA-VSS) procedure. The models were validated for predictivity by both internal and external validation. For the external validation two splitting approaches, D-optimal Experimental Design and Kohonen Artificial Neural Networks (K-ANN), were applied to the original data set to compare the two methodologies. We emphasize that external validation is the only way to establish a reliable QSAR model for predictive purposes. Predicted data by consensus modeling from different models are also proposed. Copyright 2004 American Chemical Society
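
The record's two key practices — an external test set never used in fitting, and consensus predictions averaged over different models — can be sketched with ordinary least squares. The descriptor matrix and response below are synthetic stand-ins (not the 460-compound OH rate-constant data), and the two consensus members are simply MLRs on different descriptor subsets; the splitting designs (D-optimal, K-ANN) from the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "descriptors" X and "response" y for 60 compounds.
X = rng.normal(size=(60, 3))
y = X @ np.array([1.5, -0.7, 0.3]) + rng.normal(scale=0.1, size=60)

# External validation split: the test block never enters any fit.
train, test = np.arange(40), np.arange(40, 60)

def fit_mlr(Xtr, ytr):
    # ordinary least squares with an intercept column
    A = np.column_stack([np.ones(len(Xtr)), Xtr])
    coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return coef

def predict(coef, Xte):
    return np.column_stack([np.ones(len(Xte)), Xte]) @ coef

# Consensus: average the predictions of two MLRs on different descriptor subsets.
c1 = fit_mlr(X[train][:, :2], y[train])
c2 = fit_mlr(X[train][:, 1:], y[train])
consensus = 0.5 * (predict(c1, X[test][:, :2]) + predict(c2, X[test][:, 1:]))

# External predictive squared correlation Q2_ext (training mean as baseline).
press = np.sum((y[test] - consensus) ** 2)
tss = np.sum((y[test] - y[train].mean()) ** 2)
q2_ext = 1 - press / tss
print(round(q2_ext, 2))
```

Because `q2_ext` is computed only on held-out compounds, it measures real predictive power rather than fit quality — the point the record insists on when it calls external validation the only way to establish a reliable predictive QSAR model.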

  7. Validation of spectral gas radiation models under oxyfuel conditions

    Energy Technology Data Exchange (ETDEWEB)

    Becher, Johann Valentin

    2013-05-15

    Combustion of hydrocarbon fuels with pure oxygen results in a different flue gas composition than combustion with air. Standard computational-fluid-dynamics (CFD) spectral gas radiation models for air combustion are therefore out of their validity range in oxyfuel combustion. This thesis provides a common spectral basis for the validation of new spectral models. A literature review about fundamental gas radiation theory, spectral modeling and experimental methods provides the reader with a basic understanding of the topic. In the first results section, this thesis validates detailed spectral models with high resolution spectral measurements in a gas cell with the aim of recommending one model as the best benchmark model. In the second results section, spectral measurements from a turbulent natural gas flame - as an example for a technical combustion process - are compared to simulated spectra based on measured gas atmospheres. The third results section compares simplified spectral models to the benchmark model recommended in the first results section and gives a ranking of the proposed models based on their accuracy. A concluding section gives recommendations for the selection and further development of simplified spectral radiation models. Gas cell transmissivity spectra in the spectral range of 2.4 to 5.4 µm of water vapor and carbon dioxide in the temperature range from 727 °C to 1500 °C and at different concentrations were compared in the first results section at a nominal resolution of 32 cm⁻¹ to line-by-line models from different databases, two statistical-narrow-band models and the exponential-wide-band model. The two statistical-narrow-band models EM2C and RADCAL showed good agreement with a maximal band transmissivity deviation of 3%. The exponential-wide-band model showed a deviation of 6%. The new line-by-line database HITEMP2010 had the lowest band transmissivity deviation of 2.2% and was therefore recommended as a reference model for the

  8. A network-oriented business modeling environment

    Science.gov (United States)

    Bisconti, Cristian; Storelli, Davide; Totaro, Salvatore; Arigliano, Francesco; Savarino, Vincenzo; Vicari, Claudia

    The development of formal models related to the organizational aspects of an enterprise is fundamental when these aspects must be re-engineered and digitalized, especially when the enterprise is involved in the dynamics and value flows of a business network. Business modeling provides an opportunity to synthesize and make business processes, business rules and the structural aspects of an organization explicit, allowing business managers to control their complexity and guide an enterprise through effective decisional and strategic activities. This chapter discusses the main results of the TEKNE project in terms of software components that enable enterprises to configure, store, search and share models of any aspects of their business while leveraging standard and business-oriented technologies and languages to bridge the gap between the world of business people and IT experts and to foster effective business-to-business collaborations.

  9. MODEL-BASED VALIDATION AND VERIFICATION OF ANOMALIES IN LEGISLATION

    Directory of Open Access Journals (Sweden)

    Vjeran Strahonja

    2006-12-01

    Full Text Available An anomaly in legislation is the absence of completeness, consistency and other desirable properties, caused by different semantic, syntactic or pragmatic reasons. In general, the detection of anomalies in legislation comprises validation and verification. The basic idea of the research presented in this paper is modelling legislation by capturing domain knowledge of legislation and specifying it in a generic way using the commonly agreed and understandable modelling concepts of the Unified Modelling Language (UML). Models of legislation enable a better understanding of the system, support the detection of anomalies and help to improve the quality of legislation through validation and verification. By implementing a model-based approach, the object of validation and verification moves from the legislation to its model. The business domain of legislation has two distinct aspects: a structural or static aspect (functionality, business data, etc.) and a behavioural or dynamic aspect (states, transitions, activities, sequences, etc.). Because anomalies can occur on two different levels, on the level of a model or on the level of the legislation itself, a framework for validation and verification of legal regulation and its model is discussed. The presented framework includes some significant types of semantic and syntactic anomalies. Some ideas for the assessment of pragmatic anomalies of models were found in the field of software quality metrics; thus pragmatic features and attributes can be determined that could be relevant for the evaluation of models. Based on analogous standards for the evaluation of software, a qualitative and quantitative scale can be applied to determine the value of some feature for a specific model.

  10. Data for model validation summary report. A summary of data for validation and benchmarking of recovery boiler models

    Energy Technology Data Exchange (ETDEWEB)

    Grace, T.; Lien, S.; Schmidl, W.; Salcudean, M.; Abdullah, Z.

    1997-07-01

    One of the tasks in the project was to obtain data from operating recovery boilers for the purpose of model validation. Another task was to obtain water model data and computer output from University of British Columbia for purposes of benchmarking the UBC model against other codes. In the course of discussions on recovery boiler modeling over the course of this project, it became evident that there would be value in having some common cases for carrying out benchmarking exercises with different recovery boiler models. In order to facilitate such a benchmarking exercise, the data that was obtained on this project for validation and benchmarking purposes has been brought together in a single, separate report. The intent is to make this data available to anyone who may want to use it for model validation. The report contains data from three different cases. Case 1 is an ABBCE recovery boiler which was used for model validation. The data are for a single set of operating conditions. Case 2 is a Babcock & Wilcox recovery boiler that was modified by Tampella. In this data set, several different operating conditions were employed. The third case is water flow data supplied by UBC, along with computational output using the UBC code, for benchmarking purposes.

  11. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and exhibit dynamic, complex behaviour. Studying these phenomena in the laboratory is costly and in most cases impossible. Miniaturizing world phenomena within the framework of a model in order to simulate them is therefore a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling indicates the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and applied to a wider range of applications than traditional simulation. A key challenge of ABMS, however, is the difficulty of its validation and verification. Because of frequently emergent patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify such models by conventional validation methods. Attempts to find appropriate validation techniques for ABM therefore seem necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  12. Validation of GOME total ozone by means of the Norwegian ozone monitoring network

    Directory of Open Access Journals (Sweden)

    G. Hansen

    1999-03-01

    Full Text Available The Global Ozone Monitoring Experiment (GOME) onboard the ERS-2 satellite has been in operation since July 1995. The Norwegian ground-based total ozone network has played an important role both in the main validation during the commissioning phase and in the validation of upgraded versions of the analysis algorithms of the instrument. The ground-based network consists of various spectrometer types (Dobson, Brewer, UV filter instruments). The validation of the second algorithm version, used until January 1998, reveals a very good agreement between GOME and ground-based data at solar zenith angles <60° and deviations of GOME total ozone data from ground-based data of up to ±60 DU (~20%) at zenith angles >60°. The deviations strongly depend on the season of the year, being negative in summer and positive in winter/spring. The deviations furthermore show considerable scatter (up to ±25 DU) in monthly average values of 5° SZA intervals, even in close spatial and temporal coincidence with ground-based measurements, especially in the high Arctic. The deviations are also dependent on the viewing geometry/ground pixel size, with an additional negative offset for the large pixels used in the backswath mode and at solar zenith angles >85°, compared to forward-swath pixels. Key words: Atmospheric composition and structure (middle atmosphere · composition and chemistry; instruments and techniques)

  13. Social networking addiction, attachment style, and validation of the Italian version of the Bergen Social Media Addiction Scale

    OpenAIRE

    Monacis, Lucia; de Palo, Valeria; Griffiths, Mark D.; Sinatra, Maria

    2017-01-01

    Aim: Research into social networking addiction has greatly increased over the last decade. However, the number of validated instruments assessing addiction to social networking sites (SNSs) remains few, and none have been validated in the Italian language. Consequently, this study tested the psychometric properties of the Italian version of the Bergen Social Media Addiction Scale (BSMAS), as well as providing empirical data concerning the relationship between attachment styles and...

  14. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems

    Science.gov (United States)

    Ranganayaki, V.; Deepa, S. N.

    2016-01-01

    Various criteria are proposed to select the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criterion an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural model for wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, which include the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems; this paper aims to avoid the occurrence of both. The selection of the number of hidden neurons is done employing 102 criteria; these evolved criteria are verified by the computed error values. The proposed criteria for fixing hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error value to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with earlier models available in the literature. PMID:27034973
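The core of the ensemble idea described above, averaging the forecasts of several trained models, can be sketched as follows. This is a minimal illustration with invented stand-in predictors, not the paper's trained MLP/Madaline/BPN/PNN networks:

```python
def ensemble_forecast(models, x):
    """Average the wind-speed forecasts of several trained models."""
    predictions = [m(x) for m in models]
    return sum(predictions) / len(predictions)

# Hypothetical stand-ins for the four trained networks (toy linear rules):
mlp      = lambda x: 0.9 * x + 1.0
madaline = lambda x: 1.1 * x + 0.5
bpn      = lambda x: 1.0 * x + 0.8
pnn      = lambda x: 0.95 * x + 0.9

print(ensemble_forecast([mlp, madaline, bpn, pnn], 10.0))  # → 10.675
```

Averaging tends to cancel the individual models' uncorrelated errors, which is the accuracy gain the abstract reports.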

  16. Topological Vulnerability Evaluation Model Based on Fractal Dimension of Complex Networks

    Science.gov (United States)

    Gou, Li; Wei, Bo; Sadiq, Rehan; Deng, Yong

    2016-01-01

    With an increasing emphasis on network security, much attention has been drawn to the vulnerability of complex networks. In this paper, the fractal dimension, which can reflect the space-filling capacity of networks, is redefined as the origin moment of the edge betweenness to obtain a more reasonable evaluation of vulnerability. The proposed model, combining multiple evaluation indexes, not only overcomes the failure of average edge betweenness to evaluate the vulnerability of some special networks, but also characterizes the topological structure and highlights the space-filling capacity of networks. The applications to six US airline networks illustrate the practicality and effectiveness of the proposed method, and comparisons with three other commonly used methods further validate its superiority. PMID:26751371
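The quantity underlying the redefinition above is the edge betweenness and its origin moments. The following brute-force sketch on a toy graph illustrates those quantities (it is neither the paper's exact formulation nor an efficient algorithm such as Brandes'):

```python
from itertools import combinations
from collections import deque

def shortest_paths(adj, s, t):
    """All shortest paths from s to t in an unweighted graph, via BFS."""
    dist, preds, q = {s: 0}, {s: []}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v], preds[v] = dist[u] + 1, [u]
                q.append(v)
            elif dist[v] == dist[u] + 1:
                preds[v].append(u)
    paths = []
    def walk(v, tail):                 # rebuild paths backwards from t
        if v == s:
            paths.append([s] + tail)
        else:
            for p in preds[v]:
                walk(p, [v] + tail)
    if t in preds:
        walk(t, [])
    return paths

def edge_betweenness_moment(adj, q=1):
    """q-th origin moment of the (pair-normalized) edge betweenness."""
    nodes, counts, npairs = sorted(adj), {}, 0
    for s, t in combinations(nodes, 2):
        paths = shortest_paths(adj, s, t)
        npairs += 1
        for path in paths:             # each path contributes 1/len(paths)
            for e in zip(path, path[1:]):
                e = tuple(sorted(e))
                counts[e] = counts.get(e, 0) + 1 / len(paths)
    bet = [c / npairs for c in counts.values()]
    return sum(b ** q for b in bet) / len(bet)

# Path graph 0-1-2-3: edge betweenness 1/2, 2/3, 1/2 → first moment 5/9.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(edge_betweenness_moment(adj, 1))
```

With q=1 this is the average edge betweenness the abstract contrasts against; higher-order moments weight the most loaded edges more strongly.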

  17. Compartmentalization analysis using discrete fracture network models

    Energy Technology Data Exchange (ETDEWEB)

    La Pointe, P.R.; Eiben, T.; Dershowitz, W. [Golder Associates, Redmond, WA (United States); Wadleigh, E. [Marathon Oil Co., Midland, TX (United States)

    1997-08-01

    This paper illustrates how Discrete Fracture Network (DFN) technology can serve as a basis for the calculation of reservoir engineering parameters for the development of fractured reservoirs. It describes the development of quantitative techniques for defining the geometry and volume of structurally controlled compartments. These techniques are based on a combination of stochastic geometry, computational geometry, and graph theory. The parameters addressed are compartment size, matrix block size and tributary drainage volume. The concept of DFN models is explained and methodologies to compute these parameters are demonstrated.
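The graph-theoretic side of the compartment idea can be illustrated with a toy computation (a hypothetical sketch, not the authors' DFN software): treat fractures as nodes and intersections as edges, and compartments fall out as connected components, found here with union-find:

```python
def find(parent, x):
    """Root of x with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def compartments(n_fractures, intersections):
    """Group fractures into connected, mutually intersecting clusters."""
    parent = list(range(n_fractures))
    for a, b in intersections:
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb            # union the two clusters
    groups = {}
    for f in range(n_fractures):
        groups.setdefault(find(parent, f), []).append(f)
    return sorted(groups.values())

# Five fractures; 0-1-2 intersect one another, 3-4 intersect: two compartments.
print(compartments(5, [(0, 1), (1, 2), (3, 4)]))  # → [[0, 1, 2], [3, 4]]
```

In a stochastic DFN workflow, the fracture set and intersection list would come from the geometric model; the component sizes then feed quantities such as compartment and tributary drainage volume.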

  18. Some queuing network models of computer systems

    Science.gov (United States)

    Herndon, E. S.

    1980-01-01

    Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
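A classic algorithm for closed queuing network models of a computer system with a single workload type is exact Mean Value Analysis. The sketch below shows that textbook method on a two-device example; it is an illustration of the model class, not the SR-52 program from the abstract:

```python
def mva(service, visits, n_jobs, think_time=0.0):
    """Exact MVA for a closed single-class network of queueing servers.

    service[i]: mean service time at device i; visits[i]: visit ratio.
    Assumes n_jobs >= 1 and load-independent (fixed-rate) servers.
    """
    k = len(service)
    queue = [0.0] * k                  # mean queue length at each device
    for n in range(1, n_jobs + 1):
        # Arrival theorem: an arriving job sees the (n-1)-job queue lengths.
        resid = [visits[i] * service[i] * (1 + queue[i]) for i in range(k)]
        throughput = n / (think_time + sum(resid))
        queue = [throughput * r for r in resid]   # Little's law per device
    return throughput, queue

# Two devices (e.g. CPU and disk), 3 circulating jobs, no terminal think time.
X, qs = mva(service=[0.05, 0.10], visits=[1.0, 1.0], n_jobs=3)
print(round(X, 3))  # → 9.333
```

The row-by-row matrix trick in the abstract served the same purpose on the SR-52's tiny memory; on a modern machine the direct recursion above is the idiomatic form.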

  19. Networks model of the East Turkistan terrorism

    Science.gov (United States)

    Li, Ben-xian; Zhu, Jun-fang; Wang, Shun-guo

    2015-02-01

    The presence of the East Turkistan terrorist network in China can be traced back to the rebellions in the Baren region of Xinjiang in April 1990. This article researches the East Turkistan networks in China and offers a panoramic view. The events, terrorists and their relationships are described using matrices. Social network analysis is then adopted to reveal the network type and its structural characteristics, and the crucial terrorist leader is identified. Ultimately, the results show that the East Turkistan network has large hub nodes and short shortest-path lengths, following the pattern of a small-world network with hierarchical structure.

  20. Fundamentals of complex networks models, structures and dynamics

    CERN Document Server

    Chen, Guanrong; Li, Xiang

    2014-01-01

    Complex networks such as the Internet, WWW, transportation networks, power grids, biological neural networks, and scientific cooperation networks of all kinds provide challenges for future technological development. In particular, advanced societies have become dependent on large infrastructural networks to an extent beyond our capability to plan (modeling) and to operate (control). The recent spate of collapses in power grids and ongoing virus attacks on the Internet illustrate the need for knowledge about modeling, analysis of behaviors, optimized planning and performance control in such networks.

  1. Stochastic simulation of HIV population dynamics through complex network modelling

    NARCIS (Netherlands)

    Sloot, P. M. A.; Ivanov, S. V.; Boukhanovsky, A. V.; van de Vijver, D. A. M. C.; Boucher, C. A. B.

    We propose a new way to model HIV infection spreading through the use of dynamic complex networks. The heterogeneous population of HIV exposure groups is described through a unique network degree probability distribution. The time evolution of the network nodes is modelled by a Markov process and

  2. A Search Model with a Quasi-Network

    DEFF Research Database (Denmark)

    Ejarque, Joao Miguel

    This paper adds a quasi-network to a search model of the labor market. Fitting the model to an average unemployment rate and to other moments in the data implies the presence of the network is not noticeable in the basic properties of the unemployment and job finding rates. However, the network c...

  3. Stochastic simulation of HIV population dynamics through complex network modelling

    NARCIS (Netherlands)

    Sloot, P.M.A.; Ivanov, S.V.; Boukhanovsky, A.V.; van de Vijver, D.A.M.C.; Boucher, C.A.B.

    2008-01-01

    We propose a new way to model HIV infection spreading through the use of dynamic complex networks. The heterogeneous population of HIV exposure groups is described through a unique network degree probability distribution. The time evolution of the network nodes is modelled by a Markov process and

  4. A study on the forecasting of daily stream flow using the multilayer neural networks model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung-Won [Colorado State University, Fort Collins, CO(United States)

    2000-10-31

    In this study, Neural Networks models were used to forecast daily stream flow at the Jindong station of the Nakdong River basin. The Neural Networks models consist of CASE 1 (5-5-1) and CASE 2 (5-5-5-1); the criterion separating the two models is the number of hidden layers. Each model uses the Fletcher-Reeves Conjugate Gradient BackPropagation (FR-CGBP) and Scaled Conjugate Gradient BackPropagation (SCGBP) algorithms, which are better than the original BackPropagation (BP) in convergence of global error and training tolerance. The data available for model training and validation were composed of wet, average, dry, wet+average, wet+dry, average+dry and wet+average+dry years, respectively. During model training, the optimal connection weights and biases were determined using each data set and the daily stream flow was calculated at the same time. Except for the wet+dry year, the training results were good according to statistical analysis of the forecast errors. Model validation was then carried out using the connection weights and biases calculated during training, and the validation results were as satisfactory as those of training. Daily stream flow forecasts from the Neural Networks models were compared with those from a Multiple Regression Analysis Model (MRAM); the Neural Networks models displayed slightly better results in this study. Thus, Neural Networks models have the advantage of providing a more systematic approach, reducing model parameters, and shortening the time spent in model development. (author). 22 refs., 9 tabs., 7 figs.

  5. Predicting third molar surgery operative time: a validated model.

    Science.gov (United States)

    Susarla, Srinivas M; Dodson, Thomas B

    2013-01-01

    The purpose of the present study was to develop and validate a statistical model to predict third molar (M3) operative time. This was a prospective cohort study consisting of a sample of subjects presenting for M3 removal. The demographic, anatomic, and operative variables were recorded for each subject. Using an index sample of randomly selected subjects, a multiple linear regression model was generated to predict the operating time. A nonoverlapping group of randomly selected subjects (validation sample) was used to assess model accuracy. P≤.05 was considered significant. The sample was composed of 150 subjects (n) who had 450 (k) M3s removed. The index sample (n=100 subjects, k=313 M3s extracted) had a mean age of 25.4±10.0 years. The mean extraction time was 6.4±7.0 minutes. The multiple linear regression model included M3 location, Winter's classification, tooth morphology, number of teeth extracted, procedure type, and surgical experience (R2=0.58). No statistically significant differences were seen between the index sample and the validation sample (n=50, k=137) for any of the study variables. Compared with the index model, the β-coefficients of the validation model were similar in direction and magnitude for most variables. Compared with the observed extraction time for all teeth in the sample, the predicted extraction time was not significantly different (P=.16). Fair agreement was seen between the β-coefficients for our multiple models in the index and validation populations, with no significant difference in the predicted and observed operating times. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
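The index/validation design described above, fitting a predictor on one subsample and checking its accuracy on a non-overlapping one, can be sketched with synthetic data. The variables, coefficients, and the single-predictor linear model below are invented for illustration and are not the study's:

```python
import random

def fit_ols(xs, ys):
    """Least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

random.seed(0)
# Synthetic "operative time": minutes grow with a hypothetical difficulty score.
difficulty = [random.uniform(1, 10) for _ in range(150)]
time_min = [2.0 + 0.8 * d + random.gauss(0, 0.5) for d in difficulty]

# Index (training) sample vs. non-overlapping validation sample.
idx = list(range(150))
random.shuffle(idx)
train, valid = idx[:100], idx[100:]

a, b = fit_ols([difficulty[i] for i in train], [time_min[i] for i in train])
mae = sum(abs(time_min[i] - (a * difficulty[i] + b)) for i in valid) / len(valid)
print(a, b, mae)
```

Similar coefficients on both subsamples and a small held-out error are what "fair agreement" between index and validation models amounts to in this simplified setting.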

  6. Validation of the Revised WAsP Park Model

    DEFF Research Database (Denmark)

    Rathmann, Ole Steen; Hansen, Brian Ohrbeck; Leon, J.P. Murcia

    The DTU Wind Energy wind-resource model WAsP contains a wind farm wake model, Park (Park1). This Park model has been revised (Park2) to improve prediction accuracy in large wind farms, based on sound physical and mathematical principles: consistent wake modelling and perturbation theory for wake-wake interaction. Park2 has been validated and calibrated using a number of off-shore and on-shore wind farms. The calibration has resulted in recommended values for the wake-expansion coefficients of the Park2 model.

  7. Time dependent mechanical modeling for polymers based on network theory

    Energy Technology Data Exchange (ETDEWEB)

    Billon, Noëlle [MINES ParisTech, PSL-Research University, CEMEF – Centre de mise en forme des matériaux, CNRS UMR 7635, CS 10207 rue Claude Daunesse 06904 Sophia Antipolis Cedex (France)

    2016-05-18

    Despite many attempts in recent years, the complex mechanical behaviour of polymers remains incompletely modelled, making industrial design of structures under complex, cyclic and hard loadings not fully reliable. The non-linear and dissipative viscoelastic, viscoplastic behaviour of these materials requires taking into account the non-linear and combined effects of mechanical and thermal phenomena. In this view, a visco-hyperelastic, viscoplastic model based on a network description of the material has recently been developed and designed in a complete thermodynamic framework in order to take these main thermo-mechanical couplings into account. A way to account for the coupled effects of strain rate and temperature was also suggested. First experimental validations, conducted in the 1D limit on amorphous rubbery PMMA under isothermal conditions, led to pretty good results. In this paper a more complete formalism is presented and validated in the case of semi-crystalline polymers; a PA66 and a PET (either amorphous or semi-crystalline) are used. The protocol for identification of the constitutive parameters is described. It is concluded that this new approach should be the route to accurately model the thermo-mechanical behaviour of polymers using a reduced number of parameters with some physical meaning.

  8. Neural networks in high-performance liquid chromatography optimization : Response surface modeling

    NARCIS (Netherlands)

    Metting, H.J; Coenegracht, P.M J

    1996-01-01

    The usefulness of artificial neural networks for response surface modeling in HPLC optimization is compared with (non-)linear regression methods. The number of hidden nodes is optimized by a lateral inhibition method. Overfitting is controlled by cross-validation using the leave-one-out method.
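The leave-one-out method mentioned above refits the model once per observation, predicting each held-out point from all the others. A minimal sketch, using a mean-only predictor rather than the abstract's neural network:

```python
def loo_cv(ys):
    """Leave-one-out mean squared prediction error for a mean-only model."""
    errors = []
    for i, y in enumerate(ys):
        rest = ys[:i] + ys[i + 1:]     # all observations except the i-th
        pred = sum(rest) / len(rest)   # "model" fitted without point i
        errors.append((y - pred) ** 2)
    return sum(errors) / len(errors)

print(loo_cv([1.0, 2.0, 3.0]))  # → 1.5
```

For a network, `pred` would instead come from retraining on `rest`; the rising-then-falling LOO error as hidden nodes are added is what flags overfitting.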

  9. Validation of statistical models for creep rupture by parametric analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)

    2012-01-15

    Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. - Highlights: ► The paper discusses the validation of creep rupture models derived from statistical analysis. ► It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. ► The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. ► The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).

  10. A practical guide for operational validation of discrete simulation models

    Directory of Open Access Journals (Sweden)

    Fabiano Leal

    2011-04-01

    Full Text Available As the number of simulation experiments increases, the necessity for validation and verification of these models demands special attention on the part of simulation practitioners. Analysis of the current scientific literature shows that the operational validation described in many papers does not agree on the importance assigned to this process or on the techniques applied, whether subjective or objective. With the aim of orienting professionals, researchers and students in simulation, this article elaborates a practical guide through the compilation of statistical techniques for the operational validation of discrete simulation models. Finally, the guide's applicability was evaluated using two study objects representing two manufacturing cells, one from the automobile industry and the other from a Brazilian tech company. For each application, the guide identified distinct steps, due to the different aspects that characterize the analyzed distributions
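One common objective technique of the kind such a guide compiles (shown here as a generic illustration, not as this guide's specific recommendation) is a two-sample test comparing simulated output with measurements of the real system:

```python
import math

def welch_t(xs, ys):
    """Welch two-sample t statistic (unequal variances allowed)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)   # sample variances
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Invented throughput measurements (parts/hour) from model and real cell.
simulated = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0]
measured  = [10.0, 10.3, 9.7, 10.2, 10.1, 9.8]
t = welch_t(simulated, measured)
print(abs(t) < 2.0)  # small |t|: no evidence the model's mean output differs
```

A full operational validation would pair such a test with its degrees of freedom and a chosen significance level, and complement it with the subjective techniques (e.g. face validation by experts) the literature also uses.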

  11. Progress in Geant4 Electromagnetic Physics Modelling and Validation

    CERN Document Server

    Apostolakis, J; Bagulya, A; Brown, J M C; Burkhardt, H; Chikuma, N; Cortes-Giraldo, M A; Elles, S; Grichine, V; Guatelli, S; Incerti, S; Ivanchenko, V N; Jacquemier, J; Kadri, O; Maire, M; Pandola, L; Sawkey, D; Toshito, T; Urban, L; Yamashita, T

    2015-01-01

    In this work we report on recent improvements in the electromagnetic (EM) physics models of Geant4 and new validations of EM physics. Improvements have been made in models of the photoelectric effect, Compton scattering, gamma conversion to electron and muon pairs, fluctuations of energy loss, multiple scattering, synchrotron radiation, and high energy positron annihilation. The results of these developments are included in the new Geant4 version 10.1 and in patches to previous versions 9.6 and 10.0 that are planned to be used for production for run-2 at LHC. The Geant4 validation suite for EM physics has been extended and new validation results are shown in this work. In particular, the effect of gamma-nuclear interactions on EM shower shape at LHC energies is discussed.

  12. VEPCO network model reconciliation of LANL and MZA model data

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-12-15

    The LANL DC load flow model of the VEPCO transmission network shows 210 more substations than the AC load flow model produced by MZA utility Consultants. MZA was requested to determine the source of the difference. The AC load flow model used for this study utilizes 2 standard network algorithms (Decoupled or Newton). The solution time of each is affected by the number of substations. The more substations included, the longer the model will take to solve. In addition, the ability of the algorithms to converge to a solution is affected by line loadings and characteristics. Convergence is inhibited by numerous lightly loaded and electrically short lines. The MZA model reduces the total substations to 343 by creating equivalent loads and generation. Most of the omitted substations are lightly loaded and rated at 115 kV. The MZA model includes 16 substations not included in the LANL model. These represent new generation including Non-Utility Generator (NUG) sites, additional substations and an intertie (Wake, to CP and L). This report also contains data from the Italian State AC power flow model and the Duke Power Company AC flow model.

  13. A Model of Genetic Variation in Human Social Networks

    CERN Document Server

    Fowler, James H; Christakis, Nicholas A

    2008-01-01

    Social networks influence the evolution of cooperation and they exhibit strikingly systematic patterns across a wide range of human contexts. Both of these facts suggest that variation in the topological attributes of human social networks might have a genetic basis. While genetic variation accounts for a significant portion of the variation in many complex social behaviors, the heritability of egocentric social network attributes is unknown. Here we show that three of these attributes (in-degree, transitivity, and centrality) are heritable. We then develop a "mirror network" method to test extant network models and show that none accounts for observed genetic variation in human social networks. We propose an alternative "attract and introduce" model that generates significant heritability as well as other important network features, and we show that this model with two simple forms of heterogeneity is well suited to the modeling of real social networks in humans. These results suggest that natural selection ...

  14. Feature network models for proximity data : statistical inference, model selection, network representations and links with related models

    NARCIS (Netherlands)

    Frank, Laurence Emmanuelle

    2006-01-01

    Feature Network Models (FNM) are graphical structures that represent proximity data in a discrete space with the use of features. A statistical inference theory is introduced, based on the additivity properties of networks and the linear regression framework. Considering features as predictor

  15. PageRank model of opinion formation on Ulam networks

    Science.gov (United States)

    Chakhmakhchyan, L.; Shepelyansky, D.

    2013-12-01

    We consider a PageRank model of opinion formation on Ulam networks, generated by the intermittency map and the typical Chirikov map. The Ulam networks generated by these maps have certain similarities with such scale-free networks as the World Wide Web (WWW), showing an algebraic decay of the PageRank probability. We find that the opinion formation process on Ulam networks has certain similarities but also distinct features comparing to the WWW. We attribute these distinctions to internal differences in network structure of the Ulam and WWW networks. We also analyze the process of opinion formation in the frame of generalized Sznajd model which protects opinion of small communities.
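The PageRank probability referred to above is the stationary vector of a damped random walk on the network. A small power-iteration sketch on a toy directed graph (not an Ulam network, which would be generated from the intermittency or Chirikov map; damping 0.85 is the customary choice):

```python
def pagerank(links, damping=0.85, iters=100):
    """PageRank by power iteration; links maps node -> list of out-links."""
    nodes = sorted(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in nodes}   # teleportation term
        for u in nodes:
            out = links[u]
            if out:
                share = damping * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:                       # dangling node: spread rank uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
pr = pagerank(links)
print(max(pr, key=pr.get))  # "c" collects links from both other nodes
```

The algebraic decay of this probability over the node ranking is the scale-free signature the abstract says Ulam networks share with the WWW.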

  16. A soil moisture and temperature network for SMOS validation in Western Denmark

    DEFF Research Database (Denmark)

    Bircher, Simone; Skou, Niels; Jensen, K. H.

    2011-01-01

    The Soil Moisture and Ocean Salinity Mission (SMOS) acquires surface soil moisture data globally, and thus product validation for a range of climate and environmental conditions across continents is a crucial step. For this purpose, a soil moisture and temperature network of Decagon ECH2O 5TE ... (1) a SMOS pixel (44 × 44 km) which is representative of the land surface conditions of the catchment and with minimal impact from open water, (2) arrangement of three network clusters along the precipitation gradient, and (3) distribution of the stations according to respective fractions of classes representing the prevailing environmental conditions. Overall, measured moisture and temperature patterns could be related to the respective land cover and soil conditions. Texture-dependency of the 0–5 cm soil moisture measurements was demonstrated. Regional differences in 0–5 cm soil moisture, temperature...

  17. Hydrologic and water quality models: Use, calibration, and validation

    Science.gov (United States)

    This paper introduces a special collection of 22 research articles that present and discuss calibration and validation concepts in detail for hydrologic and water quality models by their developers and presents a broad framework for developing the American Society of Agricultural and Biological Engi...

  18. Hydrologic and water quality models: Key calibration and validation topics

    Science.gov (United States)

    As a continuation of efforts to provide a common background and platform for accordant development of calibration and validation (C/V) engineering practices, ASABE members worked to determine critical topics related to model C/V, perform a synthesis of the Moriasi et al. (2012) special collection of...

  19. Model validation studies of solar systems, Phase III. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Lantz, L.J.; Winn, C.B.

    1978-12-01

Results obtained from a validation study of the TRNSYS, SIMSHAC, and SOLCOST solar system simulation and design programs are presented. Also included are comparisons between the FCHART and SOLCOST solar system design programs and some changes that were made to the SOLCOST program. Finally, results obtained from the analysis of several solar radiation models are presented. Separate abstracts were prepared for ten papers.

  20. Development and validation of stem volume models for Pinus kesiya ...

    African Journals Online (AJOL)

    80% of the data set) and validation (20% of the data set). The performance of the different models was evaluated using evaluation statistics: fit index (FI), root mean square error (RMSE), bias (E), absolute mean difference (AMD) and coefficient of ...

  1. Improving Perovskite Solar Cells: Insights From a Validated Device Model

    NARCIS (Netherlands)

    Sherkar, Tejas S.; Momblona, Cristina; Gil-Escrig, Lidon; Bolink, Henk J.; Koster, L. Jan Anton

    2017-01-01

    To improve the efficiency of existing perovskite solar cells (PSCs), a detailed understanding of the underlying device physics during their operation is essential. Here, a device model has been developed and validated that describes the operation of PSCs and quantitatively explains the role of

  2. Validation of Models : Statistical Techniques and Data Availability

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1999-01-01

    This paper shows which statistical techniques can be used to validate simulation models, depending on which real-life data are available. Concerning this availability three situations are distinguished (i) no data, (ii) only output data, and (iii) both input and output data. In case (i) - no real

  3. A scale-free neural network for modelling neurogenesis

    Science.gov (United States)

    Perotti, Juan I.; Tamarit, Francisco A.; Cannas, Sergio A.

    2006-11-01

In this work we introduce a neural network model for associative memory based on a diluted Hopfield model, which grows through a neurogenesis algorithm that guarantees that the final network is a small-world and scale-free one. We also analyze the storage capacity of the network and show that its performance is larger than that measured in a randomly diluted network with the same connectivity.
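For contrast with the neurogenesis-grown, diluted network of the paper, the underlying associative-memory mechanism can be sketched with a minimal classical (fully connected, Hebbian) Hopfield model; the sizes and noise level below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 64, 3                                   # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weight matrix with zero diagonal (classical dense Hopfield).
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

def recall(state, steps=10):
    """Synchronous threshold dynamics until (hopefully) a fixed point."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

# Corrupt 8 of 64 bits of pattern 0 and let the network clean it up.
noisy = patterns[0].copy()
flip = rng.choice(N, size=8, replace=False)
noisy[flip] *= -1
out = recall(noisy)
print("overlap with stored pattern:", (out == patterns[0]).mean())
```

At this low load (3 patterns in 64 neurons) the corrupted pattern is recovered almost perfectly; the paper's point is how such capacity behaves when the connectivity is grown and diluted instead of dense.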

  4. A graph model for opportunistic network coding

    KAUST Repository

    Sorour, Sameh

    2015-08-12

    © 2015 IEEE. Recent advancements in graph-based analysis and solutions of instantly decodable network coding (IDNC) trigger the interest to extend them to more complicated opportunistic network coding (ONC) scenarios, with limited increase in complexity. In this paper, we design a simple IDNC-like graph model for a specific subclass of ONC, by introducing a more generalized definition of its vertices and the notion of vertex aggregation in order to represent the storage of non-instantly-decodable packets in ONC. Based on this representation, we determine the set of pairwise vertex adjacency conditions that can populate this graph with edges so as to guarantee decodability or aggregation for the vertices of each clique in this graph. We then develop the algorithmic procedures that can be applied on the designed graph model to optimize any performance metric for this ONC subclass. A case study on reducing the completion time shows that the proposed framework improves on the performance of IDNC and gets very close to the optimal performance.

  5. Marketing communications model for innovation networks

    Directory of Open Access Journals (Sweden)

    Tiago João Freitas Correia

    2015-10-01

Innovation is an increasingly relevant concept for the success of any organization, but it also represents a set of internal and external considerations, barriers and challenges to overcome. Alongside the concept of innovation, new paradigms emerge, such as open innovation and co-creation, that are simultaneously innovation modifiers and intensifiers in organizations, promoting organizational openness and stakeholder integration within the value creation process. Innovation networks composed of a multiplicity of agents in co-creative work act as innovation mechanisms to face the increasing complexity of products, services and markets. Technology, especially the Internet, is an enabler of all processes among organizations, supported by co-creative platforms for innovation. The definition of marketing communication strategies that promote the motivation and involvement of all stakeholders in synergic creation and external promotion is the central aspect of this research. The implementation of the projects is performed through participative workshops with stakeholders from Madan Parque, using the IDEAS(R)EVOLUTION methodology and the operational model LinkUp parameterized for the project. The project is divided into a first part, the theoretical framework, and a second part where a model is developed for the marketing communication strategies applied to the Madan Parque case study. Keywords: Marketing Communication; Open Innovation; Technology; Innovation Networks; Incubator; Co-Creation.

  6. Determining Application Runtimes Using Queueing Network Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, Michael L. [Univ. of San Francisco, CA (United States)

    2006-12-14

    Determination of application times-to-solution for large-scale clustered computers continues to be a difficult problem in high-end computing, which will only become more challenging as multi-core consumer machines become more prevalent in the market. Both researchers and consumers of these multi-core systems desire reasonable estimates of how long their programs will take to run (time-to-solution, or TTS), and how many resources will be consumed in the execution. Currently there are few methods of determining these values, and those that do exist are either overly simplistic in their assumptions or require great amounts of effort to parameterize and understand. One previously untried method is queuing network modeling (QNM), which is easy to parameterize and solve, and produces results that typically fall within 10 to 30% of the actual TTS for our test cases. Using characteristics of the computer network (bandwidth, latency) and communication patterns (number of messages, message length, time spent in communication), the QNM model of the NAS-PB CG application was applied to MCR and ALC, supercomputers at LLNL, and the Keck Cluster at USF, with average errors of 2.41%, 3.61%, and -10.73%, respectively, compared to the actual TTS observed. While additional work is necessary to improve the predictive capabilities of QNM, current results show that QNM has a great deal of promise for determining application TTS for multi-processor computer systems.
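As a rough illustration of the queueing-network idea in this record, the sketch below estimates time-to-solution as compute time plus a per-message network cost, modeling the shared link as an M/M/1 queue (mean response time 1/(μ − λ)). All numbers and the simple additive decomposition are illustrative assumptions, not the NAS-PB CG parameterization used in the report:

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue (requires utilization < 1)."""
    assert arrival_rate < service_rate, "queue is unstable"
    return 1.0 / (service_rate - arrival_rate)

def estimate_tts(compute_time, n_messages, msg_bytes, bandwidth, latency,
                 arrival_rate, service_rate):
    """Crude TTS estimate: compute + per-message (latency + transfer
    + queueing delay on the shared link)."""
    transfer = msg_bytes / bandwidth
    queueing = mm1_response_time(arrival_rate, service_rate)
    return compute_time + n_messages * (latency + transfer + queueing)

# Hypothetical cluster and application numbers, purely illustrative.
tts = estimate_tts(compute_time=120.0, n_messages=1000, msg_bytes=8192,
                   bandwidth=1e9, latency=5e-6, arrival_rate=800.0,
                   service_rate=1000.0)
print(f"estimated TTS: {tts:.3f} s")
```

Real QNM solvers handle networks of queues with mean-value analysis rather than a single closed-form station, but the parameterization (bandwidth, latency, message counts) is of the same kind the record describes.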

  7. Data Visualization and Analysis Tools for the Global Precipitation Measurement (GPM) Validation Network

    Science.gov (United States)

    Morris, Kenneth R.; Schwaller, Mathew

    2010-01-01

    The Validation Network (VN) prototype for the Global Precipitation Measurement (GPM) Mission compares data from the Tropical Rainfall Measuring Mission (TRMM) satellite Precipitation Radar (PR) to similar measurements from U.S. and international operational weather radars. This prototype is a major component of the GPM Ground Validation System (GVS). The VN provides a means for the precipitation measurement community to identify and resolve significant discrepancies between the ground radar (GR) observations and similar satellite observations. The VN prototype is based on research results and computer code described by Anagnostou et al. (2001), Bolen and Chandrasekar (2000), and Liao et al. (2001), and has previously been described by Morris, et al. (2007). Morris and Schwaller (2009) describe the PR-GR volume-matching algorithm used to create the VN match-up data set used for the comparisons. This paper describes software tools that have been developed for visualization and statistical analysis of the original and volume matched PR and GR data.

  8. Modeling management of research and education networks

    NARCIS (Netherlands)

    Galagan, D.V.

    2004-01-01

    Computer networks and their services have become an essential part of research and education. Nowadays every modern R&E institution must have a computer network and provide network services to its students and staff. In addition to its internal computer network, every R&E institution must have a

9. Data Set for Empirical Validation of Double Skin Facade Model

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per

    2008-01-01

During recent years, attention to the double skin facade (DSF) concept has greatly increased. Nevertheless, the application of the concept depends on whether a reliable model for simulation of DSF performance will be developed or pointed out. This is, however, not possible to do until...... the model is empirically validated and its limitations for DSF modeling are identified. Correspondingly, the existence and availability of the experimental data is very essential. Three sets of accurate empirical data for validation of DSF modeling with building simulation software were produced within...... of a double skin facade: 1. External air curtain mode: the naturally ventilated DSF cavity with the top and bottom openings open to the outdoor; 2. Thermal insulation mode: all of the DSF openings closed; 3. Preheating mode: the bottom DSF openings open to the outdoor and top openings open...

  10. Validating firn compaction model with remote sensing data

    DEFF Research Database (Denmark)

    Simonsen, S. B.; Stenseng, Lars; Sørensen, Louise Sandberg

A comprehensive understanding of firn processes is of utmost importance when estimating present and future changes of the Greenland Ice Sheet. Especially when remote sensing altimetry is used to assess the state of ice sheets and their contribution to global sea level rise, firn compaction...... models have been shown to be a key component. Now, remote sensing data can also be used to validate the firn models. Radar penetrating the upper part of the firn column in the interior part of Greenland shows a clear layering. The observed layers from the radar data can be used as an in-situ validation...... correction relative to the changes in the elevation of the surface observed with remote sensing altimetry? What model time resolution is necessary to resolve the observed layering? What model refinements are necessary to give better estimates of the surface mass balance of the Greenland ice sheet from...

  11. Modeling stochasticity in biochemical reaction networks

    Science.gov (United States)

    Constantino, P. H.; Vlysidis, M.; Smadbeck, P.; Kaznessis, Y. N.

    2016-03-01

Small biomolecular systems are inherently stochastic. Indeed, fluctuations of molecular species are substantial in living organisms and may result in significant variation in cellular phenotypes. The chemical master equation (CME) is the most detailed mathematical model that can describe stochastic behaviors. However, because of its complexity, the CME has been solved for only a few, very small reaction networks. As a result, the contribution of CME-based approaches to biology has been very limited. In this review we discuss the approach of solving the CME by a set of differential equations of probability moments, called moment equations. We present different approaches to produce and to solve these equations, emphasizing the use of factorial moments and the zero information entropy closure scheme. We also provide information on the stability analysis of stochastic systems. Finally, we speculate on the utility of CME-based modeling formalisms, especially in the context of synthetic biology efforts.
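As a concrete counterpart to the CME discussion, the sketch below simulates the simplest stochastic reaction network, a birth-death process (∅ → X at rate k, X → ∅ at rate γx), with Gillespie's stochastic simulation algorithm; for this system the stationary CME solution is Poisson with mean k/γ. The rates and time horizon are illustrative choices, not values from the review:

```python
import random

def gillespie_birth_death(k=10.0, gamma=1.0, t_end=500.0, seed=1):
    """SSA for birth-death kinetics; returns the time-averaged copy number."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    acc, t_prev = 0.0, 0.0
    while t < t_end:
        a1, a2 = k, gamma * x          # propensities of birth and death
        a0 = a1 + a2
        t += rng.expovariate(a0)       # exponential waiting time
        acc += x * (t - t_prev)        # time-weighted average of the state
        t_prev = t
        x += 1 if rng.random() < a1 / a0 else -1   # pick which reaction fired
    return acc / t

mean_x = gillespie_birth_death()
print(mean_x)  # stationary CME mean is k/gamma = 10
```

The moment-equation approach the review describes would instead derive d⟨x⟩/dt = k − γ⟨x⟩ directly from the CME, recovering the same stationary mean without simulation.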

  12. Modelling of A Trust and Reputation Model in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Saurabh Mishra

    2015-09-01

Security is the major challenge for Wireless Sensor Networks (WSNs). The sensor nodes are deployed in non-controlled environments, facing the danger of information leakage, adversary attacks and other threats. Trust and reputation models are solutions to this problem and can identify malicious, selfish and compromised nodes. This paper aims to evaluate the varying collusion effect with respect to static (SW), dynamic (DW), static with collusion (SWC), dynamic with collusion (DWC) and oscillating wireless sensor networks, to derive the joint resultant of the EigenTrust model. An attempt has been made to compare the aforementioned networks, which are purely dedicated to protecting WSNs from adversary attacks and maintaining security. The comparison has been made with respect to accuracy and path length, and it was found that collusion in wireless sensor networks seems intractable for the static and dynamic WSNs when varied with a specified number of fraudulent nodes in the scenario. Additionally, it consumes more energy and resources in oscillating and collusive environments.
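The EigenTrust computation referenced above reduces to a power iteration on the normalized local-trust matrix, blended with a pre-trusted peer distribution. A minimal sketch on a hypothetical 4-peer example follows; the local trust values, pre-trusted set, and mixing weight `a` are illustrative assumptions, not data from the paper:

```python
import numpy as np

def eigen_trust(local_trust, pre_trusted, a=0.5, tol=1e-10):
    """Global trust as the stationary vector of the row-normalized
    local-trust matrix, blended with pre-trusted peers (EigenTrust)."""
    C = local_trust / local_trust.sum(axis=1, keepdims=True)
    p = pre_trusted / pre_trusted.sum()
    t = p.copy()
    while True:
        t_new = (1 - a) * C.T @ t + a * p
        if np.abs(t_new - t).sum() < tol:
            return t_new
        t = t_new

# Four peers; peer 3 is malicious and receives little local trust.
local = np.array([[0.0, 5.0, 4.0, 0.1],
                  [5.0, 0.0, 4.0, 0.1],
                  [4.0, 5.0, 0.0, 0.1],
                  [1.0, 1.0, 1.0, 0.0]])
pre = np.array([1.0, 1.0, 0.0, 0.0])   # peers 0 and 1 are pre-trusted
t = eigen_trust(local, pre)
print(t)
```

Since the transition matrix is row-stochastic, the global trust vector stays normalized, and the poorly rated peer ends up with the lowest global trust.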

  13. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

validation approach and extended it to solve a multivariate output problem [6]. Arendt et al. applied the Bayesian approach for both single and multiple responses, addressing the quantification of model uncertainty through calibration, model discrepancy, and identifiability [7, 8].

  14. Multiplicative Attribute Graph Model of Real-World Networks

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myunghwan [Stanford Univ., CA (United States); Leskovec, Jure [Stanford Univ., CA (United States)

    2010-10-20

Large-scale real-world network data, such as social networks, Internet and Web graphs, is ubiquitous in a variety of scientific domains. The study of such social and information networks commonly finds patterns and explains their emergence through tractable models. In most networks, especially in social networks, nodes also have a rich set of attributes (e.g., age, gender) associated with them. However, most of the existing network models focus only on modeling the network structure while ignoring the features of nodes in the network. Here we present a class of network models that we refer to as Multiplicative Attribute Graphs (MAG), which naturally captures the interactions between the network structure and node attributes. We consider a model where each node has a vector of categorical features associated with it. The probability of an edge between a pair of nodes then depends on the product of individual attribute-attribute similarities. The model lends itself to mathematical analysis as well as fitting to real data. We derive thresholds for connectivity and the emergence of the giant connected component, and show that the model gives rise to graphs with a constant diameter. Moreover, we analyze the degree distribution to show that the model can produce networks with either lognormal or power-law degree distributions depending on certain conditions.
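The MAG edge rule described above (edge probability as a product of per-attribute affinities) can be sketched as follows; the affinity matrix, number of attributes, and network size are illustrative assumptions, and a single shared affinity matrix is used for all attributes for brevity:

```python
import numpy as np

rng = np.random.default_rng(42)
n, L = 200, 4                            # nodes, binary attributes per node

# 2x2 affinity matrix: theta[a, b] is the factor contributed when the
# two endpoints take values a and b for a given attribute.
theta = np.array([[0.8, 0.4],
                  [0.4, 0.6]])
attrs = rng.integers(0, 2, size=(n, L))  # categorical (here binary) features

def edge_prob(u, v):
    """Multiplicative Attribute Graph rule: product over attributes."""
    p = 1.0
    for l in range(L):
        p *= theta[attrs[u, l], attrs[v, l]]
    return p

adj = np.zeros((n, n), dtype=int)
for u in range(n):
    for v in range(u + 1, n):
        if rng.random() < edge_prob(u, v):
            adj[u, v] = adj[v, u] = 1

print(adj.sum() // 2, "edges")
```

With uniform binary attributes, the expected edge probability here is the mean affinity (0.55) raised to the number of attributes, roughly 0.09, so the 200-node graph comes out with on the order of 1800 edges.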

  15. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that apply randomization and replication to the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and the satisfying of some attribute at the same time can abolish topological attributes that have been undefined or hidden from

  16. A Framework for the Estimation and Validation of Energy Consumption in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Alexandros Karagiannis

    2015-01-01

Energy efficiency in body sensor networks and implantable and ingestible medical devices is a substantial key factor in network lifetime and functionality. This work confronts the nodes' energy problem by establishing a unified energy consumption framework comprised of a theoretical model, an energy simulator model, and electronic metering modules that can be attached to the nodes. A theoretical analysis, a simulation procedure, and the design and development of three prototype electronic metering modules are presented in this paper. We discuss the accuracy of the proposed techniques, towards a unified framework for the a priori estimation of energy consumption in commercial sensor nodes, taking into account the application functionality and the energy properties of the incorporated electronics. Moreover, body network nodes are considered for the application and measurements of the proposed framework.

  17. Validation of Fatigue Modeling Predictions in Aviation Operations

    Science.gov (United States)

    Gregory, Kevin; Martinez, Siera; Flynn-Evans, Erin

    2017-01-01

Bio-mathematical fatigue models that predict levels of alertness and performance are one potential tool for use within integrated fatigue risk management approaches. A number of models have been developed that provide predictions based on acute and chronic sleep loss, circadian desynchronization, and sleep inertia. Some are publicly available and gaining traction in settings such as commercial aviation as a means of evaluating flight crew schedules for potential fatigue-related risks. Yet most models have not been rigorously evaluated and independently validated for the operations to which they are being applied, and many users are not fully aware of the limitations within which model results should be interpreted and applied.

  18. CNMO: Towards the Construction of a Communication Network Modelling Ontology

    Science.gov (United States)

    Rahman, Muhammad Azizur; Pakstas, Algirdas; Wang, Frank Zhigang

    Ontologies that explicitly identify objects, properties, and relationships in specific domains are essential for collaboration that involves sharing of data, knowledge or resources. A communications network modelling ontology (CNMO) has been designed to represent a network model as well as aspects related to its development and actual network operation. Network nodes/sites, link, traffic sources, protocols as well as aspects of the modeling/simulation scenario and operational aspects are defined with their formal representation. A CNMO may be beneficial for various network design/simulation/research communities due to the uniform representation of network models. This ontology is designed using terminology and concepts from various network modeling, simulation and topology generation tools.

  19. Topological evolution of virtual social networks by modeling social activities

    Science.gov (United States)

    Sun, Xin; Dong, Junyu; Tang, Ruichun; Xu, Mantao; Qi, Lin; Cai, Yang

    2015-09-01

With the development of the Internet and wireless communication, virtual social networks are becoming increasingly important in the formation of today's social communities. A topological evolution model is foundational and critical for social-network-related research. Up to now, most related experiments have been carried out on artificial networks; however, incorporating actual social activities into the network topology model has largely been ignored. This paper first formalizes two abstract mathematical concepts, hobby search and friend recommendation, to model the social actions people exhibit. Then a social-activity-based topology evolution simulation model is developed to satisfy some well-known properties that have been discovered in real-world social networks. Empirical results show that the proposed topology evolution model embraces several key network topological properties of concern, which can be envisioned as signatures of real social networks.

  20. Validation of models with constant bias: an applied approach

    Directory of Open Access Journals (Sweden)

    Salvador Medina-Peralta

    2014-06-01

Objective. This paper presents extensions to the statistical validation method based on the procedure of Freese for the case where a model shows constant bias (CB) in its predictions, and illustrates the method with data from a new mechanistic model that predicts weight gain in cattle. Materials and methods. The extensions were the hypothesis tests and maximum anticipated error for the alternative approach, and the confidence interval for a quantile of the distribution of errors. Results. The model evaluated showed CB; once the CB was removed, and with a confidence level of 95%, the magnitude of the error does not exceed 0.575 kg. Therefore, the validated model can be used to predict the daily weight gain of cattle, although it will require an adjustment in its structure, given the presence of CB, to increase the accuracy of its forecasts. Conclusions. The confidence interval for the 1-α quantile of the distribution of errors after correcting the constant bias allows determining the upper limit for the magnitude of the prediction error and using it to evaluate the evolution of the model in forecasting the system. The confidence interval approach to validating a model is more informative than hypothesis tests for the same purpose.
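A hedged illustration of the constant-bias workflow described above: estimate the bias as the mean prediction error, subtract it, and report an empirical upper bound on the magnitude of the corrected errors. The weight-gain numbers below are hypothetical, and the Freese-based hypothesis tests and confidence intervals in the paper are more elaborate than this simple percentile summary:

```python
import statistics

# Hypothetical observed vs. predicted daily weight gains (kg).
observed  = [0.82, 0.95, 1.10, 0.78, 1.02, 0.88, 0.99, 1.05]
predicted = [0.70, 0.85, 0.98, 0.66, 0.93, 0.75, 0.90, 0.94]

errors = [o - p for o, p in zip(observed, predicted)]
bias = statistics.mean(errors)             # constant bias (CB) estimate
corrected = [e - bias for e in errors]     # errors after removing the CB

# Empirical 95th-percentile magnitude of the bias-corrected errors.
q95 = statistics.quantiles([abs(e) for e in corrected], n=20)[-1]
print(f"bias = {bias:.3f} kg, 95% error magnitude = {q95:.3f} kg")
```

In this toy data the model systematically under-predicts by about 0.11 kg; after removing that constant bias, the remaining error magnitudes are an order of magnitude smaller, which is exactly the situation the paper's quantile-based bound is designed to summarize.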

  1. Formal Specification and Validation of a Hybrid Connectivity Restoration Algorithm for Wireless Sensor and Actor Networks

    Science.gov (United States)

    Imran, Muhammad; Zafar, Nazir Ahmad

    2012-01-01

Maintaining inter-actor connectivity is extremely crucial in mission-critical applications of Wireless Sensor and Actor Networks (WSANs), as actors have to quickly plan optimal coordinated responses to detected events. Failure of a critical actor partitions the inter-actor network into disjoint segments besides leaving a coverage hole, and thus hinders the network operation. This paper presents a Partitioning detection and Connectivity Restoration (PCR) algorithm to tolerate critical actor failure. As part of pre-failure planning, PCR determines critical/non-critical actors based on localized information and designates each critical node with an appropriate backup (preferably non-critical). The pre-designated backup detects the failure of its primary actor and initiates a post-failure recovery process that may involve coordinated multi-actor relocation. To prove correctness, we construct a formal specification of PCR using Z notation. We model the WSAN topology as a dynamic graph and transform PCR into a corresponding formal specification in Z notation. The formal specification is analyzed and validated using the Z/EVES tool. Moreover, we simulate the specification to quantitatively analyze the efficiency of PCR. Simulation results confirm the effectiveness of PCR and show that it outperforms contemporary schemes found in the literature.

  2. HELOKA-HP thermal-hydraulic model validation and calibration

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Xue Zhou; Ghidersa, Bradut-Eugen; Badea, Aurelian Florin

    2016-11-01

    Highlights: • The electrical heater in HELOKA-HP has been modeled with RELAP5-3D using experimental data as input. • The model has been validated using novel techniques for assimilating experimental data and the representative model parameters with BEST-EST. • The methodology is successfully used for reducing the model uncertainties and provides a quantitative measure of the consistency between the experimental data and the model. - Abstract: The Helium Loop Karlsruhe High Pressure (HELOKA-HP) is an experimental facility for the testing of various helium-cooled components at high temperature (500 °C) and high pressure (8 MPa) for nuclear fusion applications. For modeling the loop thermal dynamics, a thermal-hydraulic model has been created using the system code RELAP5-3D. Recently, new experimental data covering the behavior of the loop components under relevant operational conditions have been made available giving the possibility of validating and calibrating the existing models in order to reduce the uncertainties of the simulated responses. This paper presents an example where such process has been applied for the HELOKA electrical heater model. Using novel techniques for assimilating experimental data, implemented in the computational module BEST-EST, the representative parameters of the model have been calibrated.

  3. Vehicle Scheduling with Network Flow Models

    Directory of Open Access Journals (Sweden)

    Gustavo P. Silva

    2010-04-01

This paper reports the first phase of doctoral research on the use of network flow models for vehicle scheduling (bus scheduling, in particular). Models of this type are still little explored in the literature, mainly because of the difficulty imposed by the large number of resulting variables. The paper presents formulations that treat the single-depot vehicle scheduling problem as a network flow problem, including two techniques for reducing the number of arcs in the constructed network and, consequently, the number of variables to handle. One of these arc-reduction techniques was implemented, and the resulting flow problem was solved, at this stage of the research, with an available version of the network Simplex algorithm. Test problems based on real data from the city of Reading, UK, were solved using the adopted network flow formulation, and the results were compared with those obtained by the heuristic method BOOST, which has been widely tested and commercialized by the School of Computer Studies of the University of Leeds, UK. The results demonstrate the feasibility of handling real problems with the arc-reduction technique.
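One classical way to see the single-depot problem as a network model: when only fleet size matters, the minimum number of vehicles equals the number of trips minus a maximum bipartite matching on the "trip j can follow trip i" relation (a minimum path cover of the compatibility DAG). The sketch below, with hypothetical trip times and turnaround gap, uses Kuhn's augmenting-path matching rather than the network simplex of the paper:

```python
# Trips as (start, end) times; trip j can follow trip i on the same
# vehicle if end_i + GAP <= start_j. Times are illustrative only.
trips = [(0, 10), (5, 20), (12, 25), (22, 40), (26, 45)]
GAP = 2

n = len(trips)
compat = [[trips[i][1] + GAP <= trips[j][0] for j in range(n)]
          for i in range(n)]

match_to = [-1] * n  # match_to[j]: which trip precedes trip j in a chain

def augment(i, seen):
    """Kuhn's algorithm: try to match trip i to some successor trip j."""
    for j in range(n):
        if compat[i][j] and not seen[j]:
            seen[j] = True
            if match_to[j] == -1 or augment(match_to[j], seen):
                match_to[j] = i
                return True
    return False

matched = sum(augment(i, [False] * n) for i in range(n))
print("vehicles needed:", n - matched)
```

Here trips 0→2 and 1→3 can be chained, leaving trip 4 on its own, so three vehicles cover all five trips. The min-cost-flow formulations in the paper generalize this by also pricing deadheading and idle time on the arcs.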

  4. Bicriteria Models of Vehicles Recycling Network Facility Location

    Science.gov (United States)

    Merkisz-Guranowska, Agnieszka

    2012-06-01

The paper presents issues related to the modeling of a vehicle recycling network. The functioning of the recycling network is within the realm of interest of a variety of government agencies, companies participating in the network, vehicle manufacturers and vehicle end users. The interests of these groups need to be considered when deciding on the network organization. The paper presents bicriteria models of network entity location that take into account the preferences of vehicle owners and network participants related to network construction and reorganization. A mathematical formulation of the optimization tasks has been presented, including the objective functions and the constraints that solutions have to comply with. The models were then used to optimize the network in Poland.

  5. Models as Tools of Analysis of a Network Organisation

    Directory of Open Access Journals (Sweden)

    Wojciech Pająk

    2013-06-01

The paper presents models which may be applied as tools for the analysis of a network organisation. The starting point of the discussion is defining the following terms: supply chain and network organisation. Further parts of the paper present the basic assumptions of the analysis of a network organisation. The study then characterises the best-known models utilised in the analysis of a network organisation. The purpose of the article is to define the notion and essence of network organisations and to present the models used for their analysis.

  6. FDA 2011 process validation guidance: lifecycle compliance model.

    Science.gov (United States)

    Campbell, Cliff

    2014-01-01

This article has been written as a contribution to the industry's efforts in migrating from a document-driven to a data-driven compliance mindset. A combination of target product profile, control engineering, and general sum principle techniques is presented as the basis of a simple but scalable lifecycle compliance model in support of modernized process validation. Unit operations and significant variables occupy pole position within the model, documentation requirements being treated as a derivative or consequence of the modeling process. The quality system is repositioned as a subordinate of system quality, this being defined as the integral of related "system qualities". The article represents a structured interpretation of the U.S. Food and Drug Administration's 2011 Guidance for Industry on Process Validation and is based on the author's educational background and his manufacturing/consulting experience in the validation field. The U.S. Food and Drug Administration's Guidance for Industry on Process Validation (2011) provides a wide-ranging and rigorous outline of compliant drug manufacturing requirements relative to its 20th-century predecessor (1987). Its declared focus is patient safety, and it identifies three inter-related (and obvious) stages of the compliance lifecycle. Firstly, processes must be designed, both from a technical and a quality perspective. Secondly, processes must be qualified, providing evidence that the manufacturing facility is fully "roadworthy" and fit for its intended purpose. Thirdly, processes must be verified, meaning that commercial batches must be monitored to ensure that processes remain in a state of control throughout their lifetime.

  7. Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks

    Science.gov (United States)

    Kanevski, Mikhail

    2015-04-01

    The research deals with the adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high-dimensional environmental data. GRNN [1,2,3] are efficient modelling tools for both spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high-dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three-dimensional monthly precipitation data or monthly wind speeds embedded into a 13-dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models N [in the case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training was carried out using a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and to efficiently model complex high-dimensional data, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
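As the abstract notes, the GRNN is essentially the Nadaraya-Watson kernel estimator, with the kernel width tuned by leave-one-out error. The following is a minimal illustrative sketch under that reading, not the authors' software; function names are assumptions, and an isotropic Gaussian kernel is used where the paper's adaptive variant uses anisotropic kernels:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN prediction = Nadaraya-Watson kernel-weighted average."""
    # pairwise squared distances between query and training points
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)

def loo_mse(X, y, sigma):
    """Leave-one-out error, used to select the kernel width sigma."""
    n = len(X)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        pred = grnn_predict(X[mask], y[mask], X[i:i + 1], sigma)[0]
        errs.append((pred - y[i]) ** 2)
    return float(np.mean(errs))
```

In the feature-selection setting described above, one would evaluate `loo_mse` for each candidate feature subset (or per-feature kernel width) and rank subsets by the resulting cross-validation error.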

  8. Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns.

    Directory of Open Access Journals (Sweden)

    Guillaume Chérel

    Full Text Available Models of emergent phenomena are designed to explain global-scale phenomena in terms of local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city-system dynamics to explore the model's predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights into potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic.
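The core of Novelty Search, on which the Pattern Space Exploration method builds, is to reward candidate parameter settings whose simulated pattern lies far from patterns already discovered. A minimal sketch of the novelty score (mean distance to the k nearest neighbours in pattern space); the function name, the Euclidean metric, and k are illustrative assumptions, not details from the paper:

```python
import numpy as np

def novelty_score(pattern, archive, k=3):
    """Mean Euclidean distance from a candidate's pattern descriptor
    to its k nearest neighbours in the archive of patterns seen so far.
    An empty archive makes every candidate maximally novel."""
    if len(archive) == 0:
        return float("inf")
    d = np.linalg.norm(np.asarray(archive) - np.asarray(pattern), axis=1)
    k = min(k, len(d))
    return float(np.sort(d)[:k].mean())
```

An evolutionary loop would then select parameter settings maximising this score and add their patterns to the archive, pushing the search toward unexplored regions of the model's output space.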

  9. Experimental validation of mathematical model for small air compressor

    Directory of Open Access Journals (Sweden)

    Tuhovčák Ján

    2017-01-01

    Full Text Available The development process of reciprocating compressors can be simplified by using simulation tools. Modelling a compressor requires a trade-off between computational effort and the accuracy of the desired results. This paper presents an experimental validation of a simulation tool which can be used to predict compressor behaviour under different working conditions. The mathematical model provides fast results with very good accuracy; however, the model must be calibrated for each particular type of compressor. A small air compressor was used to validate an in-house simulation tool, which is based on mass and energy conservation in a control volume. The simulation tool calculates the pressure and temperature history inside the cylinder, valve characteristics, mass flow and heat losses during the cycle of the compressor. The test bench for the compressor consisted of pressure sensors on both the discharge and suction sides, a temperature sensor on the discharge side and a calorimetric flow meter.
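The control-volume energy balance such a tool is based on can be illustrated in its simplest limit: a closed cylinder with both valves shut and no heat loss, where the balance reduces to the isentropic relations. This is a sketch of that limiting case only (the function name and the ideal-gas ratio of specific heats are assumptions), not the authors' in-house model, which also handles valve flows and heat losses:

```python
def compress_step(p, T, V, dV, gamma=1.4):
    """One adiabatic step of a closed control volume (valves shut).
    With no mass flow and no heat transfer, the energy balance gives
    p*V**gamma = const and T*V**(gamma-1) = const."""
    V_new = V + dV
    p_new = p * (V / V_new) ** gamma
    T_new = T * (V / V_new) ** (gamma - 1.0)
    return p_new, T_new, V_new
```

Stepping the volume from bottom to top dead centre with this rule reproduces the analytic isentropic end state, which is a convenient sanity check before valve and heat-loss terms are added.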

  10. Developing and Validating a Survival Prediction Model for NSCLC Patients Through Distributed Learning Across 3 Countries.

    Science.gov (United States)

    Jochems, Arthur; Deist, Timo M; El Naqa, Issam; Kessler, Marc; Mayo, Chuck; Reeves, Jackson; Jolly, Shruti; Matuszak, Martha; Ten Haken, Randall; van Soest, Johan; Oberije, Cary; Faivre-Finn, Corinne; Price, Gareth; de Ruysscher, Dirk; Lambin, Philippe; Dekker, Andre

    2017-10-01

    Tools for survival prediction for non-small cell lung cancer (NSCLC) patients treated with chemoradiation or radiation therapy are of limited quality. In this work, we developed a predictive model of survival at 2 years. The model is based on a large volume of historical patient data and serves as a proof of concept to demonstrate the distributed learning approach. Clinical data from 698 lung cancer patients, treated with curative intent with chemoradiation or radiation therapy alone, were collected and stored at 2 different cancer institutes (559 patients at Maastro Clinic [the Netherlands] and 139 at the University of Michigan [United States]). The model was further validated on 196 patients originating from The Christie (United Kingdom). A Bayesian network model was adapted for distributed learning (the animation can be viewed at https://www.youtube.com/watch?v=ZDJFOxpwqEA). Two-year posttreatment survival was chosen as the endpoint. The Maastro clinic cohort data are publicly available at https://www.cancerdata.org/publication/developing-and-validating-survival-prediction-model-nsclc-patients-through-distributed, and the developed models can be found at www.predictcancer.org. Variables included in the final model were T and N category, age, performance status, and total tumor dose. The model has an area under the curve (AUC) of 0.66 on the external validation set and an AUC of 0.62 on a 5-fold cross validation. A model based on the T and N category performed with an AUC of 0.47 on the validation set, significantly worse than our model (P<.001). Learning the model in a centralized or distributed fashion yields a minor difference on the probabilities of the conditional probability tables (0.6%); the discriminative performance of the models on the validation set is similar (P=.26). Distributed learning from federated databases allows learning of predictive models on data originating from multiple institutions while avoiding many of the data-sharing barriers. We believe
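The AUC figures quoted (0.66 external, 0.62 cross-validated) are the standard rank-based statistic: the probability that a randomly chosen patient who survived is scored above a randomly chosen patient who did not. A minimal sketch of how such a value is computed from predicted survival probabilities; this is pure illustration of the metric, not the authors' evaluation pipeline:

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: fraction of
    (positive, negative) pairs ranked correctly, ties counting half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

On this scale 0.5 is chance level, which puts the T/N-only baseline's 0.47 at no better than chance and makes the gap to the full model's 0.66 easy to interpret.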

  11. Validation of a parametric finite element human femur model.

    Science.gov (United States)

    Klein, Katelyn F; Hu, Jingwen; Reed, Matthew P; Schneider, Lawrence W; Rupp, Jonathan D

    2017-05-19

    Finite element (FE) models with geometry and material properties that are parametric with subject descriptors, such as age and body shape/size, are being developed to incorporate population variability into crash simulations. However, the validation methods currently being used with these parametric models do not assess whether model predictions are reasonable in the space over which the model is intended to be used. This study presents a parametric model of the femur and applies a unique validation paradigm to this parametric femur model that characterizes whether model predictions reproduce experimentally observed trends. FE models of male and female femurs with geometries that are parametric with age, femur length, and body mass index (BMI) were developed based on existing statistical models that predict femur geometry. These parametric FE femur models were validated by comparing responses from combined loading tests of femoral shafts to simulation results from FE models of the corresponding femoral shafts whose geometry was predicted using the associated age, femur length, and BMI. The effects of subject variables on model responses were also compared with trends in the experimental data set by fitting similarly parameterized statistical models to both the results of the experimental data and the corresponding FE model results and then comparing fitted model coefficients for the experimental and predicted data sets. The average error in impact force at experimental failure for the parametric models was 5%. The coefficients of a statistical model fit to simulation data were within one standard error of the coefficients of a similarly parameterized model of the experimental data except for the age parameter, likely because material properties used in simulations were not varied with specimen age. In simulations to explore the effects of femur length, BMI, and age on impact response, only BMI significantly affected response for both men and women, with increasing
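The validation paradigm described here, fitting similarly parameterized statistical models to the experimental and simulated responses and checking that coefficients agree within one standard error, can be sketched with ordinary least squares. Function names are illustrative assumptions; the study's actual regressors are age, femur length, and BMI:

```python
import numpy as np

def fit_with_se(X, y):
    """Ordinary least squares with standard errors of the coefficients.
    X is a 1-D or 2-D regressor array; an intercept column is added."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]                      # residual degrees of freedom
    sigma2 = resid @ resid / dof                   # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)          # coefficient covariance
    return beta, np.sqrt(np.diag(cov))

def coefficients_agree(beta_exp, se_exp, beta_sim):
    """True if every simulated coefficient lies within one standard
    error of the experimentally fitted coefficient."""
    return bool(np.all(np.abs(beta_sim - beta_exp) <= se_exp))
```

Applying `coefficients_agree` to the experimental and simulation fits mirrors the paper's finding: agreement on all parameters except the one (age) whose driving input was held fixed in the simulations.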

  12. Natural Models for Evolution on Networks

    CERN Document Server

    Mertzios, George B; Raptopoulos, Christoforos; Spirakis, Paul G

    2011-01-01

    Evolutionary dynamics have traditionally been studied in the context of homogeneous populations, mainly described by the Moran process. Recently, this approach was generalized in [LHN] by arranging individuals on the nodes of a network. Undirected networks seem to have a smoother behavior than directed ones, and thus it is more challenging to find suppressors/amplifiers of selection. In this paper we present the first class of undirected graphs which act as suppressors of selection, by achieving a fixation probability that is at most one half of that of the complete graph, as the number of vertices increases. Moreover, we provide some generic upper and lower bounds for the fixation probability of general undirected graphs. As our main contribution, we introduce a natural alternative to the model proposed in [LHN], where all individuals interact simultaneously and the result is a compromise between aggressive and non-aggressive individuals. That is, the behavior of the individuals in our new m...
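For reference, the baseline against which suppressors of selection are measured is the complete graph (the well-mixed Moran process), whose fixation probability for a single mutant has a closed form. A short sketch of that standard formula (the function name is an assumption; the formula itself is the classical result, not this paper's contribution):

```python
def fixation_probability(r, n):
    """Fixation probability of one mutant of relative fitness r in a
    well-mixed Moran process of n individuals:
    rho = (1 - 1/r) / (1 - r**(-n)), with the neutral limit 1/n."""
    if r == 1.0:
        return 1.0 / n
    return (1.0 - 1.0 / r) / (1.0 - r ** (-n))
```

A graph family is a suppressor of selection when, for advantageous mutants (r > 1), its fixation probability falls below this complete-graph value; the paper's construction achieves at most half of it as the number of vertices grows.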

  13. Predicting the ungauged basin: model validation and realism assessment

    Science.gov (United States)

    van Emmerik, Tim; Mulder, Gert; Eilander, Dirk; Piet, Marijn; Savenije, Hubert

    2016-04-01

    The hydrological decade on Predictions in Ungauged Basins (PUB) [1] led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of model outcome has not been discussed to a great extent. With this study [2] we aim to contribute to the discussion on how one can determine the value and validity of a hydrological model developed for an ungauged basin. As in many cases no local, or even regional, data are available, alternative methods should be applied. Using a PUB case study in a genuinely ungauged basin in southern Cambodia, we give several examples of how one can use different types of soft data to improve model design, calibrate and validate the model, and assess the realism of the model output. A rainfall-runoff model was coupled to an irrigation reservoir, allowing the use of additional and unconventional data. The model was mainly forced with remote sensing data, and local knowledge was used to constrain the parameters. Model realism assessment was done using data from surveys. This resulted in a successful reconstruction of the reservoir dynamics, and revealed the different hydrological characteristics of the two topographical classes. We do not present a generic approach that can be transferred to other ungauged catchments, but we aim to show how clever model design and alternative data acquisition can result in a valuable hydrological model for ungauged catchments. [1] Sivapalan, M., Takeuchi, K., Franks, S., Gupta, V., Karambiri, H., Lakshmi, V., et al. (2003). IAHS decade on predictions in ungauged basins (PUB), 2003-2012: shaping an exciting future for the hydrological sciences. Hydrol. Sci. J. 48, 857-880. doi: 10.1623/hysj.48.6.857.51421 [2] van Emmerik, T., Mulder, G., Eilander, D., Piet, M. and Savenije, H. (2015). Predicting the ungauged basin: model validation and realism assessment

  14. A Temperature-Dependent Battery Model for Wireless Sensor Networks.

    Science.gov (United States)

    Rodrigues, Leonardo M; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco

    2017-02-22

    Energy consumption is a major issue in Wireless Sensor Networks (WSNs), as nodes are powered by chemical batteries with an upper bounded lifetime. Estimating the lifetime of batteries is a difficult task, as it depends on several factors, such as operating temperatures and discharge rates. Analytical battery models can be used for estimating both the battery lifetime and the voltage behavior over time. Still, available models usually do not consider the impact of operating temperatures on the battery behavior. The target of this work is to extend the widely-used Kinetic Battery Model (KiBaM) to include the effect of temperature on the battery behavior. The proposed Temperature-Dependent KiBaM (T-KiBaM) is able to handle operating temperatures, providing better estimates for the battery lifetime and voltage behavior. The performed experimental validation shows that T-KiBaM achieves an average accuracy error smaller than 0.33%, when estimating the lifetime of Ni-MH batteries for different temperature conditions. In addition, T-KiBaM significantly improves the original KiBaM voltage model. The proposed model can be easily adapted to handle other battery technologies, enabling the consideration of different WSN deployments.
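The Kinetic Battery Model that T-KiBaM extends represents the battery as two charge wells: an available well that supplies the load directly and a bound well that replenishes it through a rate-limited flow. A minimal Euler-integration sketch of the original KiBaM lifetime estimate; parameter names are illustrative, and the temperature dependence T-KiBaM adds (e.g. making the capacity and rate constant functions of temperature) is omitted:

```python
def kibam_lifetime(capacity, c, k, current, dt=1.0):
    """Kinetic Battery Model: total charge `capacity` is split between
    an available well (fraction c) and a bound well; k controls the
    diffusion between them. Returns the time until the available
    charge is exhausted under a constant discharge `current`."""
    y1 = c * capacity          # available charge
    y2 = (1.0 - c) * capacity  # bound charge
    t = 0.0
    while y1 > 0.0:
        h1 = y1 / c            # "height" of the available well
        h2 = y2 / (1.0 - c)    # "height" of the bound well
        flow = k * (h2 - h1)   # bound charge diffusing to the load side
        y1 += (flow - current) * dt
        y2 -= flow * dt
        t += dt
    return t
```

Because the bound well cannot keep up at high discharge rates, the delivered capacity (current times lifetime) shrinks as the current grows, which is the rate-capacity effect the model exists to capture.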

  15. A Temperature-Dependent Battery Model for Wireless Sensor Networks

    Science.gov (United States)

    Rodrigues, Leonardo M.; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco

    2017-01-01

    Energy consumption is a major issue in Wireless Sensor Networks (WSNs), as nodes are powered by chemical batteries with an upper bounded lifetime. Estimating the lifetime of batteries is a difficult task, as it depends on several factors, such as operating temperatures and discharge rates. Analytical battery models can be used for estimating both the battery lifetime and the voltage behavior over time. Still, available models usually do not consider the impact of operating temperatures on the battery behavior. The target of this work is to extend the widely-used Kinetic Battery Model (KiBaM) to include the effect of temperature on the battery behavior. The proposed Temperature-Dependent KiBaM (T-KiBaM) is able to handle operating temperatures, providing better estimates for the battery lifetime and voltage behavior. The performed experimental validation shows that T-KiBaM achieves an average accuracy error smaller than 0.33%, when estimating the lifetime of Ni-MH batteries for different temperature conditions. In addition, T-KiBaM significantly improves the original KiBaM voltage model. The proposed model can be easily adapted to handle other battery technologies, enabling the consideration of different WSN deployments. PMID:28241444

  16. Validating a Social Model Wargame: An Analysis of the Green Country Model

    Science.gov (United States)

    2012-12-01

    validating them. One recent effort at validation comes from Marlin (2009), who used the Peace Support Operations Model (PSOM) as a starting point...Nichiporuk, B. (2009), Assessing irregular warfare: A framework for intelligence analysis, Santa Monica, CA: Rand Publishing. Marlin, B. J. (2009

  17. Improvement and Validation of Weld Residual Stress Modelling Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Weilin; Gunnars, Jens (Inspecta Technology AB, Stockholm (Sweden)); Dong, Pingsha; Hong, Jeong K. (Center for Welded Structures Research, Battelle, Columbus, OH (United States))

    2009-06-15

    The objective of this work is to identify and evaluate improvements to the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study focused on the development and validation of an improved weld residual stress modelling procedure, taking advantage of recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on the use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data are not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through-thickness stress distributions by validation against experimental measurements. Three austenitic stainless steel butt-weld cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data are available, then a mixed hardening model should be used.

  18. Validation of Advanced EM Models for UXO Discrimination

    CERN Document Server

    Weichman, Peter B

    2012-01-01

    The work reported here details basic validation of our advanced physics-based EMI forward and inverse models against data collected by the NRL TEMTADS system. The data was collected under laboratory-type conditions using both artificial spheroidal targets and real UXO. The artificial target models are essentially exact, and enable detailed comparison of theory and data in support of measurement platform characterization and target identification. Real UXO targets cannot be treated exactly, but it is demonstrated that quantitative comparisons of the data with the spheroid models nevertheless aids in extracting key target discrimination information, such as target geometry and hollow target shell thickness.

  19. Verifying and Validating Proposed Models for FSW Process Optimization

    Science.gov (United States)

    Schneider, Judith

    2008-01-01

    This slide presentation reviews Friction Stir Welding (FSW) and the attempts to model the process in order to optimize and improve the process. The studies are ongoing to validate and refine the model of metal flow in the FSW process. There are slides showing the conventional FSW process, a couple of weld tool designs and how the design interacts with the metal flow path. The two basic components of the weld tool are shown, along with geometries of the shoulder design. Modeling of the FSW process is reviewed. Other topics include (1) Microstructure features, (2) Flow Streamlines, (3) Steady-state Nature, and (4) Grain Refinement Mechanisms

  20. Dynamic Modeling of Wind Turbine Gearboxes and Experimental Validation

    DEFF Research Database (Denmark)

    Pedersen, Rune

    is presented. The model takes into account the effects of load and applied grinding corrections. The results are verified by comparing to simulated and experimental results reported in the existing literature. Using gear data loosely based on a 1 MW wind turbine gearbox, the gear mesh stiffness is expanded...... analysis in relation to gear dynamics. A multibody model of two complete 2.3 MW wind turbine gearboxes mounted back-to-back in a test rig is built. The mean values of the proposed gear mesh stiffnesses are included. The model is validated by comparing with calculated and measured eigenfrequencies and mode