WorldWideScience

Sample records for network methodology applied

  1. Methodology applied to develop the DHIE: applied methodology

    CSIR Research Space (South Africa)

    Herselman, Marlien

    2016-12-01

Full Text Available This section will address the methodology that was applied to develop the South African Digital Health Innovation Ecosystem (DHIE). Each chapter under Section B represents a specific phase in the methodology....

  2. [Methodological novelties applied to the anthropology of food: agent-based models and social networks analysis].

    Science.gov (United States)

    Díaz Córdova, Diego

    2016-01-01

    The aim of this article is to introduce two methodological strategies that have not often been utilized in the anthropology of food: agent-based models and social networks analysis. In order to illustrate these methods in action, two cases based in materials typical of the anthropology of food are presented. For the first strategy, fieldwork carried out in Quebrada de Humahuaca (province of Jujuy, Argentina) regarding meal recall was used, and for the second, elements of the concept of "domestic consumption strategies" applied by Aguirre were employed. The underlying idea is that, given that eating is recognized as a "total social fact" and, therefore, as a complex phenomenon, the methodological approach must also be characterized by complexity. The greater the number of methods utilized (with the appropriate rigor), the better able we will be to understand the dynamics of feeding in the social environment.
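The social network strategy described above can be sketched in a few lines; the households and food-sharing ties below are invented for illustration and are not drawn from the Quebrada de Humahuaca fieldwork:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality: a node's ties divided by (n - 1)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)
    return {node: len(neigh) / (n - 1) for node, neigh in adj.items()}

# Hypothetical food-sharing ties among five households
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E")]
centrality = degree_centrality(edges)
```

Here household A, tied to three of the four other households, emerges as the most central node; this is the kind of structural observation the article pairs with ethnographic data.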

  3. Bridging Minds: A Mixed Methodology to Assess Networked Flow.

    Science.gov (United States)

    Galimberti, Carlo; Chirico, Alice; Brivio, Eleonora; Mazzoni, Elvis; Riva, Giuseppe; Milani, Luca; Gaggioli, Andrea

    2015-01-01

The main goal of this contribution is to present a methodological framework to study Networked Flow, a bio-psycho-social theory of collective creativity, by applying it to creative processes occurring via a computer network. First, we draw on the definition of Networked Flow to identify the key methodological requirements of this model. Next, we present the rationale of a mixed methodology, which aims at combining qualitative, quantitative and structural analysis of group dynamics to obtain a rich longitudinal dataset. We argue that this integrated strategy holds potential for describing the complex dynamics of creative collaboration, by linking the experiential features of collaborative experience (flow, social presence) with the structural features of collaboration dynamics (network indexes) and the collaboration outcome (the creative product). Finally, we report on our experience with using this methodology in blended collaboration settings (including both face-to-face and virtual meetings), to identify open issues and provide future research directions.

  4. Corporate Social Networks Applied in the Classroom

    Directory of Open Access Journals (Sweden)

    Hugo de Juan-Jordán

    2016-10-01

This study also proposes guidelines and best practices, derived from the experience of using and adopting social networks in class, in order to improve the learning process and innovate in the methodology applied to education.

  5. Social network analysis applied to team sports analysis

    CERN Document Server

    Clemente, Filipe Manuel; Mendes, Rui Sousa

    2016-01-01

Explaining how graph theory and social network analysis can be applied to team sports analysis, this book presents useful approaches, models and methods that can be used to characterise the overall properties of team networks and identify the prominence of each team player. Exploring the different possible network metrics that can be utilised in sports analysis, their possible applications and variances from situation to situation, the respective chapters present an array of illustrative case studies. Identifying the general concepts of social network analysis and network centrality metrics, readers are shown how to generate a methodological protocol for data collection. As such, the book provides a valuable resource for students of the sport sciences, sports engineering, applied computation and the social sciences.
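As a minimal sketch of the kind of metric such analyses use, the snippet below computes a weighted degree ("strength") for each player from invented pass counts; a real analysis would build the network from match event data:

```python
def player_strength(passes):
    """Weighted degree: total passes sent plus received per player."""
    strength = {}
    for sender, receiver, count in passes:
        strength[sender] = strength.get(sender, 0) + count
        strength[receiver] = strength.get(receiver, 0) + count
    return strength

# Hypothetical pass counts (sender, receiver, passes) from one match
passes = [("GK", "DF", 12), ("DF", "MF", 20), ("MF", "FW", 8), ("MF", "DF", 10)]
strength = player_strength(passes)
```

The defender and midfielder dominate ball circulation in this toy network, the sort of prominence ranking the book formalizes with centrality metrics.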

  6. Actor/Actant-Network Theory as Emerging Methodology for ...

    African Journals Online (AJOL)

    This paper deliberates on actor/actant-network theory (AANT) as methodology for policy research in environmental education (EE). Insights are drawn from work that applied AANT to research environmental policy processes surrounding the formulation and implementation of South Africa's Plastic Bags Regulations of 2003.

  7. Applying a Network-Lens to Hospitality Business Research: A New Research Agenda

    Directory of Open Access Journals (Sweden)

    Florian AUBKE

    2014-06-01

    Full Text Available Hospitality businesses are first and foremost places of social interaction. This paper argues for an inclusion of network methodology into the tool kit of hospitality researchers. This methodology focuses on the interaction of people rather than applying an actor-focused view, which currently seems dominant in hospitality research. Outside the field, a solid research basis has been formed, upon which hospitality researchers can build. The paper introduces the foundations of network theory and its applicability to the study of organizations. A brief methodological introduction is provided and potential applications and research topics relevant to the hospitality field are suggested.

  8. Networks as integrated in research methodologies in PER

    DEFF Research Database (Denmark)

    Bruun, Jesper

    2016-01-01

In recent years a number of researchers within the PER community have started using network analysis as a new methodology to extend our understanding of teaching and learning physics by viewing these as complex systems. In this paper, I give examples of social, cognitive, and action mapping networks and how they can be analyzed. In so doing I show how a network can be methodologically described as a set of relations between a set of entities, and how a network can be characterized and analyzed as a mathematical object. Then, as an illustrative example, I discuss a relatively new example of using networks to create insightful maps of learning discussions. To conclude, I argue that conceptual blending is a powerful framework for constructing "mixed methods" methodologies that may integrate diverse theories and other methodologies with network methodologies.

  9. A multi-criteria decision aid methodology to design electric vehicles public charging networks

    Directory of Open Access Journals (Sweden)

    João Raposo

    2015-05-01

Full Text Available This article presents a new multi-criteria decision aid methodology, dynamic-PROMETHEE, here used to design electric vehicle charging networks. In applying this methodology to a Portuguese city, results suggest that it is effective in designing electric vehicle charging networks, generating time and policy based scenarios, considering offer and demand and the city’s urban structure. Dynamic-PROMETHEE adds to the already known PROMETHEE’s characteristics other useful features, such as decision memory over time, versatility and adaptability. The case study, used here to present the dynamic-PROMETHEE, served as inspiration and base to create this new methodology. It can be used to model different problems and scenarios that may present similar requirement characteristics.

  10. A multi-criteria decision aid methodology to design electric vehicles public charging networks

    Science.gov (United States)

    Raposo, João; Rodrigues, Ana; Silva, Carlos; Dentinho, Tomaz

    2015-05-01

This article presents a new multi-criteria decision aid methodology, dynamic-PROMETHEE, here used to design electric vehicle charging networks. In applying this methodology to a Portuguese city, results suggest that it is effective in designing electric vehicle charging networks, generating time and policy based scenarios, considering offer and demand and the city's urban structure. Dynamic-PROMETHEE adds to the already known PROMETHEE's characteristics other useful features, such as decision memory over time, versatility and adaptability. The case study, used here to present the dynamic-PROMETHEE, served as inspiration and base to create this new methodology. It can be used to model different problems and scenarios that may present similar requirement characteristics.

  11. Applying a Network-Lens to Hospitality Business Research: A New Research Agenda

    OpenAIRE

    AUBKE, Florian

    2014-01-01

    Hospitality businesses are first and foremost places of social interaction. This paper argues for an inclusion of network methodology into the tool kit of hospitality researchers. This methodology focuses on the interaction of people rather than applying an actor-focused view, which currently seems dominant in hospitality research. Outside the field, a solid research basis has been formed, upon which hospitality researchers can build. The paper introduces the foundations ...

  12. A neural network based methodology to predict site-specific spectral acceleration values

    Science.gov (United States)

    Kamatchi, P.; Rajasankar, J.; Ramana, G. V.; Nagpal, A. K.

    2010-12-01

    A general neural network based methodology that has the potential to replace the computationally-intensive site-specific seismic analysis of structures is proposed in this paper. The basic framework of the methodology consists of a feed forward back propagation neural network algorithm with one hidden layer to represent the seismic potential of a region and soil amplification effects. The methodology is implemented and verified with parameters corresponding to Delhi city in India. For this purpose, strong ground motions are generated at bedrock level for a chosen site in Delhi due to earthquakes considered to originate from the central seismic gap of the Himalayan belt using necessary geological as well as geotechnical data. Surface level ground motions and corresponding site-specific response spectra are obtained by using a one-dimensional equivalent linear wave propagation model. Spectral acceleration values are considered as a target parameter to verify the performance of the methodology. Numerical studies carried out to validate the proposed methodology show that the errors in predicted spectral acceleration values are within acceptable limits for design purposes. The methodology is general in the sense that it can be applied to other seismically vulnerable regions and also can be updated by including more parameters depending on the state-of-the-art in the subject.
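The forward pass of the one-hidden-layer feed-forward network described above can be sketched as follows; the weights and inputs are illustrative placeholders, since the paper's trained parameters for the Delhi case are not reproduced here:

```python
import math

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer feed-forward network
    (sigmoid hidden units, linear output)."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w_row, x)) + b)
              for w_row, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

# Illustrative weights only; a real model would be trained by back-propagation
# on (site parameters -> spectral acceleration) pairs.
x = [0.5, 1.0]                        # e.g. scaled magnitude and soil parameter
w_hidden = [[0.2, -0.4], [0.7, 0.1]]  # one row of weights per hidden unit
b_hidden = [0.0, -0.2]
w_out = [1.5, -0.8]
b_out = 0.1
y = forward(x, w_hidden, b_hidden, w_out, b_out)
```

Training adjusts the weight matrices so that `y` approximates the target spectral acceleration for each input; the network then replaces the computationally intensive site-specific analysis at prediction time.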

  13. GMDH and neural networks applied in temperature sensors monitoring

    International Nuclear Information System (INIS)

    Bueno, Elaine Inacio; Pereira, Iraci Martinez; Silva, Antonio Teixeira e

    2009-01-01

In this work a monitoring system was developed based on the Group Method of Data Handling (GMDH) and Artificial Neural Networks (ANNs) methodologies. This methodology was applied to the IEA-R1 research reactor at IPEN by using a database obtained from a theoretical model of the reactor. The IEA-R1 research reactor is a pool-type reactor of 5 MW, cooled and moderated by light water, which uses graphite and beryllium as reflectors. The theoretical model was developed using the Matlab GUIDE toolbox. The equations are based on the IEA-R1 mass and energy inventory balance, and physical as well as operational aspects are taken into consideration. The methodology uses the outputs of the GMDH algorithm as input variables to the ANNs. The results obtained using the GMDH and ANNs were better than those obtained using only ANNs. (author)

  14. Investigating DMOs through the Lens of Social Network Analysis: Theoretical Gaps, Methodological Challenges and Practitioner Perspectives

    Directory of Open Access Journals (Sweden)

    Dean HRISTOV

    2015-06-01

Full Text Available The extant literature on networks in tourism management research has traditionally acknowledged destinations as the primary unit of analysis. This paper takes an alternative perspective and positions Destination Management Organisations (DMOs) at the forefront of today’s tourism management research agenda. Whilst providing a relatively structured approach to generating enquiry, network research vis-à-vis Social Network Analysis (SNA) in DMOs is often surrounded by serious impediments. Embedded in the network literature, this conceptual article aims to provide a practitioner perspective on addressing the obstacles to undertaking network studies in DMO organisations. A simple, three-step methodological framework for investigating DMOs as interorganisational networks of member organisations is proposed in response to complexities in network research. The rationale behind introducing such a framework lies in the opportunity to trigger discussions and encourage further academic contributions embedded in both theory and practice. Academic and practitioner contributions are likely to yield insights into the importance of network methodologies applied to DMO organisations.

  15. A Bayesian maximum entropy-based methodology for optimal spatiotemporal design of groundwater monitoring networks.

    Science.gov (United States)

    Hosseini, Marjan; Kerachian, Reza

    2017-09-01

This paper presents a new methodology for analyzing the spatiotemporal variability of water table levels and redesigning a groundwater level monitoring network (GLMN) using the Bayesian Maximum Entropy (BME) technique and a multi-criteria decision-making approach based on ordered weighted averaging (OWA). The spatial sampling is determined using a hexagonal gridding pattern and a new method, which is proposed to assign a removal priority number to each pre-existing station. To design temporal sampling, a new approach is also applied to consider uncertainty caused by lack of information. In this approach, different time lag values are tested against another source of information, namely the simulation results of a numerical groundwater flow model. Furthermore, to incorporate the existing uncertainties in available monitoring data, the flexibility of the BME interpolation technique is taken into account in applying soft data and improving the accuracy of the calculations. To examine the methodology, it is applied to the Dehgolan plain in northwestern Iran. Based on the results, a configuration of 33 monitoring stations for a regular hexagonal grid of side length 3600 m is proposed, in which the time lag between samples is equal to 5 weeks. Since the variance estimation errors of the BME method are almost identical for the redesigned and existing networks, the redesigned monitoring network is more cost-effective and efficient than the existing monitoring network with 52 stations and monthly sampling frequency.
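The OWA aggregation step of the multi-criteria approach can be sketched as below; the criterion scores and weight vector are invented for illustration. Unlike a plain weighted average, OWA applies the weights to the *ranked* values:

```python
def owa(scores, weights):
    """Ordered weighted averaging: weights apply to ranked values, not to
    fixed criteria, so one weight vector can model optimistic (top-heavy)
    or pessimistic (bottom-heavy) aggregation."""
    ranked = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ranked))

# Hypothetical scores for one monitoring station under three criteria;
# the weights emphasize whichever criterion the station performs best on.
scores = [0.2, 0.9, 0.5]
weights = [0.5, 0.3, 0.2]
value = owa(scores, weights)
```

Stations can then be ranked by their OWA values, for instance to assign the removal priority numbers mentioned in the abstract.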

  16. A Fault Diagnosis Methodology for Gear Pump Based on EEMD and Bayesian Network.

    Science.gov (United States)

    Liu, Zengkai; Liu, Yonghong; Shan, Hongkai; Cai, Baoping; Huang, Qing

    2015-01-01

This paper proposes a fault diagnosis methodology for a gear pump based on the ensemble empirical mode decomposition (EEMD) method and the Bayesian network. Essentially, the presented scheme is a multi-source information fusion based methodology. Compared with conventional fault diagnosis using only EEMD, the proposed method is able to take advantage of all useful information besides sensor signals. The presented diagnostic Bayesian network consists of a fault layer, a fault feature layer and a multi-source information layer. Vibration signals from sensor measurement are decomposed by the EEMD method and the energies of the intrinsic mode functions (IMFs) are calculated as fault features. These features are added to the fault feature layer in the Bayesian network. The other sources of useful information are added to the information layer. The generalized three-layer Bayesian network can be developed by fully incorporating faults and fault symptoms as well as other useful information such as naked-eye inspection and maintenance records. Therefore, diagnostic accuracy and capacity can be improved. The proposed methodology is applied to the fault diagnosis of a gear pump, and the structure and parameters of the Bayesian network are established. Compared with artificial neural network and support vector machine classification algorithms, the proposed model has the best diagnostic performance when only sensor data is used. A case study has demonstrated that some information from human observation or system repair records is very helpful to the fault diagnosis. It is effective and efficient in diagnosing faults based on uncertain, incomplete information.
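The fault-feature step, turning EEMD output into energies for the Bayesian network's feature layer, can be sketched as follows; the IMF values are invented, and a real pipeline would obtain them from an EEMD decomposition of the measured vibration signal:

```python
def imf_energies(imfs):
    """Energy of each intrinsic mode function, E_i = sum of squared samples,
    normalized so the feature vector sums to one."""
    energies = [sum(v * v for v in imf) for imf in imfs]
    total = sum(energies)
    return [e / total for e in energies]

# Hypothetical IMFs from decomposing a short vibration signal
imfs = [[0.5, -0.4, 0.3], [1.2, -1.0, 0.8], [0.1, 0.05, -0.1]]
features = imf_energies(imfs)
```

A concentration of energy in particular IMFs (here the second one) is the kind of signature the fault feature layer passes on to the Bayesian network.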

  17. Toward methodological emancipation in applied health research.

    Science.gov (United States)

    Thorne, Sally

    2011-04-01

    In this article, I trace the historical groundings of what have become methodological conventions in the use of qualitative approaches to answer questions arising from the applied health disciplines and advocate an alternative logic more strategically grounded in the epistemological orientations of the professional health disciplines. I argue for an increasing emphasis on the modification of conventional qualitative approaches to the particular knowledge demands of the applied practice domain, challenging the merits of what may have become unwarranted attachment to theorizing. Reorienting our methodological toolkits toward the questions arising within an evidence-dominated policy agenda, I encourage my applied health disciplinary colleagues to make themselves useful to that larger project by illuminating that which quantitative research renders invisible, problematizing the assumptions on which it generates conclusions, and filling in the gaps in knowledge needed to make decisions on behalf of people and populations.

  18. Methodology for Simulation and Analysis of Complex Adaptive Supply Network Structure and Dynamics Using Information Theory

    Directory of Open Access Journals (Sweden)

    Joshua Rodewald

    2016-10-01

Full Text Available Supply networks existing today in many industries can behave as complex adaptive systems, making them more difficult to analyze and assess. Fully understanding both the complex static and dynamic structures of a complex adaptive supply network (CASN) is key to being able to make more informed management decisions and prioritize resources and production throughout the network. Previous efforts to model and analyze CASNs have been impeded by the complex, dynamic nature of the systems. However, drawing from other complex adaptive systems sciences, information theory provides a model-free methodology removing many of those barriers, especially concerning complex network structure and dynamics. With minimal information about the network nodes, transfer entropy can be used to reverse engineer the network structure, while local transfer entropy can be used to analyze the network structure’s dynamics. Both simulated and real-world networks were analyzed using this methodology. Applying the methodology to CASNs allows the practitioner to capitalize on observations from the highly multidisciplinary field of information theory, which provides insights into a CASN’s self-organization, emergence, stability/instability, and distributed computation. This not only provides managers with a more thorough understanding of a system’s structure and dynamics for management purposes, but also opens up research opportunities into eventual strategies to monitor and manage emergence and adaption within the environment.
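A minimal sketch of the transfer entropy calculation at the heart of this methodology, for binary time series with history length 1; the toy data are invented, and real CASN analyses would use longer histories and observed node states:

```python
from collections import Counter
from math import log2

def transfer_entropy(source, target):
    """Discrete transfer entropy T(source -> target) with history length 1:
    how much knowing source[t] reduces uncertainty about target[t+1]
    beyond what target[t] alone provides."""
    triples = Counter(zip(target[1:], target[:-1], source[:-1]))
    pairs_xy = Counter(zip(target[:-1], source[:-1]))
    pairs_xx = Counter(zip(target[1:], target[:-1]))
    singles = Counter(target[:-1])
    n = len(target) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n                                  # p(x_{t+1}, x_t, y_t)
        p_cond_xy = c / pairs_xy[(x0, y0)]               # p(x_{t+1} | x_t, y_t)
        p_cond_x = pairs_xx[(x1, x0)] / singles[x0]      # p(x_{t+1} | x_t)
        te += p_joint * log2(p_cond_xy / p_cond_x)
    return te

# Toy example: the target copies the source with a one-step delay,
# so the source's past fully determines the target's next value.
source = [0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
target = [0] + source[:-1]
te = transfer_entropy(source, target)
```

A clearly positive value flags a directed influence from source to target; computing this for every node pair is how the structure of a network can be reverse engineered from observations alone.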

  19. A guide to distribution network operator connection and use of system methodologies and charging

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-05-04

    This report aims to help those developing or planning to develop power generation schemes connected to local electricity distribution systems (distributed generation) to understand the various regional network charging schemes in the UK. It is also intended to act as a route map to understand distribution charges as they are currently applied; further changes in charging arrangements between 2005 and 2010 are indicated and signposts to sources of further information are provided. The report explains the structure of distribution networks, the outcome of the regulatory review of distribution pricing undertaken by the Office of Gas and Electricity Markets (Ofgem) applicable from 1 April 2005 and how this affects distribution network operators (DNOs) and their distribution charges. The report considers: the energy policy framework in the UK; the commercial and regulatory framework that applies to distributed generators; DNOs and their regulatory framework; network charging methodologies and principles; charging statements; and areas of likely future changes. Individual schedules and contact details are given in an appendix for each DNO.

  20. A Generic Methodology for Superstructure Optimization of Different Processing Networks

    DEFF Research Database (Denmark)

    Bertran, Maria-Ona; Frauzem, Rebecca; Zhang, Lei

    2016-01-01

In this paper, we propose a generic computer-aided methodology for synthesis of different processing networks using superstructure optimization. The methodology can handle different network optimization problems of various application fields. It integrates databases with a common data architecture, a generic model to represent the processing steps, and appropriate optimization tools. A special software interface has been created to automate the steps in the methodology workflow, allow the transfer of data between tools and obtain the mathematical representation of the problem as required...
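The selection logic of superstructure optimization can be sketched by brute force on a toy network; the stage and technology names and costs below are invented, and problems of realistic size are solved with MILP tools rather than enumeration:

```python
from itertools import product

def optimize_superstructure(stages, cost):
    """Brute-force superstructure optimization: each processing stage offers
    alternative technologies; enumerate all routes and keep the cheapest."""
    best = min(product(*stages), key=lambda route: sum(cost[t] for t in route))
    return best, sum(cost[t] for t in best)

# Hypothetical three-stage processing network: pretreatment, conversion, separation
stages = [["acid", "enzyme"], ["ferment", "gasify"], ["distill", "membrane"]]
cost = {"acid": 3, "enzyme": 5, "ferment": 4, "gasify": 6, "distill": 2, "membrane": 3}
route, total = optimize_superstructure(stages, cost)
```

The superstructure here encodes eight candidate processing networks; the optimizer's job is to pick one technology per stage so that the overall objective (here, a simple cost sum) is minimized.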

  1. Applying Model Based Systems Engineering to NASA's Space Communications Networks

    Science.gov (United States)

    Bhasin, Kul; Barnes, Patrick; Reinert, Jessica; Golden, Bert

    2013-01-01

System engineering practices for complex systems and networks now require that requirements, architecture, and concept of operations product development teams simultaneously harmonize their activities to provide timely, useful and cost-effective products. When dealing with complex systems of systems, traditional systems engineering methodology quickly falls short of achieving project objectives. This approach is encumbered by the use of a number of disparate hardware and software tools, spreadsheets and documents to grasp the concept of the network design and operation. In the case of NASA's space communication networks, the networks are geographically distributed, as are their subject matter experts, so the team is challenged to create a common language and tools to produce its products. Using Model Based Systems Engineering methods and tools allows for a unified representation of the system in a model that enables a highly interrelated level of detail. To date, the Program System Engineering (PSE) team has been able to model each network from their top-level operational activities and system functions down to the atomic level through relational modeling decomposition. These models allow for a better understanding of the relationships between NASA's stakeholders, internal organizations, and impacts to all related entities due to integration and sustainment of existing systems. Understanding the existing systems is essential to accurate and detailed study of the integration options being considered. In this paper, we identify the challenges the PSE team faced in its quest to unify complex legacy space communications networks and their operational processes. We describe the initial approaches undertaken and the evolution toward model based system engineering applied to produce Space Communication and Navigation (SCaN) PSE products. We will demonstrate the practice of Model Based System Engineering applied to integrating space communication networks and the summary of its...

  2. Abductive networks applied to electronic combat

    Science.gov (United States)

    Montgomery, Gerard J.; Hess, Paul; Hwang, Jong S.

    1990-08-01

A practical approach to dealing with combinatorial decision problems and uncertainties associated with electronic combat through the use of networks of high-level functional elements called abductive networks is presented. It describes the application of the Abductory Induction Mechanism (AIM™), a supervised inductive learning tool for synthesizing polynomial abductive networks, to the electronic combat problem domain. From databases of historical, expert-generated or simulated combat engagements, AIM can often induce compact and robust network models for making effective real-time electronic combat decisions despite significant uncertainties or a combinatorial explosion of possible situations. The feasibility of applying abductive networks to realize advanced combat decision aiding capabilities was demonstrated by applying AIM to a set of electronic combat simulations. The networks synthesized by AIM generated accurate assessments of the intent, lethality and overall risk associated with a variety of simulated threats, and produced reasonable estimates of the expected effectiveness of a group of electronic countermeasures for a large number of simulated combat scenarios. This paper presents the application of abductive networks to electronic combat, summarizes the results of experiments performed using AIM, discusses the benefits and limitations of applying abductive networks to electronic combat, and indicates why abductive networks can often result in capabilities not attainable using alternative approaches.

  3. A Methodology for Physical Interconnection Decisions of Next Generation Transport Networks

    DEFF Research Database (Denmark)

    Gutierrez Lopez, Jose Manuel; Riaz, M. Tahir; Madsen, Ole Brun

    2011-01-01

The physical interconnection for optical transport networks has critical relevance in the overall network performance and deployment costs. As telecommunication services and technologies evolve, the provisioning of higher capacity and reliability levels is becoming essential for the proper development of Next Generation Networks. Currently, there is a lack of specific procedures that describe the basic guidelines to design such networks better than "best possible performance for the lowest investment". Therefore, research from different points of view will allow a broader space of possibilities when designing the physical network interconnection. This paper develops and presents a methodology to deal with aspects related to the interconnection problem of optical transport networks. The methodology is presented as independent puzzle pieces, covering diverse topics going from...

  4. System-Level Design Methodologies for Networked Multiprocessor Systems-on-Chip

    DEFF Research Database (Denmark)

    Virk, Kashif Munir

    2008-01-01

... is the first such attempt in the published literature. The second part of the thesis deals with the issues related to the development of system-level design methodologies for networked multiprocessor systems-on-chip at various levels of design abstraction, with special focus on the modeling and design at the system level. The multiprocessor modeling framework is then extended to include models of networked multiprocessor systems-on-chip, which are then employed to model wireless sensor networks both at the sensor node level and at the wireless network level. In the third and final part, the thesis ... to the transaction-level model. The thesis as a whole makes contributions by describing a design methodology for networked multiprocessor embedded systems at three layers of abstraction, from the system level through the transaction level to the cycle-accurate level, as well as demonstrating it practically by implementing...

  5. Applying Physical-Layer Network Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Liew SoungChang

    2010-01-01

Full Text Available A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes, simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). This paper shows that the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to "straightforward" network coding, which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for an equivalent coding operation. And in doing so, PNC can potentially achieve 100% and 50% throughput increases compared with traditional transmission and straightforward network coding, respectively, in 1D regular linear networks with multiple random flows. The throughput improvements are even larger in 2D regular networks: 200% and 100%, respectively.
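The PNC mapping described above can be sketched for a noiseless BPSK two-way relay: the relay maps the superimposed signal directly to the XOR of the two source bits, and each end node then recovers the other's bit using its own. The function and variable names are illustrative, not from the paper:

```python
def pnc_relay(bit_a, bit_b):
    """Two-way relay with physical-layer network coding: map the superimposed
    BPSK sum (in {-2, 0, +2}) straight to XOR(bit_a, bit_b), instead of
    decoding each stream separately."""
    bpsk = lambda bit: 1 if bit else -1
    superimposed = bpsk(bit_a) + bpsk(bit_b)  # what the relay's antenna sees
    return 0 if abs(superimposed) == 2 else 1  # |sum| = 2 -> bits equal -> XOR 0

# End node A recovers B's bit by XORing the relayed symbol with its own bit
a, b = 1, 0
relayed = pnc_relay(a, b)
recovered_b = relayed ^ a
```

Because the relay never needs the individual bit streams, the two end nodes can transmit simultaneously, which is the source of the throughput gains quoted in the abstract.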

  6. Methodology for neural networks prototyping. Application to traffic control

    Energy Technology Data Exchange (ETDEWEB)

    Belegan, I.C.

    1998-07-01

The work described in this report was carried out in the context of the European project ASTORIA (Advanced Simulation Toolbox for Real-World Industrial Application in Passenger Management and Adaptive Control), and concerns the development of an advanced toolbox for complex transportation systems. Our work was focused on the methodology for prototyping a set of neural networks corresponding to specific strategies for traffic control and congestion management. The tool used for prototyping is SNNS (Stuttgart Neural Network Simulator), developed at the University of Stuttgart, Institute for Parallel and Distributed High Performance Systems, and the real data from the field were provided by ZELT. This report is structured into six parts. The introduction gives some insights about traffic control and its approaches. The second chapter discusses the various existing control strategies. The third chapter is an introduction to the field of neural networks. The data analysis and pre-processing is described in the fourth chapter. In the fifth chapter, the methodology for prototyping the neural networks is presented. Finally, conclusions and further work are presented. (author) 14 refs.

  7. INTERPERSONAL COMMUNICATION AND METHODOLOGIES OF INNOVATION. A HEURISTIC EXPERIENCE IN THE CLASSROOM APPLYING SEMANTIC NETWORKS

    Directory of Open Access Journals (Sweden)

    José Manuel Corujeira Gómez

    2014-10-01

Full Text Available The current definition of creativity gives importance to interpersonal communication in innovation strategies, and allows us to question the communication skills of professionals (innovation partners) in the practice sessions in which they are applied. This text presents preliminary results on the application of some of these tactics with a group of students. We tested structural/procedural descriptions of hypothetical effects of communication, using indicators proposed by network theory, in terms of the topologies produced by the group. Although the results are not conclusive, we hope this paper contributes to the investigation of creativity in innovation sessions.

  8. Neural Networks Methodology and Applications

    CERN Document Server

    Dreyfus, Gérard

    2005-01-01

    Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data, in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and seamlessly edited to present a coherent and comprehensive, yet not redundant, practically-oriented...

  9. Team building: conceptual, methodological, and applied considerations.

    Science.gov (United States)

    Beauchamp, Mark R; McEwan, Desmond; Waldhauser, Katrina J

    2017-08-01

    Team building has been identified as an important method of improving the psychological climate in which teams operate, as well as overall team functioning. Within the context of sports, team building interventions have consistently been found to result in improvements in team effectiveness. In this paper we review the extant literature on team building in sport, and address a range of conceptual, methodological, and applied considerations that have the potential to advance theory, research, and applied intervention initiatives within the field. This involves expanding the scope of team building strategies that have, to date, primarily focused on developing group cohesion. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. TEACHING AND LEARNING METHODOLOGIES SUPPORTED BY ICT APPLIED IN COMPUTER SCIENCE

    Directory of Open Access Journals (Sweden)

    Jose CAPACHO

    2016-04-01

    Full Text Available The main objective of this paper is to show a set of new methodologies applied in the teaching of Computer Science using ICT. The methodologies are framed in the conceptual basis of the following sciences: Psychology, Education and Computer Science. The theoretical framework of the research is supported by Behavioral Theory, Gestalt Theory, Genetic-Cognitive Psychology Theory and Dialectics Psychology. Based on the theoretical framework, the following methodologies were developed: Game Theory, Constructivist Approach, Personalized Teaching, Problem Solving, Cooperative-Collaborative Learning, and Learning Projects using ICT. These methodologies were applied to the teaching-learning process during the Algorithms and Complexity (A&C) course, which belongs to the area of Computer Science. The course develops the concepts of Computers, Complexity and Intractability, Recurrence Equations, Divide and Conquer, Greedy Algorithms, Dynamic Programming, Shortest Path Problem and Graph Theory. The main value of the research is the theoretical support of the methodologies and their application supported by ICT using learning objects. The aforementioned course was built on the Blackboard platform, evaluating the operation of the methodologies. The results of the evaluation are presented for each of them, showing the learning outcomes achieved by students, which verifies that the methodologies are functional.

  11. A Network Based Methodology to Reveal Patterns in Knowledge Transfer

    Directory of Open Access Journals (Sweden)

    Orlando López-Cruz

    2015-12-01

    Full Text Available This paper motivates, presents and demonstrates in use a methodology based on complex network analysis to support research aimed at identifying sources in the process of knowledge transfer at the interorganizational level. The importance of this methodology is that it states a unified model to reveal knowledge-sharing patterns and to compare results from multiple studies on data from different periods of time and different sectors of the economy. The methodology does not address the underlying statistical processes; for those, national statistics departments (NSDs) provide documents and tools on their websites. Instead, this proposal provides a guide for modelling inferences drawn from the processed data, revealing links between the sources and recipients of the knowledge being transferred, where the recipient identifies the source as the main input to new knowledge creation. Some national statistics departments set as the objective of these surveys the characterization of innovation dynamics in firms and the analysis of the use of public support instruments, and from this characterization scholars conduct different studies. Measures of the dimensions of the network composed of manufacturing firms and other organizations form the basis for inquiring into the structure that emerges from taking ideas from other organizations to incept innovations. These two sets of actors form a two-mode network: the link between two actors (network nodes) runs from the one acting as the source of an idea to the one acting as the destination, i.e. from organizations, or events organized by organizations, that "provide" ideas to a group of firms. The resulting demonstrated design satisfies the objective of being a methodological model for identifying the sources of knowledge effectively used in innovation.
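
    The two-mode structure described in this abstract can be illustrated with a one-mode projection: linking two knowledge sources whenever at least one recipient firm draws on both. The function and toy data below are illustrative, not taken from the paper.

```python
from itertools import combinations

def project_sources(two_mode):
    """One-mode projection of a two-mode (firm -> sources) network:
    two knowledge sources are linked whenever at least one firm
    draws ideas from both. Returns a set of undirected edges."""
    edges = set()
    for sources in two_mode.values():
        for a, b in combinations(sorted(sources), 2):
            edges.add((a, b))
    return edges

# hypothetical survey rows: firm -> organizations it took ideas from
ties = {"firm1": {"university", "supplier"}, "firm2": {"university"}}
```

    Centrality measures on the projected graph then point at the organizations that act as recurrent sources across many firms.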

  12. Network meta-analysis-highly attractive but more methodological research is needed

    Directory of Open Access Journals (Sweden)

    Singh Sonal

    2011-06-01

    Full Text Available Network meta-analysis, in the context of a systematic review, is a meta-analysis in which multiple treatments (that is, three or more) are being compared using both direct comparisons of interventions within randomized controlled trials and indirect comparisons across trials based on a common comparator. To ensure the validity of findings from network meta-analyses, the systematic review must be designed rigorously and conducted carefully. Aspects of designing and conducting a systematic review for network meta-analysis include defining the review question, specifying eligibility criteria, searching for and selecting studies, assessing risk of bias and quality of evidence, conducting the network meta-analysis, and interpreting and reporting findings. This commentary summarizes the methodological challenges and research opportunities for network meta-analysis relevant to each aspect of the systematic review process, based on discussions at a network meta-analysis methodology meeting we hosted in May 2010 at the Johns Hopkins Bloomberg School of Public Health. Since this commentary reflects the discussion at that meeting, it is not intended to provide an overview of the field.
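
    The indirect comparison through a common comparator that network meta-analysis builds on can be sketched as the Bucher adjusted indirect comparison. The function name and the numbers are illustrative, not from the commentary.

```python
import math

def indirect_comparison(d_ac, se_ac, d_bc, se_bc):
    """Adjusted indirect comparison of treatments A and B through a
    common comparator C: on an additive scale (e.g. log odds ratios),
    d_AB = d_AC - d_BC, and variances of independent trials add."""
    d_ab = d_ac - d_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)
    return d_ab, se_ab

# illustrative trial summaries (log odds ratios vs. a shared placebo C)
d, se = indirect_comparison(-0.5, 0.2, -0.2, 0.25)
```

    Note how the standard error of the indirect estimate is necessarily larger than either direct one, which is one reason the commentary stresses rigorous design before combining direct and indirect evidence.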

  13. Technologies, Methodologies and Challenges in Network Intrusion Detection and Prevention Systems

    Directory of Open Access Journals (Sweden)

    Nicoleta STANCIU

    2013-01-01

    Full Text Available This paper presents an overview of the technologies and the methodologies used in Network Intrusion Detection and Prevention Systems (NIDPS). Intrusion Detection and Prevention System (IDPS) technologies are differentiated by the types of events that IDPSs can recognize, the types of devices that IDPSs monitor, and their activity. NIDPSs monitor and analyze streams of network packets in order to detect security incidents. The main methodology used by NIDPSs is protocol analysis, which requires good knowledge of the theory of the main protocols: their definition and how each protocol works.
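
    As a minimal illustration of protocol analysis (checking traffic against the protocol's grammar rather than against attack signatures), a NIDPS component might flag an HTTP request line that cannot be produced by a conforming client. This is a toy sketch, not code from any particular IDPS.

```python
# Methods defined for HTTP/1.x; anything else in the method position
# is a grammar violation worth flagging for inspection.
VALID_METHODS = {"GET", "HEAD", "POST", "PUT", "DELETE",
                 "OPTIONS", "TRACE", "CONNECT"}

def http_request_line_anomaly(line):
    """Protocol-analysis check: an HTTP/1.x request line must be
    'METHOD SP request-target SP HTTP-version'. Returns True when
    the line violates that grammar."""
    parts = line.split(" ")
    if len(parts) != 3:
        return True
    method, _target, version = parts
    return method not in VALID_METHODS or not version.startswith("HTTP/1.")
```

    Real protocol analysis goes far deeper (state machines, field lengths, encodings), but the principle is the same: model what the protocol allows and treat deviations as events.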

  14. How can social network analysis contribute to social behavior research in applied ethology?

    Science.gov (United States)

    Makagon, Maja M; McCowan, Brenda; Mench, Joy A

    2012-05-01

    Social network analysis is increasingly used by behavioral ecologists and primatologists to describe the patterns and quality of interactions among individuals. We provide an overview of this methodology, with examples illustrating how it can be used to study social behavior in applied contexts. Like most kinds of social interaction analyses, social network analysis provides information about direct relationships (e.g. dominant-subordinate relationships). However, it also generates a more global model of social organization that determines how individual patterns of social interaction relate to individual and group characteristics. A particular strength of this approach is that it provides standardized mathematical methods for calculating metrics of sociality across levels of social organization, from the population and group levels to the individual level. At the group level these metrics can be used to track changes in social network structures over time, evaluate the effect of the environment on social network structure, or compare social structures across groups, populations or species. At the individual level, the metrics allow quantification of the heterogeneity of social experience within groups and identification of individuals who may play especially important roles in maintaining social stability or information flow throughout the network.
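
    The individual-level and group-level metrics this abstract contrasts can both be computed from the same adjacency structure. The herd data below is hypothetical; the two functions are standard social network metrics, not the authors' code.

```python
def degree_centrality(adj):
    """Individual-level metric: the fraction of possible partners each
    animal interacts with, for an undirected network given as
    {node: set(partners)}."""
    n = len(adj)
    return {v: len(p) / (n - 1) for v, p in adj.items()}

def density(adj):
    """Group-level metric: the fraction of all possible dyads that
    actually interact."""
    n = len(adj)
    edges = sum(len(p) for p in adj.values()) / 2
    return 2 * edges / (n * (n - 1))

# hypothetical affiliation data for a three-animal group
herd = {"a": {"b", "c"}, "b": {"a"}, "c": {"a"}}
```

    Tracking density over time gives the group-level trend, while the per-animal centralities expose the heterogeneity of social experience the abstract mentions.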

  15. Design Methodology of a Sensor Network Architecture Supporting Urgent Information and Its Evaluation

    Science.gov (United States)

    Kawai, Tetsuya; Wakamiya, Naoki; Murata, Masayuki

    Wireless sensor networks are expected to become an important social infrastructure which helps make our lives safe, secure, and comfortable. In this paper, we propose a design methodology for an architecture for fast and reliable transmission of urgent information in wireless sensor networks. In this methodology, instead of establishing a single complicated monolithic mechanism, several simple and fully distributed control mechanisms, which function at different spatial and temporal levels, are incorporated on each node. These mechanisms work autonomously and independently, responding to the surrounding situation. We also show an example of a network architecture designed following the methodology. We evaluated the performance of the architecture by extensive simulation and practical experiments, and our claim was supported by the results of these experiments.
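
    One simple per-node mechanism consistent with this design philosophy is strict priority forwarding: urgent packets preempt routine ones at every hop, with FIFO order kept within each class. The class below is an illustrative sketch under that assumption, not the authors' architecture.

```python
import heapq

class NodeBuffer:
    """Toy per-node transmit buffer: urgent packets are always
    dequeued before routine ones; a sequence counter preserves
    FIFO order within each priority class."""
    URGENT, ROUTINE = 0, 1

    def __init__(self):
        self._heap = []
        self._seq = 0          # tie-breaker for stable FIFO order

    def push(self, packet, urgent=False):
        priority = self.URGENT if urgent else self.ROUTINE
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

buf = NodeBuffer()
buf.push("temperature reading")
buf.push("fire alarm", urgent=True)
```

    Because each node applies the rule locally and independently, no global coordination is needed, which matches the fully distributed spirit of the methodology.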

  16. Digital processing methodology applied to exploring of radiological images

    International Nuclear Information System (INIS)

    Oliveira, Cristiane de Queiroz

    2004-01-01

    In this work, digital image processing is applied as an automatic computational method for exploring radiological images. An automatic routine was developed, from segmentation to post-processing techniques, for radiological images acquired from an arrangement consisting of an X-ray tube, a molybdenum target and filter of 0.4 mm and 0.03 mm, respectively, and a CCD detector. The efficiency of the methodology developed is shown in this work through a case study, where internal injuries in mangoes are automatically detected and monitored. This methodology is a possible tool to be introduced into the post-harvest process in packing houses. A dichotomic test was applied to evaluate the efficiency of the method. The results show 87.7% correct diagnoses and 12.3% failures, with a sensitivity of 93% and a specificity of 80%. (author)
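
    The reported rates follow from a standard confusion-matrix calculation for a dichotomic test. The counts below are hypothetical numbers chosen to reproduce the stated percentages, not the study's actual sample sizes.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity and specificity of a dichotomic (yes/no) test
    from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # injured fruit correctly flagged
    specificity = tn / (tn + fp)   # sound fruit correctly cleared
    return sensitivity, specificity

# illustrative counts consistent with the abstract's 93% / 80% figures
sens, spec = diagnostic_metrics(tp=93, fn=7, tn=80, fp=20)
```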

  17. Teaching and Learning Methodologies Supported by ICT Applied in Computer Science

    Science.gov (United States)

    Capacho, Jose

    2016-01-01

    The main objective of this paper is to show a set of new methodologies applied in the teaching of Computer Science using ICT. The methodologies are framed in the conceptual basis of the following sciences: Psychology, Education and Computer Science. The theoretical framework of the research is supported by Behavioral Theory, Gestalt Theory.…

  18. Tools and methodologies applied to eLearning

    OpenAIRE

    Seoane Pardo, Antonio M.; García-Peñalvo, Francisco José

    2006-01-01

    The aim of this paper is to show how eLearning technologies and methodologies can be useful for teaching and researching Logic. Firstly, a definition and explanation of eLearning and its main modalities will be given. Then, the most important elements and tools of eLearning activities will be shown. Finally, we will give three suggestions to improve the learning experience with eLearning applied to Logic.

  19. Advances in Artificial Neural Networks - Methodological Development and Application

    Science.gov (United States)

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...

  20. Teaching methodology for modeling reference evapotranspiration with artificial neural networks

    OpenAIRE

    Martí, Pau; Pulido Calvo, Inmaculada; Gutiérrez Estrada, Juan Carlos

    2015-01-01

    [EN] Artificial neural networks are a robust alternative to conventional models for estimating different targets in irrigation engineering, among others reference evapotranspiration, a key variable for estimating crop water requirements. This paper presents a didactic methodology for introducing students to the application of artificial neural networks for reference evapotranspiration estimation using MatLab©. Apart from learning a specific application of this software wi...

  1. 3-D SURVEY APPLIED TO INDUSTRIAL ARCHAEOLOGY BY TLS METHODOLOGY

    Directory of Open Access Journals (Sweden)

    M. Monego

    2017-05-01

    Full Text Available This work describes the three-dimensional survey of the “Ex Stazione Frigorifera Specializzata”: initially used for agricultural storage, over the years it was allocated to different uses until complete neglect. The historical relevance and the architectural heritage that this building represents have brought about the start of a recent renovation and functional restoration project. In this regard a global 3-D survey was necessary, based on the application and integration of different geomatic methodologies (mainly terrestrial laser scanning, classical topography, and GNSS). The point clouds were acquired using different laser scanners, with time-of-flight (TOF) and phase-shift technologies for the distance measurements. The topographic reference network, needed to align the scans in the same system, was measured with a total station. For the complete survey of the building, 122 scans were acquired and 346 targets were measured from 79 vertices of the reference network. Moreover, 3 vertices were measured with GNSS methodology in order to georeference the network. For the detailed survey of the machine room, 14 scans with 23 targets were executed. The 3-D global model of the building has less than one centimeter of alignment error (for the machine room the alignment error is not greater than 6 mm) and was used to extract products such as longitudinal and transversal sections, plans, architectural perspectives, and virtual scans. A complete spatial knowledge of the building is obtained from the processed data, providing basic information for the restoration project, structural analysis, and industrial and architectural heritage valorization.

  2. Spatiotemporal mapping of interictal spike propagation: a novel methodology applied to pediatric intracranial EEG recordings.

    Directory of Open Access Journals (Sweden)

    Samuel Tomlinson

    2016-12-01

    Full Text Available Synchronized cortical activity is implicated in both normative cognitive functioning and many neurological disorders. For epilepsy patients with intractable seizures, irregular patterns of synchronization within the epileptogenic zone (EZ) are believed to provide the network substrate through which seizures initiate and propagate. Mapping the EZ prior to epilepsy surgery is critical for detecting seizure networks in order to achieve post-surgical seizure control. However, automated techniques for characterizing epileptic networks have yet to gain traction in the clinical setting. Recent advances in signal processing and spike detection have made it possible to examine the spatiotemporal propagation of interictal spike discharges across the epileptic cortex. In this study, we present a novel methodology for detecting, extracting, and visualizing spike propagation and demonstrate its potential utility as a biomarker for the epileptogenic zone. Eighteen pre-surgical intracranial EEG recordings were obtained from pediatric patients ultimately experiencing favorable (i.e., seizure-free, n = 9) or unfavorable (i.e., seizure-persistent, n = 9) surgical outcomes. Novel algorithms were applied to extract multi-channel spike discharges and visualize their spatiotemporal propagation. Quantitative analysis of spike propagation was performed using trajectory clustering and spatial autocorrelation techniques. Comparison of interictal propagation patterns revealed an increase in trajectory organization (i.e., spatial autocorrelation) among Sz-Free patients compared to Sz-Persist patients. The pathophysiological basis and clinical implications of these findings are considered.
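
    Spatial autocorrelation of the kind used here to quantify trajectory organization is commonly measured with Moran's I. The pure-Python version below, with a toy line of four sites and unit neighbour weights, illustrates the statistic; it is not the study's algorithm or electrode layout.

```python
def morans_i(values, weights):
    """Moran's I spatial autocorrelation.
    values:  list of per-site measurements
    weights: dict {(i, j): w_ij} over ordered site pairs, i != j.
    Positive values indicate that neighbouring sites carry
    similar measurements (spatial organization)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_total = sum(weights.values())
    num = sum(w * dev[i] * dev[j] for (i, j), w in weights.items())
    den = sum(d * d for d in dev)
    return (n / w_total) * (num / den)

# 4 sites on a line, symmetric unit weights between neighbours
w = {(0, 1): 1, (1, 0): 1, (1, 2): 1, (2, 1): 1, (2, 3): 1, (3, 2): 1}
```

    Applied to spike-propagation maps, a higher I means spatially clustered (organized) trajectories, the quantity the abstract reports as elevated in seizure-free patients.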

  3. Aggregated Representation of Distribution Networks for Large-Scale Transmission Network Simulations

    DEFF Research Database (Denmark)

    Göksu, Ömer; Altin, Müfit; Sørensen, Poul Ejnar

    2014-01-01

    As a common practice in large-scale transmission network analysis, distribution networks have been represented as aggregated loads. However, with the increasing share of distributed generation, especially wind and solar power, in distribution networks, it became necessary to include the distributed generation within those analyses. In this paper a practical methodology to obtain the aggregated behaviour of the distributed generation is proposed. The methodology, which is based on the use of the IEC standard wind turbine models, is applied on a benchmark distribution network via simulations.

  4. The Methodology of Investigation of Intercultural Rhetoric applied to SFL

    Directory of Open Access Journals (Sweden)

    David Heredero Zorzo

    2016-12-01

    Full Text Available Intercultural rhetoric is a discipline which studies written discourse among individuals from different cultures. It is a very strong field in the Anglo-Saxon scientific world, especially with reference to English as a second language, but it is not as prominent in Spanish as a foreign language (SFL). Intercultural rhetoric has provided applied linguistics with important methods of investigation, and applying these to SFL could introduce interesting new perspectives on the subject. In this paper, we present the methodology of investigation of intercultural rhetoric, which is based on the use of different types of corpora for analysing genres and follows the precepts of tertium comparationis. In addition, it uses techniques of ethnographic investigation. The purpose of this paper is to show the applications of this methodology to SFL and to outline future investigations in the same field.

  5. Applied Ontology Engineering in Cloud Services, Networks and Management Systems

    CERN Document Server

    Serrano Orozco, J Martín

    2012-01-01

    Metadata standards in today’s ICT sector are proliferating at unprecedented levels, while automated information management systems collect and process exponentially increasing quantities of data. With interoperability and knowledge exchange identified as a core challenge in the sector, this book examines the role ontology engineering can play in providing solutions to the problems of information interoperability and linked data. At the same time as introducing basic concepts of ontology engineering, the book discusses methodological approaches to formal representation of data and information models, thus facilitating information interoperability between heterogeneous, complex and distributed communication systems. In doing so, the text advocates the advantages of using ontology engineering in telecommunications systems. In addition, it offers a wealth of guidance and best-practice techniques for instances in which ontology engineering is applied in cloud services, computer networks and management systems. ...

  6. Distributed computing methodology for training neural networks in an image-guided diagnostic application.

    Science.gov (United States)

    Plagianakos, V P; Magoulas, G D; Vrahatis, M N

    2006-03-01

    Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
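
    The core of the data-parallel scheme, partial gradients computed per partition and summed by a master, can be sketched with a one-parameter linear model. This is an illustrative reduction of the idea, not the paper's MLP or PVM code.

```python
def partial_gradient(w, shard):
    """MSE gradient of the model y ≈ w * x, evaluated on one
    worker's shard of the training set."""
    return sum(2 * (w * x - y) * x for x, y in shard)

def distributed_gradient(w, shards):
    """Master step: because the error is a sum over examples, the
    full-batch gradient is the sum of the per-shard gradients, so
    the shards can be evaluated on different machines in parallel."""
    return sum(partial_gradient(w, s) for s in shards)

# toy training set y = 2x, split across two hypothetical workers
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
shards = [data[:1], data[1:]]
```

    The same additivity is what lets the paper mix machines of varied architectures with low synchronization: only the scalar partial results cross the network.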

  7. Applying graphs and complex networks to football metric interpretation.

    Science.gov (United States)

    Arriaza-Ardiles, E; Martín-González, J M; Zuniga, M D; Sánchez-Flores, J; de Saa, Y; García-Manso, J M

    2018-02-01

    This work presents a methodology for analysing the interactions between players in a football team from the point of view of graph theory and complex networks. We model the complex network of passing interactions between players of the same team in 32 official matches of the Liga de Fútbol Profesional (Spain), using a passing/reception graph. This methodology allows us to understand the play structure of the team by analysing the offensive phases of game-play. We utilise two different strategies for characterising the contribution of the players to the team: the clustering coefficient, and centrality metrics (closeness and betweenness). We show the application of this methodology by analysing the performance of a professional Spanish team according to these metrics and the distribution of passing/reception in the field. Keeping in mind the dynamic nature of collective sports, in the future we will incorporate metrics which allow us to analyse the performance of the team according to the circumstances of game-play and to different contextual variables, such as the utilisation of field space, time, and the ball, in specific tactical situations. Copyright © 2017 Elsevier B.V. All rights reserved.
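
    The clustering coefficient used to characterise a player's contribution can be computed directly from an undirected passing graph: it is the fraction of a player's passing partners who also exchange passes with each other. The toy line-up below is illustrative, not the Liga data.

```python
def local_clustering(adj, player):
    """Local clustering coefficient of `player` in an undirected
    passing graph adj = {player: set(partners)}: realised links
    among the player's partners over all possible such links."""
    partners = adj[player]
    k = len(partners)
    if k < 2:
        return 0.0
    links = sum(1 for u in partners for v in partners
                if u < v and v in adj[u])
    return 2 * links / (k * (k - 1))

# hypothetical four-player passing structure
passing = {
    "keeper":   {"defender"},
    "defender": {"keeper", "mid", "wing"},
    "mid":      {"defender", "wing"},
    "wing":     {"defender", "mid"},
}
```

    A high coefficient marks players embedded in passing triangles; combined with closeness and betweenness it separates hub players from peripheral ones.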

  8. Adaption of the temporal correlation coefficient calculation for temporal networks (applied to a real-world pig trade network).

    Science.gov (United States)

    Büttner, Kathrin; Salau, Jennifer; Krieter, Joachim

    2016-01-01

    The average topological overlap of two graphs of two consecutive time steps measures the amount of changes in the edge configuration between the two snapshots. This value has to be zero if the edge configuration changes completely and one if the two consecutive graphs are identical. Current methods depend on the number of nodes in the network or on the maximal number of connected nodes in the consecutive time steps. In the first case, this methodology breaks down if there are nodes with no edges. In the second case, it fails if the maximal number of active nodes is larger than the maximal number of connected nodes. In the following, an adaption of the calculation of the temporal correlation coefficient and of the topological overlap of the graph between two consecutive time steps is presented, which shows the expected behaviour mentioned above. The newly proposed adaption uses the maximal number of active nodes, i.e. the number of nodes with at least one edge, for the calculation of the topological overlap. The three methods were compared with the help of vivid example networks to reveal the differences between the proposed notations. Furthermore, these three calculation methods were applied to a real-world network of animal movements in order to detect influences of the network structure on the outcome of the different methods.
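
    The adapted coefficient described above can be sketched as follows: a per-node topological overlap between consecutive snapshots, averaged over the maximal number of active nodes (nodes with at least one edge in either snapshot). Data structures and names are illustrative, not the authors' code.

```python
from math import sqrt

def node_overlap(n1, n2):
    """Topological overlap of one node's neighbourhoods in two
    consecutive snapshots: shared partners over the geometric mean
    of its degrees (0 when the node is isolated in either snapshot)."""
    if not n1 or not n2:
        return 0.0
    return len(n1 & n2) / sqrt(len(n1) * len(n2))

def temporal_correlation_step(g1, g2):
    """Adapted overlap between snapshots g1, g2 ({node: set(partners)}):
    the average is taken over active nodes only, so nodes with no
    edges do not deflate the coefficient."""
    active = {v for v in set(g1) | set(g2) if g1.get(v) or g2.get(v)}
    if not active:
        return 0.0
    return sum(node_overlap(g1.get(v, set()), g2.get(v, set()))
               for v in active) / len(active)
```

    As the abstract requires, identical consecutive snapshots give 1 and a completely changed edge configuration gives 0.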

  9. Managing Complex Battlespace Environments Using Attack the Network Methodologies

    DEFF Research Database (Denmark)

    Mitchell, Dr. William L.

    This paper examines the last 8 years of development and application of Attack the Network (AtN) intelligence methodologies for creating shared situational understanding of complex battlespace environments and for developing deliberate targeting frameworks. It presents a short history of their development and shows how they are integrated into operational planning through strategies of deliberate targeting for modern operations. The paper draws on experience and case studies from Iraq, Syria, and Afghanistan and offers some lessons learned, as well as insight into the future of these methodologies, including their possible application at the national security level for managing longer strategic endeavors.

  10. Applying information network analysis to fire-prone landscapes: implications for community resilience

    Directory of Open Access Journals (Sweden)

    Derric B. Jacobs

    2017-03-01

    Full Text Available Resilient communities promote trust, have well-developed networks, and can adapt to change. For rural communities in fire-prone landscapes, current resilience strategies may prove insufficient in light of increasing wildfire risks due to climate change. It is argued that, given the complexity of climate change, adaptations are best addressed at local levels where specific social, cultural, political, and economic conditions are matched with local risks and opportunities. Despite the importance of social networks as key attributes of community resilience, research using social network analysis on coupled human and natural systems is scarce. Furthermore, the extent to which local communities in fire-prone areas understand climate change risks, accept the likelihood of potential changes, and have the capacity to develop collaborative mitigation strategies is underexamined, yet these factors are imperative to community resiliency. We apply a social network framework to examine information networks that affect perceptions of wildfire and climate change in Central Oregon. Data were collected using a mailed questionnaire. Analysis focused on the residents' information networks used to gain awareness of governmental activities, and on measures of community social capital. A two-mode network analysis was used to uncover information exchanges. Results suggest that the general public develops perceptions about climate change based on complex social and cultural systems rather than as patrons of scientific inquiry and understanding. It appears that perceptions about climate change itself may not be the limiting factor in these communities' adaptive capacity, but rather how they perceive local risks. We provide a novel methodological approach to understanding rural community adaptation and resilience in fire-prone landscapes and offer a framework for future studies.

  11. Second Law of Thermodynamics Applied to Metabolic Networks

    Science.gov (United States)

    Nigam, R.; Liang, S.

    2003-01-01

    We present a simple algorithm, based on linear programming, that combines Kirchhoff's flux and potential laws and applies them to metabolic networks to predict thermodynamically feasible reaction fluxes. These laws represent mass conservation and energy feasibility, and are widely used in electrical circuit analysis. Formulating Kirchhoff's potential law around a reaction loop in terms of the null space of the stoichiometric matrix leads to a simple representation of the law of entropy that can be readily incorporated into traditional flux balance analysis without resorting to non-linear optimization. Our technique is new in that it can easily check the fluxes obtained by applying flux balance analysis for thermodynamic feasibility and modify them if they are infeasible, so that they satisfy the law of entropy. We illustrate our method by applying it to the network dealing with the central metabolism of Escherichia coli. Due to its simplicity, this algorithm will be useful in studying large-scale complex metabolic networks in the cells of different organisms.
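
    The loop constraint that the null-space formulation encodes can be illustrated without the linear-programming machinery: net flux around an internal cycle (a null-space vector of the stoichiometric matrix) is thermodynamically infeasible, just as Kirchhoff's laws forbid current circulating in a loop with no source. The check below is a toy sketch under that reading, not the paper's algorithm.

```python
def violates_loop_law(cycle, fluxes):
    """cycle:  {reaction: coefficient} describing an internal cycle,
               i.e. a null-space vector of the stoichiometric matrix
    fluxes:    {reaction: flux value}
    A flux pattern that drives every reaction in the cycle's direction
    (all coefficient*flux products of one strict sign) carries net flux
    around the loop and is thermodynamically infeasible."""
    products = [c * fluxes[r] for r, c in cycle.items()]
    return all(p > 0 for p in products) or all(p < 0 for p in products)

cycle = {"r1": 1, "r2": 1, "r3": -1}   # hypothetical futile cycle
```

    A flux-balance solution that trips this check would then be repaired, e.g. by re-solving with the offending loop flux constrained, which is the role the paper assigns to its LP step.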

  12. A replication and methodological critique of the study "Evaluating drug trafficking on the Tor Network".

    Science.gov (United States)

    Munksgaard, Rasmus; Demant, Jakob; Branwen, Gwern

    2016-09-01

    The development of cryptomarkets has gained increasing attention from academics, including a growing scientific literature on the distribution of illegal goods using cryptomarkets. Dolliver's 2015 article "Evaluating drug trafficking on the Tor Network: Silk Road 2, the Sequel" addresses this theme by evaluating drug trafficking on one of the most well-known cryptomarkets, Silk Road 2.0. The research on cryptomarkets in general, and Dolliver's article in particular, poses a number of new methodological questions. This commentary is structured around a replication of Dolliver's original study. The replication is not based on Dolliver's original dataset, but on a second dataset collected by applying the same methodology. We have found that the results produced by Dolliver differ greatly from our replicated study. While a margin of error is to be expected, the inconsistencies we found are too great to attribute to anything other than methodological issues. The analysis and conclusions drawn from studies using these methods are promising and insightful. However, based on the replication of Dolliver's study, we suggest that researchers using these methodologies make their datasets available to other researchers, and that methodology and dataset metrics (e.g. number of downloaded pages, error logs) are described thoroughly in the context of webometrics and web crawling. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. A social network perspective on teacher collaboration in schools: Theory, methodology, and applications

    NARCIS (Netherlands)

    Moolenaar, Nienke

    2012-01-01

    An emerging trend in educational research is the use of social network theory and methodology to understand how teacher collaboration can support or constrain teaching, learning, and educational change. This article provides a critical synthesis of educational literature on school social networks.

  14. Current trends in Bayesian methodology with applications

    CERN Document Server

    Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia

    2015-01-01

    Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics.Each chapter is self-contained and focuses on

  15. A Methodological Approach to Assess the Impact of Smarting Action on Electricity Transmission and Distribution Networks Related to Europe 2020 Targets

    Directory of Open Access Journals (Sweden)

    Andrea Bonfiglio

    2017-01-01

    Full Text Available The achievement of the so-called 2020 targets requested by the European Union (EU) has determined a significant growth of proposals of solutions and technical projects aiming at reducing CO2 emissions and increasing energy efficiency, as well as the penetration of Renewable Energy Sources (RES) in the electric network. As many of them ask for funding from the EU itself, it is necessary to define a methodology to rank them and decide which projects should be sponsored to obtain the maximum effect on the EU 2020 targets. The present paper aims at (i) defining a set of Key Performance Indicators (KPIs) to compare different proposals, (ii) proposing an analytical methodology to evaluate the defined KPIs, and (iii) evaluating the maximum impact that the considered action is capable of producing. The proposed methodology is applied to a set of possible interventions performed on a benchmark transmission network test case, in order to show that the defined indicators can be either calculated or measured and that they are useful to rank different “smarting actions”.
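    The ranking step the abstract describes can be sketched as a weighted sum of normalized KPIs. The KPI names, values and weights below are purely illustrative assumptions; the paper's actual KPI definitions and aggregation rule are not reproduced here:

```python
def rank_proposals(proposals, weights):
    """Rank project proposals by a weighted sum of min-max-normalized KPIs.

    proposals: dict name -> dict of KPI values (higher = better here);
    weights:   dict KPI name -> weight.
    """
    kpis = weights.keys()
    # Min-max normalize each KPI across proposals so scales are comparable.
    norm = {}
    for k in kpis:
        vals = [p[k] for p in proposals.values()]
        lo, hi = min(vals), max(vals)
        norm[k] = {name: (p[k] - lo) / (hi - lo) if hi > lo else 0.0
                   for name, p in proposals.items()}
    scores = {name: sum(weights[k] * norm[k][name] for k in kpis)
              for name in proposals}
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical "smarting actions" with made-up KPI values.
proposals = {
    "storage":  {"co2_reduction": 120.0, "efficiency_gain": 0.04, "res_hosting": 30.0},
    "smart_tx": {"co2_reduction": 200.0, "efficiency_gain": 0.02, "res_hosting": 50.0},
    "demand_r": {"co2_reduction": 80.0,  "efficiency_gain": 0.06, "res_hosting": 10.0},
}
weights = {"co2_reduction": 0.5, "efficiency_gain": 0.3, "res_hosting": 0.2}
ranked = rank_proposals(proposals, weights)
print(ranked)
```

    Changing the weights reorders the ranking, which is why a fixed, agreed-upon KPI set matters when deciding which projects to fund.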

  16. NeOn Methodology for Building Ontology Networks: Specification, Scheduling and Reuse

    OpenAIRE

    Suárez-Figueroa, Mari Carmen

    2010-01-01

    A new ontology development paradigm has started; its emphasis lies on the reuse and possible subsequent reengineering of knowledge resources, on the collaborative and argumentative ontology development, and on the building of ontology networks; this new trend is the opposite of building new ontologies from scratch. To help ontology developers in this new paradigm, it is important to provide strong methodological support. This thesis presents some contributions to the methodological area of...

  17. Attack Methodology Analysis: Emerging Trends in Computer-Based Attack Methodologies and Their Applicability to Control System Networks

    Energy Technology Data Exchange (ETDEWEB)

    Bri Rolston

    2005-06-01

    Threat characterization is a key component in evaluating the threat faced by control systems. Without a thorough understanding of the threat faced by critical infrastructure networks, adequate resources cannot be allocated or directed effectively to the defense of these systems. Traditional methods of threat analysis focus on identifying the capabilities and motivations of a specific attacker, assessing the value the adversary would place on targeted systems, and deploying defenses according to the threat posed by the potential adversary. However, too many effective exploits and tools are readily available to anyone with an Internet connection, minimal technical skill, and a significantly reduced motivational threshold for this approach to narrow the field of potential adversaries effectively. Understanding how hackers evaluate new IT security research and incorporate significant new ideas into their own tools provides a means of anticipating how IT systems are most likely to be attacked in the future. This research, Attack Methodology Analysis (AMA), could supply pertinent information on how to detect and stop new types of attacks. Since the exploit methodologies and attack vectors developed in the general Information Technology (IT) arena can be converted for use against control system environments, assessing areas in which cutting-edge exploit development and remediation techniques are occurring can provide significant intelligence for control system network exploitation and defense, and a means of assessing threat without identifying the specific capabilities of individual opponents. Attack Methodology Analysis begins with the study of the exploit technology and attack methodologies being developed within the black-hat and white-hat segments of the Information Technology (IT) security research community. Once a solid understanding of cutting-edge security research is established, emerging trends in attack methodology can be identified and the gap between

  18. GMDH and neural networks applied in monitoring and fault detection in sensors in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Elaine Inacio [Instituto Federal de Educacao, Ciencia e Tecnologia, Guarulhos, SP (Brazil); Pereira, Iraci Martinez; Silva, Antonio Teixeira e, E-mail: martinez@ipen.b, E-mail: teixeira@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    In this work a new monitoring and fault detection methodology was developed using the GMDH (Group Method of Data Handling) algorithm and artificial neural networks (ANNs), and applied to the IEA-R1 research reactor at IPEN. The monitoring and fault detection system was developed in two parts: the first dedicated to preprocessing information using the GMDH algorithm, and the second to processing information using ANNs. The preprocessing stage was divided in two parts. In the first, the GMDH algorithm was used to generate a better database estimate, called matrix z, which was used to train the ANNs. In the second, the GMDH was used to determine the best set of variables with which to train the ANNs, resulting in a best monitoring variable estimate. The methodology was developed and tested using five different models: one theoretical model and four models using different sets of reactor variables. After an exhaustive study dedicated to sensor monitoring, fault detection in sensors was developed by simulating faults in the sensor database using deviations of +5%, +10%, +15% and +20%. The good results obtained with the present methodology show the viability of using the GMDH algorithm to select the best input variables for the ANNs, thus making possible the use of these methods in the implementation of a new monitoring and fault detection methodology applied to sensors. (author)
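    The residual check at the core of the fault simulation described above (compare each sensor reading with the model estimate and flag deviations beyond a tolerance band) can be sketched as follows. The threshold and readings are illustrative assumptions, not values from the IPEN system:

```python
def detect_sensor_fault(measured, estimated, tolerance=0.05):
    """Flag a sensor reading whose relative deviation from the model
    estimate exceeds `tolerance` (e.g. 0.05 for a 5% band)."""
    deviation = abs(measured - estimated) / abs(estimated)
    return deviation > tolerance

# Illustrative readings: a nominal value, then simulated positive
# deviations like those injected in the abstract's fault study.
estimate = 100.0
for fault in (0.0, 0.10, 0.15, 0.20):
    reading = estimate * (1 + fault)
    print(fault, detect_sensor_fault(reading, estimate))
```

    In the paper's scheme the `estimated` value would come from the GMDH/ANN model rather than being a fixed constant; this sketch only shows the decision rule.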

  19. GMDH and neural networks applied in monitoring and fault detection in sensors in nuclear power plants

    International Nuclear Information System (INIS)

    Bueno, Elaine Inacio; Pereira, Iraci Martinez; Silva, Antonio Teixeira e

    2011-01-01

    In this work a new monitoring and fault detection methodology was developed using the GMDH (Group Method of Data Handling) algorithm and artificial neural networks (ANNs), and applied to the IEA-R1 research reactor at IPEN. The monitoring and fault detection system was developed in two parts: the first dedicated to preprocessing information using the GMDH algorithm, and the second to processing information using ANNs. The preprocessing stage was divided in two parts. In the first, the GMDH algorithm was used to generate a better database estimate, called matrix z, which was used to train the ANNs. In the second, the GMDH was used to determine the best set of variables with which to train the ANNs, resulting in a best monitoring variable estimate. The methodology was developed and tested using five different models: one theoretical model and four models using different sets of reactor variables. After an exhaustive study dedicated to sensor monitoring, fault detection in sensors was developed by simulating faults in the sensor database using deviations of +5%, +10%, +15% and +20%. The good results obtained with the present methodology show the viability of using the GMDH algorithm to select the best input variables for the ANNs, thus making possible the use of these methods in the implementation of a new monitoring and fault detection methodology applied to sensors. (author)

  20. Hazard interactions and interaction networks (cascades) within multi-hazard methodologies

    Science.gov (United States)

    Gill, Joel C.; Malamud, Bruce D.

    2016-08-01

    This paper combines research and commentary to reinforce the importance of integrating hazard interactions and interaction networks (cascades) into multi-hazard methodologies. We present a synthesis of the differences between multi-layer single-hazard approaches and multi-hazard approaches that integrate such interactions. This synthesis suggests that ignoring interactions between important environmental and anthropogenic processes could distort management priorities, increase vulnerability to other spatially relevant hazards or underestimate disaster risk. In this paper we proceed to present an enhanced multi-hazard framework through the following steps: (i) description and definition of three groups (natural hazards, anthropogenic processes and technological hazards/disasters) as relevant components of a multi-hazard environment, (ii) outlining of three types of interaction relationship (triggering, increased probability, and catalysis/impedance), and (iii) assessment of the importance of networks of interactions (cascades) through case study examples (based on the literature, field observations and semi-structured interviews). We further propose two visualisation frameworks to represent these networks of interactions: hazard interaction matrices and hazard/process flow diagrams. Our approach reinforces the importance of integrating interactions between different aspects of the Earth system, together with human activity, into enhanced multi-hazard methodologies. Multi-hazard approaches support the holistic assessment of hazard potential and consequently disaster risk. We conclude by describing three ways by which understanding networks of interactions contributes to the theoretical and practical understanding of hazards, disaster risk reduction and Earth system management. 
Understanding interactions and interaction networks helps us to better (i) model the observed reality of disaster events, (ii) constrain potential changes in physical and social vulnerability
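    The hazard interaction matrix this record proposes can be sketched as a lookup from (primary hazard, secondary hazard) pairs to one of the three interaction types the paper defines. The specific entries below are illustrative examples, not the paper's matrix:

```python
# Interaction types from the paper: a primary hazard/process can trigger
# a secondary hazard, increase its probability, or catalyse/impede it.
TRIGGERS = "triggering"
INCREASES_PROB = "increased probability"
CATALYSES = "catalysis/impedance"

# Illustrative (not exhaustive) hazard interaction matrix:
# key = (primary hazard, secondary hazard), value = interaction type.
interactions = {
    ("earthquake", "landslide"): TRIGGERS,
    ("earthquake", "tsunami"): TRIGGERS,
    ("wildfire", "landslide"): INCREASES_PROB,   # burnt slopes destabilise
    ("drought", "wildfire"): INCREASES_PROB,
    ("urbanisation", "flood"): CATALYSES,        # anthropogenic process
}

def secondary_hazards(primary):
    """All hazards a given primary hazard can interact with, and how."""
    return {s: t for (p, s), t in interactions.items() if p == primary}

print(secondary_hazards("earthquake"))
# → {'landslide': 'triggering', 'tsunami': 'triggering'}
```

    Chaining such lookups (the secondary hazards of each secondary hazard) gives the interaction networks, or cascades, that the paper argues multi-hazard methodologies should capture.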

  1. SEMANTIC NETWORKS: THEORETICAL, TECHNICAL, METHODOLOGIC AND ANALYTICAL ASPECTS

    Directory of Open Access Journals (Sweden)

    José Ángel Vera Noriega

    2005-09-01

    Full Text Available This work is a review of the methodological procedures and precautions involved in the measurement of connotative meanings, to be used in the elaboration of instruments with ethnic validity. Starting from the techniques originally proposed by Figueroa et al. (1981) and later described by Lagunes (1993), the intention is to offer a didactic overview of how to carry out measurement by semantic networks, introducing some recommendations derived from studies performed with this method.

  2. Tourism Methodologies - New Perspectives, Practices and Procedures

    DEFF Research Database (Denmark)

    This volume offers methodological discussions within the multidisciplinary field of tourism and shows how tourism researchers develop and apply new tourism methodologies. The book is presented as an anthology, giving voice to many diverse researchers who reflect on tourism methodology in different ways, from codings and analysis to tapping into the global network of social media. ... in interview and field work situations, and how do we engage with the performative aspects of tourism as a field of study? The book acknowledges that research is also performance and that it constitutes an aspect of intervention in the situations and contexts it is trying to explore. This is an issue dealt...

  3. Consensus-based methodology for detection communities in multilayered networks

    Science.gov (United States)

    Karimi-Majd, Amir-Mohsen; Fathian, Mohammad; Makrehchi, Masoud

    2018-03-01

    Finding groups of network users who are densely related with each other has emerged as an interesting problem in the area of social network analysis. These groups or so-called communities would be hidden behind the behavior of users. Most studies assume that such behavior could be understood by focusing on user interfaces, their behavioral attributes or a combination of these network layers (i.e., interfaces with their attributes). They also assume that all network layers refer to the same behavior. However, in real-life networks, users' behavior in one layer may differ from their behavior in another one. In order to cope with these issues, this article proposes a consensus-based community detection approach (CBC). CBC finds communities among nodes at each layer, in parallel. Then, the results of layers should be aggregated using a consensus clustering method. This means that different behavior could be detected and used in the analysis. As for other significant advantages, the methodology would be able to handle missing values. Three experiments on real-life and computer-generated datasets have been conducted in order to evaluate the performance of CBC. The results indicate superiority and stability of CBC in comparison to other approaches.
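    The aggregation step in CBC (combine per-layer community labelings into consensus communities) can be illustrated with a simple co-association scheme: two nodes are placed together if they share a community in more than a threshold fraction of layers. This is a generic consensus-clustering sketch, not CBC's exact algorithm:

```python
from itertools import combinations

def consensus_communities(layer_labels, threshold=0.5):
    """Aggregate per-layer community labelings into consensus communities.

    layer_labels: list of dicts, one per network layer, mapping node -> community id.
    Nodes are grouped via connected components of the co-association graph,
    where an edge exists when a pair is co-clustered in more than
    `threshold` of the layers.
    """
    nodes = sorted(layer_labels[0])
    n_layers = len(layer_labels)
    # Co-association: fraction of layers in which each pair is co-clustered.
    adj = {u: set() for u in nodes}
    for u, v in combinations(nodes, 2):
        share = sum(1 for lab in layer_labels if lab[u] == lab[v]) / n_layers
        if share > threshold:
            adj[u].add(v)
            adj[v].add(u)
    # Extract connected components of the consensus graph.
    seen, communities = set(), []
    for start in nodes:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            x = stack.pop()
            if x not in comp:
                comp.add(x)
                stack.extend(adj[x] - comp)
        seen |= comp
        communities.append(sorted(comp))
    return communities

# Three layers agree that {a, b} and {c, d} form communities;
# one layer disagrees about d.
layers = [
    {'a': 0, 'b': 0, 'c': 1, 'd': 1},
    {'a': 0, 'b': 0, 'c': 1, 'd': 1},
    {'a': 0, 'b': 0, 'c': 1, 'd': 0},
]
print(consensus_communities(layers))  # → [['a', 'b'], ['c', 'd']]
```

    Because each layer is clustered independently before aggregation, a node missing from one layer could simply be excluded from that layer's votes, which is one way such a scheme tolerates missing values.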

  4. Event based uncertainty assessment in urban drainage modelling, applying the GLUE methodology

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Beven, K.J.; Jensen, Jacob Birk

    2008-01-01

    In the present paper an uncertainty analysis of an application of the commercial urban drainage model MOUSE is conducted. Applying the Generalized Likelihood Uncertainty Estimation (GLUE) methodology, the model is conditioned on observation time series from two flow gauges as well as the occurrence of combined sewer overflow. The GLUE methodology is used to test different conceptual setups in order to determine whether one model setup gives a better goodness of fit, conditional on the observations, than the other. Moreover, different methodological investigations of GLUE are conducted in order to test if the uncertainty analysis is unambiguous. It is shown that the GLUE methodology is very applicable in uncertainty analysis of this application of an urban drainage model, although it was shown to be quite difficult to get good fits of the whole time series.
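    The GLUE procedure itself is a Monte Carlo scheme: sample parameter sets from a prior range, score each against observations with a likelihood measure (here Nash–Sutcliffe efficiency), and keep the "behavioral" sets above a threshold. The toy linear model, prior range, and threshold below are illustrative stand-ins, not the MOUSE setup:

```python
import random

def nash_sutcliffe(obs, sim):
    """Nash–Sutcliffe efficiency: 1 is a perfect fit; <= 0 means no
    better than predicting the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - num / den

def toy_model(k, rain):
    """Trivial linear runoff stand-in for the drainage model."""
    return [k * r for r in rain]

rain = [0.0, 2.0, 5.0, 3.0, 1.0, 0.0]
observed = toy_model(0.7, rain)  # synthetic 'truth' with k = 0.7

random.seed(1)
behavioral = []
for _ in range(2000):
    k = random.uniform(0.1, 1.5)   # sample from the prior range
    eff = nash_sutcliffe(observed, toy_model(k, rain))
    if eff > 0.9:                  # behavioral threshold
        behavioral.append((k, eff))

ks = [k for k, _ in behavioral]
print(len(behavioral), min(ks), max(ks))
```

    The spread of the retained `k` values is the GLUE uncertainty estimate; in a real application the behavioral simulations would also be used to build prediction bounds on the flow series.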

  5. Covariance methodology applied to uncertainties in I-126 disintegration rate measurements

    International Nuclear Information System (INIS)

    Fonseca, K.A.; Koskinas, M.F.; Dias, M.S.

    1996-01-01

    The covariance methodology applied to uncertainties in ¹²⁶I disintegration rate measurements is described. Two different coincidence systems were used due to the complex decay scheme of this radionuclide. The parameters involved in the determination of the disintegration rate in each experimental system present correlated components. In this case, the conventional statistical methods to determine the uncertainties (law of propagation) result in wrong values for the final uncertainty. Therefore, use of the covariance matrix methodology is necessary. The data from both systems were combined taking into account all possible correlations between the partial uncertainties. (orig.)
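    The point about correlated components can be made concrete with generic covariance propagation: the variance of a function of correlated inputs is J Σ Jᵀ, where J is the Jacobian and Σ the input covariance matrix. The quantity, efficiencies, and covariances below are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Hypothetical activity determination N0 = A / (eps1 * eps2), where the
# two efficiencies share a correlated uncertainty component.
A, eps1, eps2 = 1.0e4, 0.80, 0.75
x = np.array([A, eps1, eps2])

# Input covariance matrix: diagonal variances plus an off-diagonal
# term expressing the correlation between eps1 and eps2.
cov = np.array([
    [25.0**2, 0.0,             0.0            ],
    [0.0,     0.01**2,         0.8*0.01*0.012 ],
    [0.0,     0.8*0.01*0.012,  0.012**2       ],
])

def n0(v):
    a, e1, e2 = v
    return a / (e1 * e2)

# Jacobian of n0 with respect to (A, eps1, eps2).
J = np.array([1.0/(eps1*eps2), -A/(eps1**2 * eps2), -A/(eps1 * eps2**2)])

var_corr = J @ cov @ J               # full covariance propagation
var_uncorr = J**2 @ np.diag(cov)     # naive law of propagation (no correlations)
print(np.sqrt(var_corr), np.sqrt(var_uncorr))
```

    Here the two partial derivatives have the same sign, so a positive covariance between the efficiencies enlarges the combined uncertainty; ignoring the off-diagonal term would understate it, which is exactly the failure mode the abstract warns about.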

  6. Carbohydrate metabolism teaching strategy for the Pharmacy course, applying active teaching methodology

    Directory of Open Access Journals (Sweden)

    Uderlei Donizete Silveira Covizzi

    2012-12-01

    Full Text Available The traditional teaching method has been widely questioned regarding its ability to develop the skills and abilities needed in training healthcare professionals. In the traditional methodology the teacher is the main transmitter of knowledge, while students assume the role of passive spectators. Some Brazilian institutions have broken with this model, structuring their curricula around student-centered learning. Some medical schools have adopted Problem Based Learning (PBL), a methodology that presents problem questions of the kind future physicians will encounter, for resolution in small tutorial groups. Our work proposes to apply an active teaching-learning methodology addressing carbohydrate metabolism in the biochemistry discipline for undergraduate pharmacy students. The academic content was presented through brief and objective talks, after which learners were split into tutorial groups for the resolution of contextualized problems. During the activities, the teacher guided the discussion toward elucidation of the issues. At the end of the module, learners evaluated the teaching methodology by means of a questionnaire, and the content covered was assessed by a conventional individual test. Analysis of the questionnaire indicates that students believe they participated actively in the teaching-learning process and were encouraged to discuss and understand the theme. The answers also highlight closer ties between students and tutor. According to the professor, there is greater student engagement with learning. It is concluded that an innovative methodology, in which primary responsibility for learning rests with the student, not only increases interest in learning but also facilitates learning through case discussion in groups. Contextualizing the issues narrows the gap between theory and practice.

  7. Applying differential dynamic logic to reconfigurable biological networks.

    Science.gov (United States)

    Figueiredo, Daniel; Martins, Manuel A; Chaves, Madalena

    2017-09-01

    Qualitative and quantitative modeling frameworks are widely used for analysis of biological regulatory networks, the former giving a preliminary overview of the system's global dynamics and the latter providing more detailed solutions. Another approach is to model biological regulatory networks as hybrid systems, i.e., systems which can display both continuous and discrete dynamic behaviors. Actually, the development of synthetic biology has shown that this is a suitable way to think about biological systems, which can often be constructed as networks with discrete controllers, and present hybrid behaviors. In this paper we discuss this approach as a special case of the reconfigurability paradigm, well studied in Computer Science (CS). In CS there are well developed computational tools to reason about hybrid systems. We argue that it is worth applying such tools in a biological context. One interesting tool is differential dynamic logic (dL), which has recently been developed by Platzer and applied to many case-studies. In this paper we discuss some simple examples of biological regulatory networks to illustrate how dL can be used as an alternative, or also as a complement to methods already used. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Proposing C4ISR Architecture Methodology for Homeland Security

    National Research Council Canada - National Science Library

    Farah-Stapleton, Monica F; Dimarogonas, James; Eaton, Rodney; Deason, Paul J

    2004-01-01

    This presentation presents how a network architecture methodology developed for the Army's Future Force could be applied to the requirements of Civil Support, Homeland Security/Homeland Defense (CS HLS/HLD...

  9. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm.

    Science.gov (United States)

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. Recurrent Neural Network is one of the most popular but simple approaches to model the network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, they underperformed for large-scale genetic networks. Here, a new methodology has been proposed where a hybrid Cuckoo Search-Flower Pollination Algorithm has been implemented with Recurrent Neural Network. Cuckoo Search is used to search the best combination of regulators. Moreover, Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method sacrifices computational time complexity in both cases due to the hybrid optimization process.
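    The recurrent-neural-network formalism commonly used for gene regulatory networks models each gene's expression as relaxing toward a sigmoid of a weighted sum of its regulators' expressions; the weights and biases are what the hybrid Cuckoo Search–Flower Pollination optimization would fit. A minimal sketch of the dynamics only, with illustrative weights (the optimization itself is not shown):

```python
import math

def rnn_step(x, W, b, tau=1.0, dt=0.1):
    """One discrete-time step of the RNN gene-network formalism:
    x_i(t+dt) = x_i + (dt/tau) * (sigmoid(sum_j W[i][j]*x_j + b[i]) - x_i)."""
    n = len(x)
    nxt = []
    for i in range(n):
        s = sum(W[i][j] * x[j] for j in range(n)) + b[i]
        act = 1.0 / (1.0 + math.exp(-s))
        nxt.append(x[i] + dt / tau * (act - x[i]))
    return nxt

# Two genes: gene 0 activates gene 1, gene 1 represses gene 0
# (weights are hypothetical, chosen only to illustrate the update).
W = [[0.0, -4.0],
     [4.0,  0.0]]
b = [1.0, -2.0]
x = [0.5, 0.5]
for _ in range(200):
    x = rnn_step(x, W, b)
print([round(v, 3) for v in x])
```

    Inference then amounts to searching for the W and b that reproduce observed time-series data, which is where a metaheuristic such as Cuckoo Search (regulator selection) combined with Flower Pollination (parameter tuning) comes in.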

  10. Benford's Law Applies to Online Social Networks.

    Science.gov (United States)

    Golbeck, Jennifer

    2015-01-01

    Benford's Law states that, in naturally occurring systems, the frequency of numbers' first digits is not evenly distributed. Numbers beginning with a 1 occur roughly 30% of the time, and are six times more common than numbers beginning with a 9. We show that Benford's Law applies to social and behavioral features of users in online social networks. Using social data from five major social networks (Facebook, Twitter, Google Plus, Pinterest, and LiveJournal), we show that the distribution of first significant digits of friend and follower counts for users in these systems follows Benford's Law. The same is true for the number of posts users make. We extend this to egocentric networks, showing that friend counts among the people in an individual's social network also follow the expected distribution. We discuss how this can be used to detect suspicious or fraudulent activity online and to validate datasets.
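    The check described here is easy to reproduce: compare the observed first-significant-digit frequencies of a sample against the Benford probabilities log10(1 + 1/d). The synthetic "follower counts" below are a stand-in for the paper's social-network data:

```python
import math
import random

def benford_expected():
    """Expected first-digit probabilities under Benford's Law."""
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit_distribution(values):
    """Observed frequency of first significant digits in a sample."""
    digits = [int(str(abs(v)).lstrip('0.')[0]) for v in values if v != 0]
    n = len(digits)
    return {d: digits.count(d) / n for d in range(1, 10)}

# Synthetic counts spanning several orders of magnitude: log-uniform
# data is a classic Benford-conforming distribution.
random.seed(42)
counts = [int(10 ** random.uniform(0, 6)) for _ in range(20000)]

obs = first_digit_distribution(counts)
exp = benford_expected()
print({d: round(obs[d], 3) for d in exp})
print({d: round(exp[d], 3) for d in exp})
```

    A dataset whose first-digit distribution deviates sharply from `exp` (for example, bot accounts with manufactured follower counts) is exactly the kind of anomaly the authors propose detecting.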

  11. Benford's Law Applies to Online Social Networks.

    Directory of Open Access Journals (Sweden)

    Jennifer Golbeck

    Full Text Available Benford's Law states that, in naturally occurring systems, the frequency of numbers' first digits is not evenly distributed. Numbers beginning with a 1 occur roughly 30% of the time, and are six times more common than numbers beginning with a 9. We show that Benford's Law applies to social and behavioral features of users in online social networks. Using social data from five major social networks (Facebook, Twitter, Google Plus, Pinterest, and LiveJournal), we show that the distribution of first significant digits of friend and follower counts for users in these systems follows Benford's Law. The same is true for the number of posts users make. We extend this to egocentric networks, showing that friend counts among the people in an individual's social network also follow the expected distribution. We discuss how this can be used to detect suspicious or fraudulent activity online and to validate datasets.

  12. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    Energy Technology Data Exchange (ETDEWEB)

    Tarifeño-Saldivia, Ariel, E-mail: atarifeno@cchen.cl, E-mail: atarisal@gmail.com; Pavez, Cristian; Soto, Leopoldo [Comisión Chilena de Energía Nuclear, Casilla 188-D, Santiago (Chile); Center for Research and Applications in Plasma Physics and Pulsed Power, P4, Santiago (Chile); Departamento de Ciencias Fisicas, Facultad de Ciencias Exactas, Universidad Andres Bello, Republica 220, Santiago (Chile); Mayer, Roberto E. [Instituto Balseiro and Centro Atómico Bariloche, Comisión Nacional de Energía Atómica and Universidad Nacional de Cuyo, San Carlos de Bariloche R8402AGP (Argentina)

    2014-01-15

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.
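    The core idea of the methodology (calibrate the per-event charge in pulse mode, then estimate the number of detection events in a burst from the total accumulated charge) can be sketched with a simple compound-Poisson approximation. This is an illustration of the principle, not the paper's full statistical model, and the calibration numbers are hypothetical:

```python
import math

def estimate_events(total_charge, mean_q, std_q):
    """Estimate the number of detection events N from the accumulated
    charge Q, given per-event charge statistics from pulse-mode
    calibration: N ≈ Q / <q>. The relative uncertainty combines
    counting statistics with the per-event charge spread
    (compound-Poisson approximation)."""
    n = total_charge / mean_q
    rel = math.sqrt((1 + (std_q / mean_q) ** 2) / n)
    return n, n * rel

# Hypothetical calibration: 2.0 pC mean charge per event, 30% spread;
# 2.0e4 pC of charge accumulated during the burst.
n, sigma_n = estimate_events(total_charge=2.0e4, mean_q=2.0, std_q=0.6)
print(n, sigma_n)
```

    Note how the spread of the single-event charge distribution, measured during pulse-mode calibration, directly widens the uncertainty on the inferred yield; this is why the calibration step is central to the method.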

  13. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    International Nuclear Information System (INIS)

    Tarifeño-Saldivia, Ariel; Pavez, Cristian; Soto, Leopoldo; Mayer, Roberto E.

    2014-01-01

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods

  14. A study on methodologies for assessing safety critical network's risk impact on Nuclear Power Plant

    International Nuclear Information System (INIS)

    Lim, T. J.; Lee, H. J.; Park, S. K.; Seo, S. J.

    2006-08-01

    The objective of this project is to investigate and study existing reliability analysis techniques for communication networks, in order to develop reliability analysis models for Nuclear Power Plants' safety-critical networks. It is necessary to make a comprehensive survey of current methodologies for communication network reliability. Major outputs of the first-year study are design characteristics of safety-critical communication networks, efficient algorithms for quantifying the reliability of communication networks, and preliminary models for assessing the reliability of safety-critical communication networks

  15. Network evolution driven by dynamics applied to graph coloring

    International Nuclear Information System (INIS)

    Wu Jian-She; Li Li-Guang; Yu Xin; Jiao Li-Cheng; Wang Xiao-Hua

    2013-01-01

    An evolutionary network driven by dynamics is studied and applied to the graph coloring problem. From an initial structure, both the topology and the coupling weights evolve according to the dynamics. On the other hand, the dynamics of the network are determined by the topology and the coupling weights, so an interesting structure-dynamics co-evolutionary scheme appears. By providing two evolutionary strategies, a network described by the complement of a graph will evolve into several clusters of nodes according to their dynamics. The nodes in each cluster can be assigned the same color and nodes in different clusters assigned different colors. In this way, a co-evolution phenomenon is applied to the graph coloring problem. The proposed scheme is tested on several benchmark graphs for graph coloring

  16. Applied network security monitoring collection, detection, and analysis

    CERN Document Server

    Sanders, Chris

    2013-01-01

    Applied Network Security Monitoring is the essential guide to becoming an NSM analyst from the ground up. This book takes a fundamental approach to NSM, complete with dozens of real-world examples that teach you the key concepts of NSM. Network security monitoring is based on the principle that prevention eventually fails. In the current threat landscape, no matter how much you try, motivated attackers will eventually find their way into your network. At that point, it is your ability to detect and respond to that intrusion that can be the difference between a small incident and a major di

  17. EnergiTools. A methodology for performance monitoring and diagnosis

    International Nuclear Information System (INIS)

    Ancion, P.; Bastien, R.; Ringdahl, K.

    2000-01-01

    EnergiTools is a performance monitoring and diagnostic tool that combines the power of on-line process data acquisition with advanced diagnosis methodologies. Analytical models based on thermodynamic principles are combined with neural networks to validate sensor data and to estimate missing or faulty measurements. Advanced diagnostic technologies are then applied to point out potential faults and areas to be investigated further. The diagnosis methodologies are based on Bayesian belief networks. Expert knowledge is captured in the form of fault-symptom relationships and includes historical information such as the likelihood of faults and symptoms. The methodology produces the likelihood of component-failure root causes using the expert knowledge base. EnergiTools is used at the Ringhals nuclear power plants. It has led to the diagnosis of various performance issues. Three case studies based on this plant's data and models are presented to illustrate the diagnosis support methodologies implemented in EnergiTools. (author)

  18. Analytical Chemistry as Methodology in Modern Pure and Applied Chemistry

    OpenAIRE

    Honjo, Takaharu

    2001-01-01

    Analytical chemistry is an indispensable methodology in pure and applied chemistry, often compared to the foundation stone of architecture. On the home page of jsac, it is said that analytical chemistry is a basic science concerned with developing methods to obtain useful chemical information about materials by means of detection, separation, and characterization. Analytical chemistry has recently developed into the analytical sciences, which treat not only analysis ...

  19. Methodological Approaches to Locating Outlets of the Franchise Retail Network

    Directory of Open Access Journals (Sweden)

    Grygorenko Tetyana M.

    2016-08-01

    Full Text Available Methodical approaches to selecting strategic areas for the future location of franchise retail network outlets are presented. The main stages in the assessment of strategic areas for the future location of franchise retail network outlets have been determined, and evaluation criteria suggested. Since such selection requires consideration of a variety of indicators and directions of assessment, the author proposes an evaluation scale that allows generalizing and organizing the research data and the calculations of the previous stages of the analysis. The most important criteria for, and the sequence of, selecting potential franchisees for the franchise retail network have been identified, and a technique for their evaluation proposed. The use of the suggested methodological approaches will allow the franchiser to make sound decisions on the selection of potential target markets, minimizing the time and effort spent on selecting franchisees and hence optimizing the development of the franchise retail network, which will contribute to the formation of its structure.

  20. METHODOLOGY OF RESEARCH AND DEVELOPMENT MANAGEMENT OF REGIONAL NETWORK ECONOMY

    Directory of Open Access Journals (Sweden)

    O.I. Botkin

    2007-06-01

Full Text Available Information and communication Internet technologies, now adopted by managing subjects in practically all branches of the Russian regional economy, exert a huge influence on the development of economic relations in regional business: new forms of interaction between managing subjects emerge, and the information and organizational structures of regional business management change. The integrated image of these innovations is the regional network economy: an interactive environment in which social, economic and commodity-monetary relations between the managing subjects of a region are performed at high speed and with minimal transaction costs (in R. H. Coase's sense), using the interactive opportunities of the global Internet. The relevance of researching the regional network economy phenomenon is caused, first of all, by the need to substantiate a methodology for its development and to develop mechanisms for managing its infrastructure, with the purpose of increasing regional business efficiency. In our opinion, solving these problems will be the defining factor in maintaining effective economic development and in the growth of the Russian regions' economies in the near future.

  1. Applied Knowledge Management to Mitigate Cognitive Load in Network-Enabled Mission Command

    Science.gov (United States)

    2017-11-22

ARL-TN-0859, November 2017, US Army Research Laboratory. Technical note by John K Hawley, Human Research and Engineering Directorate, covering the period 1 May 2016 to 20 April 2017: Applied Knowledge Management to Mitigate Cognitive Load in Network-Enabled Mission Command.

  2. A Data Preparation Methodology in Data Mining Applied to Mortality Population Databases.

    Science.gov (United States)

    Pérez, Joaquín; Iturbide, Emmanuel; Olivares, Víctor; Hidalgo, Miguel; Martínez, Alicia; Almanza, Nelva

    2015-11-01

It is known that the data preparation phase is the most time-consuming in the data mining process, taking from 50% up to 70% of the total project time. Currently, data mining methodologies are general-purpose, and one of their limitations is that they do not provide guidance about which particular tasks to perform in a specific domain. This paper presents a new data preparation methodology oriented to the epidemiological domain, in which we have identified two sets of tasks: General Data Preparation and Specific Data Preparation. For both sets, the Cross-Industry Standard Process for Data Mining (CRISP-DM) is adopted as a guideline. The main contribution of our methodology is fourteen specialized tasks concerning this domain. To validate the proposed methodology, we developed a data mining system and the entire process was applied to real mortality databases. The results were encouraging: the use of the methodology reduced some of the time-consuming tasks, and the data mining system revealed unknown and potentially useful patterns for the public health services in Mexico.
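A specific data preparation task of the kind the paper describes can be sketched as record validation plus attribute derivation. The field names and rules below are hypothetical, not taken from the paper's fourteen specialized tasks.

```python
# Illustrative sketch of a domain-specific data preparation step for
# mortality records: mandatory-field checks, age normalisation, and
# derivation of a grouping attribute. Field names are hypothetical.

RAW = [
    {"age": "45",  "sex": "M", "icd_cause": "I21"},
    {"age": "",    "sex": "F", "icd_cause": "C50"},   # missing age: discard
    {"age": "070", "sex": "F", "icd_cause": "J18"},   # zero-padded age
]

def prepare(records):
    clean = []
    for r in records:
        if not r["age"].strip():                 # mandatory-field check
            continue
        out = dict(r)
        out["age"] = int(r["age"])               # normalise zero-padded ages
        out["icd_chapter"] = r["icd_cause"][0]   # derive ICD grouping letter
        clean.append(out)
    return clean

prepared = prepare(RAW)
```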

  3. A Security Assessment Mechanism for Software-Defined Networking-Based Mobile Networks

    Directory of Open Access Journals (Sweden)

    Shibo Luo

    2015-12-01

Full Text Available Software-Defined Networking-based Mobile Networks (SDN-MNs) are considered the future of 5G mobile network architecture. With the evolving cyber-attack threat, security assessments need to be performed in network management. Due to the distinctive features of SDN-MNs, such as their dynamic nature and complexity, traditional network security assessment methodologies cannot be applied directly to SDN-MNs, and a novel security assessment methodology is needed. In this paper, an effective security assessment mechanism based on attack graphs and an Analytic Hierarchy Process (AHP) is proposed for SDN-MNs. Firstly, this paper discusses the security assessment problem of SDN-MNs and proposes a methodology using attack graphs and AHP. Secondly, to address the diversity and complexity of SDN-MNs, a novel attack graph definition and attack graph generation algorithm are proposed. In order to quantify security levels, the Node Minimal Effort (NME) is defined to quantify attack cost and derive system security levels based on NME. Thirdly, to calculate the NME of an attack graph that takes the dynamic factors of SDN-MN into consideration, we use AHP integrated with the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) as the methodology. Finally, we offer a case study to validate the proposed methodology. The case study and evaluation show the advantages of the proposed security assessment mechanism.

  4. A Security Assessment Mechanism for Software-Defined Networking-Based Mobile Networks.

    Science.gov (United States)

    Luo, Shibo; Dong, Mianxiong; Ota, Kaoru; Wu, Jun; Li, Jianhua

    2015-12-17

    Software-Defined Networking-based Mobile Networks (SDN-MNs) are considered the future of 5G mobile network architecture. With the evolving cyber-attack threat, security assessments need to be performed in the network management. Due to the distinctive features of SDN-MNs, such as their dynamic nature and complexity, traditional network security assessment methodologies cannot be applied directly to SDN-MNs, and a novel security assessment methodology is needed. In this paper, an effective security assessment mechanism based on attack graphs and an Analytic Hierarchy Process (AHP) is proposed for SDN-MNs. Firstly, this paper discusses the security assessment problem of SDN-MNs and proposes a methodology using attack graphs and AHP. Secondly, to address the diversity and complexity of SDN-MNs, a novel attack graph definition and attack graph generation algorithm are proposed. In order to quantify security levels, the Node Minimal Effort (NME) is defined to quantify attack cost and derive system security levels based on NME. Thirdly, to calculate the NME of an attack graph that takes the dynamic factors of SDN-MN into consideration, we use AHP integrated with the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) as the methodology. Finally, we offer a case study to validate the proposed methodology. The case study and evaluation show the advantages of the proposed security assessment mechanism.
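The TOPSIS ranking step used above to weigh attack-cost factors can be sketched in a few lines. The alternatives (candidate attack paths) and criteria below are invented for illustration; the paper combines this with AHP-derived weights and the NME metric.

```python
import math

# Hedged sketch of TOPSIS: rank alternatives by closeness to the ideal
# solution. Criteria values and weights are illustrative placeholders.

def topsis(matrix, weights, benefit):
    """Rank alternatives; benefit[j] is True when larger values are better."""
    n_crit = len(weights)
    # Vector-normalise each criterion column, then apply weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[row[j] / norms[j] * weights[j] for j in range(n_crit)] for row in matrix]
    # Ideal (best) and nadir (worst) points per criterion.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    nadir = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, nadir)
        scores.append(d_neg / (d_pos + d_neg))   # closeness coefficient in [0, 1]
    return scores

# Three candidate attack paths scored on (exploit difficulty, privileges
# needed); both are cost criteria, so smaller is better for the defender's
# attacker model, and the highest closeness marks the cheapest path.
scores = topsis([[7.0, 3.0], [2.0, 2.0], [5.0, 8.0]],
                weights=[0.6, 0.4], benefit=[False, False])
easiest = scores.index(max(scores))
```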

  5. Applying the Tropos Methodology for Analysing Web Services Requirements and Reasoning about Qualities of Services

    NARCIS (Netherlands)

    Aiello, Marco; Giorgini, Paolo

    2004-01-01

    The shift in software engineering from the design, implementation and management of isolated software elements towards a network of autonomous interoperable service is calling for a shift in the way software is designed. We propose the use of the agent-oriented methodology Tropos for the analysis of

  6. Applying GRADE-CERQual to qualitative evidence synthesis findings-paper 3: how to assess methodological limitations.

    Science.gov (United States)

    Munthe-Kaas, Heather; Bohren, Meghan A; Glenton, Claire; Lewin, Simon; Noyes, Jane; Tunçalp, Özge; Booth, Andrew; Garside, Ruth; Colvin, Christopher J; Wainwright, Megan; Rashidian, Arash; Flottorp, Signe; Carlsen, Benedicte

    2018-01-25

The GRADE-CERQual (Confidence in Evidence from Reviews of Qualitative research) approach has been developed by the GRADE (Grading of Recommendations Assessment, Development and Evaluation) Working Group. The approach was developed to support the use of findings from qualitative evidence syntheses in decision-making, including guideline development and policy formulation. CERQual includes four components for assessing how much confidence to place in findings from reviews of qualitative research (also referred to as qualitative evidence syntheses): (1) methodological limitations, (2) coherence, (3) adequacy of data and (4) relevance. This paper is part of a series providing guidance on how to apply CERQual and focuses on CERQual's methodological limitations component. We developed the methodological limitations component by searching the literature for definitions, gathering feedback from relevant research communities and developing consensus through project group meetings. We tested the CERQual methodological limitations component within several qualitative evidence syntheses before agreeing on the current definition and principles for application. When applying CERQual, we define methodological limitations as the extent to which there are concerns about the design or conduct of the primary studies that contributed evidence to an individual review finding. In this paper, we describe the methodological limitations component and its rationale and offer guidance on how to assess methodological limitations of a review finding as part of the CERQual approach. This guidance outlines the information required to assess the methodological limitations component, the steps that need to be taken to assess methodological limitations of data contributing to a review finding, and examples of methodological limitation assessments. This paper provides guidance for review authors and others on undertaking an assessment of methodological limitations in the context of the CERQual

  7. Methodology applied in Cuba for siting, designing, and building a radioactive waste repository under safety conditions

    International Nuclear Information System (INIS)

    Orbera, L.; Peralta, J.L.; Franklin, R.; Gil, R.; Chales, G.; Rodriguez, A.

    1993-01-01

The work presents the methodology used in Cuba for siting, designing, and building a radioactive waste repository safely. This methodology covers technical and socio-economic factors, as well as design and construction factors, so as to achieve a safe site for this kind of repository under Cuba's special conditions. Applying this methodology results in a safe repository.

  8. Covariance methodology applied to 35S disintegration rate measurements by the CIEMAT/NIST method

    International Nuclear Information System (INIS)

    Koskinas, M.F.; Nascimento, T.S.; Yamazaki, I.M.; Dias, M.S.

    2014-01-01

The Nuclear Metrology Laboratory (LMN) at IPEN is carrying out measurements in a LSC (Liquid Scintillation Counting) system, applying the CIEMAT/NIST method. In this context 35S is an important radionuclide for medical applications, and it is difficult to standardize by other primary methods due to its low beta-ray energy. CIEMAT/NIST is a standard technique used by most metrology laboratories to improve accuracy and speed up beta emitter standardization. The focus of the present work was to apply the covariance methodology for determining the overall uncertainty in the 35S disintegration rate. All partial uncertainties involved in the measurements were considered, taking into account all possible correlations between each pair of them. - Highlights: ► 35S disintegration rate measured in a liquid scintillation system using the CIEMAT/NIST method. ► Covariance methodology applied to the overall uncertainty in the 35S disintegration rate. ► Monte Carlo simulation was applied to determine 35S activity in the 4πβ(PC)-γ coincidence system
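The covariance methodology's central step, combining correlated partial uncertainties into one overall variance, can be sketched directly. The sensitivity coefficients and covariance matrix below are illustrative numbers, not the 35S measurement data.

```python
# Sketch of combining partial uncertainties of correlated input quantities
# into the overall variance of a derived quantity:
#   sigma^2 = sum_i sum_j c_i c_j cov(x_i, x_j)
# All numbers are illustrative placeholders.

def combined_variance(sensitivities, cov):
    """Propagate a covariance matrix through sensitivity coefficients."""
    n = len(sensitivities)
    return sum(sensitivities[i] * sensitivities[j] * cov[i][j]
               for i in range(n) for j in range(n))

c = [1.0, -0.5]          # sensitivity coefficients dN/dx_i (hypothetical)
cov = [[0.04, 0.01],     # variances on the diagonal,
       [0.01, 0.09]]     # the covariance of the correlated pair off it
var = combined_variance(c, cov)
```

Note that ignoring the off-diagonal terms here would give 0.0625 instead of 0.0525; the whole point of the methodology is that correlations change the overall uncertainty.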

  9. From LCAs to simplified models: a generic methodology applied to wind power electricity.

    Science.gov (United States)

    Padey, Pierryves; Girard, Robin; le Boulch, Denis; Blanc, Isabelle

    2013-02-05

This study presents a generic methodology for producing simplified models able to provide a comprehensive life cycle impact assessment of energy pathways. The methodology relies on the application of global sensitivity analysis to identify key parameters explaining the impact variability of systems over their life cycle. Simplified models are built upon the identification of such key parameters. The methodology is applied to one energy pathway: onshore wind turbines of medium size, considering a large sample of possible configurations representative of European conditions. Among several technological, geographical, and methodological parameters, we identified the turbine load factor and the wind turbine lifetime as the most influential parameters. Greenhouse Gas (GHG) performances have been plotted as a function of these identified key parameters. Using these curves, the GHG performance of a specific wind turbine can be estimated, avoiding the need to undertake an extensive Life Cycle Assessment (LCA). This methodology should be useful for decision makers, providing them with a robust but simple support tool for assessing the environmental performance of energy systems.
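A simplified model in this spirit can be sketched as GHG intensity expressed as a function of the two key parameters identified, load factor and lifetime. The fixed life-cycle burden and turbine rating below are assumed placeholders, not the paper's fitted values.

```python
# Minimal sketch: GHG performance of an onshore turbine as a function of
# load factor and lifetime. The rated power and total life-cycle burden
# are hypothetical, for illustration only.

def ghg_g_per_kwh(load_factor, lifetime_years,
                  rated_kw=2000.0, lifecycle_t_co2eq=1500.0):
    """Life-cycle emissions divided by lifetime electricity production."""
    kwh = rated_kw * 8760.0 * load_factor * lifetime_years
    return lifecycle_t_co2eq * 1e6 / kwh   # tonnes -> grams

low = ghg_g_per_kwh(load_factor=0.30, lifetime_years=25)   # favourable site
high = ghg_g_per_kwh(load_factor=0.20, lifetime_years=15)  # unfavourable site
```

The shape matches the paper's finding: impacts fall hyperbolically as load factor and lifetime rise, since the fixed manufacturing burden is spread over more kWh.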

  10. A Methodology for a Sustainable CO2 Capture and Utilization Network

    DEFF Research Database (Denmark)

    Frauzem, Rebecca; Fjellerup, Kasper; Gani, Rafiqul

    2015-01-01

hydrogenation highlights the application. This case study illustrates the utility of the utilization network and elements of the methodology being developed. In addition, the conversion process is linked with carbon capture to evaluate the overall sustainability. Finally, the production of the other raw...

  11. Retail optimization in Romanian metallurgical industry by applying of fuzzy networks concept

    Directory of Open Access Journals (Sweden)

    Ioana Adrian

    2017-01-01

Full Text Available Our article presents possibilities for applying the Fuzzy Networks concept to make the metallurgical industry in Romania more efficient. We also present and analyze concepts complementary to Fuzzy Networks, such as Expert Systems (ES), Enterprise Resource Planning (ERP), and Analytics and Intelligent Strategies (SAI). The main results of our article are based on a case study of the possibilities of applying these concepts to the Romanian metallurgical industry through Fuzzy Networks.

  12. Applying Trusted Network Technology To Process Control Systems

    Science.gov (United States)

    Okhravi, Hamed; Nicol, David

    Interconnections between process control networks and enterprise networks expose instrumentation and control systems and the critical infrastructure components they operate to a variety of cyber attacks. Several architectural standards and security best practices have been proposed for industrial control systems. However, they are based on older architectures and do not leverage the latest hardware and software technologies. This paper describes new technologies that can be applied to the design of next generation security architectures for industrial control systems. The technologies are discussed along with their security benefits and design trade-offs.

  13. Voltage regulation in MV networks with dispersed generations by a neural-based multiobjective methodology

    Energy Technology Data Exchange (ETDEWEB)

    Galdi, Vincenzo [Dipartimento di Ingegneria dell' Informazione e Ingegneria Elettrica, Universita degli studi di Salerno, Via Ponte Don Melillo 1, 84084 Fisciano (Italy); Vaccaro, Alfredo; Villacci, Domenico [Dipartimento di Ingegneria, Universita degli Studi del Sannio, Piazza Roma 21, 82100 Benevento (Italy)

    2008-05-15

This paper puts forward the role of learning techniques in addressing the problem of efficient and optimal centralized voltage control in distribution networks equipped with dispersed generation systems (DGSs). The proposed methodology employs a radial basis function network (RBFN) to identify the multidimensional nonlinear mapping between a vector of observable variables describing the network operating point and the optimal set points of the voltage regulating devices. The RBFN is trained on numerical data generated by solving the voltage regulation problem for a set of network operating points with a rigorous multiobjective solution methodology. The RBFN performance is continuously monitored by a supervisor process that signals the need for a more accurate solution of the voltage regulation problem when nonoptimal network operating conditions (ex post monitoring) or excessive distances between the actual network state and the neurons' centres (ex ante monitoring) are detected. A more rigorous problem solution, if required, can be obtained by solving the voltage regulation problem with a conventional multiobjective optimization technique. This new solution, in conjunction with the corresponding input vector, is then adopted as a new training data sample to adapt the RBFN. This online training process allows the RBFN to (i) adaptively learn the most representative regions of the input/output mapping domain without needing prior knowledge of a complete and representative training set, and (ii) manage effectively any time-varying phenomena affecting this mapping. The results obtained by simulating the regulation policy in the case of a medium-voltage network are very promising. (author)
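The RBFN-plus-supervisor scheme described above can be sketched compactly: an RBF network maps an operating point to a set point, and an "ex ante" check flags states too far from every neuron centre for the answer to be trusted. Centres, widths, output weights, and the trust radius are all invented for illustration.

```python
import math

# Compact sketch of a normalised RBF network with ex-ante monitoring.
# All parameters below are hypothetical placeholders.

CENTRES = [[0.0, 0.0], [1.0, 1.0]]   # neuron centres in operating-point space
WIDTH = 0.8                          # common Gaussian width
OUT_W = [0.95, 1.05]                 # one output: a voltage set point (p.u.)
TRUST_RADIUS = 1.5                   # max acceptable distance to nearest centre

def rbf_outputs(x):
    return [math.exp(-math.dist(x, c) ** 2 / (2 * WIDTH ** 2)) for c in CENTRES]

def set_point(x):
    phi = rbf_outputs(x)
    y = sum(w * p for w, p in zip(OUT_W, phi)) / sum(phi)   # normalised RBF
    # Ex-ante monitoring: far from every centre -> re-solve rigorously.
    trusted = min(math.dist(x, c) for c in CENTRES) <= TRUST_RADIUS
    return y, trusted

y_in, ok_in = set_point([0.1, 0.1])     # near a centre: trusted
y_out, ok_out = set_point([5.0, 5.0])   # far away: flag for rigorous solution
```

When `trusted` is false, the scheme in the paper solves the multiobjective problem exactly and adds the (input, solution) pair as a new training sample.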

  14. Framework for applying RI-ISI methodology for Indian PHWRs

    International Nuclear Information System (INIS)

    Vinod, Gopika; Saraf, R.K.; Ghosh, A.K.; Kushwaha, H.S.

    2006-01-01

Risk Informed In-Service Inspection (RI-ISI) aims at categorizing components for in-service inspection based on their contribution to risk. To define a component's contribution to risk, its failure probability and the subsequent effect on Core Damage Frequency (CDF) need to be evaluated using Probabilistic Safety Assessment methodology. During the last several years, both the U.S. Nuclear Regulatory Commission (NRC) and the nuclear industry have recognized that Probabilistic Safety Assessment (PSA) has evolved to be more useful in supplementing traditional engineering approaches in reactor regulation. The paper highlights the various stages involved in applying RI-ISI and then compares the findings with existing ISI practices. (author)

  15. Instrumentation for Scientific Computing in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics.

    Science.gov (United States)

    1987-10-01

Instrumentation grant to purchase equipment for support of research in neural networks, information science, artificial intelligence, and applied mathematics. Contract AFOSR 86-0282. Principal Investigator: Stephen

  16. METHODOLOGY FOR FORMING MUTUALLY BENEFICIAL NETWORK INTERACTION BETWEEN SMALL CITIES AND DISTRICT CENTRES

    Directory of Open Access Journals (Sweden)

    Nikolay A. Ivanov

    2017-01-01

Full Text Available Abstract. Objectives The aim of the study is to develop a methodology for networking between small towns and regional centres on the basis of developing areas of mutual benefit. It is important to assess the possibility of cooperation between small towns, regional centres and local self-government bodies on the example of individual territorial entities of Russia in the context of the formation and strengthening of networks and support for territorial development. Methods Systemic and functional methodical approaches were taken. The modelling of socio-economic processes provides a visual representation of the direction of positive changes for small towns and regional centres of selected Subjects of the Russian Federation. Results Specific examples of cooperation between small towns and district centres are revealed in several areas, including education, trade and public catering, and tourist and recreational activities. The supporting role of subsystems, including management, regulatory activity, transport and logistics, is described. Schemes by which mutually beneficial network interaction is formed are characterised in terms of the specific advantages accruing to each network subject. The economic benefits of realising interaction between small cities and regional centres are discussed. The methodology is based on assessing the access of cities to commutation, on which basis contemporary regional and city networks are formed. Conclusion On the basis of the conducted study, a list of areas for mutually beneficial networking between small towns and district centres has been identified, allowing appropriate changes in regional economic policies to be effected in terms of programmes aimed at the development of regions and small towns, including those suffering from economic depression.

  17. Flow regime identification methodology with MCNP-X code and artificial neural network

    International Nuclear Information System (INIS)

    Salgado, Cesar M.; Instituto de Engenharia Nuclear; Schirru, Roberto; Brandao, Luis E.B.; Pereira, Claudio M.N.A.

    2009-01-01

This paper presents a flow regime identification methodology for multiphase systems covering the annular, stratified and homogeneous oil-water-gas regimes. The principle is based on recognition of the pulse height distributions (PHD) from gamma rays with supervised artificial neural network (ANN) systems. The simulated detection geometry comprises two NaI(Tl) detectors and a dual-energy gamma-ray source. The measurement of scattered radiation enables the dual modality densitometry (DMD) measurement principle to be explored. Its basic principle is to combine the measurement of scattered and transmitted radiation in order to acquire information about the different flow regimes. The PHDs obtained by the detectors were used as input to the ANN. The data sets required for training and testing the ANN were generated by the MCNP-X code from static and ideal theoretical models of multiphase systems. The ANN correctly identified the three different flow regimes for all data sets evaluated. The results presented show that PHDs examined by an ANN may be successfully applied to flow regime identification. (author)
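The recognition step, mapping a pulse-height distribution to a flow regime, can be sketched with a nearest-centroid rule standing in for the trained ANN. The reference PHDs below are synthetic four-bin histograms, not MCNP-X output.

```python
# Sketch of classifying a pulse-height distribution (PHD) into a flow
# regime. A nearest-centroid rule is a simplified stand-in for the
# supervised ANN; reference PHDs are invented for illustration.

REFERENCE = {                       # normalised 4-bin reference PHDs
    "annular":     [0.10, 0.20, 0.40, 0.30],
    "stratified":  [0.40, 0.30, 0.20, 0.10],
    "homogeneous": [0.25, 0.25, 0.25, 0.25],
}

def classify(phd):
    """Return the regime whose reference PHD is closest in squared distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REFERENCE, key=lambda regime: dist2(phd, REFERENCE[regime]))

regime = classify([0.38, 0.31, 0.19, 0.12])   # a measured-looking PHD
```

An ANN earns its keep over this rule when the PHD-to-regime mapping is nonlinear and the histograms have many channels; the classification interface is the same.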

  18. Analysing the Correlation between Social Network Analysis Measures and Performance of Students in Social Network-Based Engineering Education

    Science.gov (United States)

    Putnik, Goran; Costa, Eric; Alves, Cátia; Castro, Hélio; Varela, Leonilde; Shah, Vaibhav

    2016-01-01

    Social network-based engineering education (SNEE) is designed and implemented as a model of Education 3.0 paradigm. SNEE represents a new learning methodology, which is based on the concept of social networks and represents an extended model of project-led education. The concept of social networks was applied in the real-life experiment,…

  19. Value-Creating Networks: Organizational Issues and Challenges

    Science.gov (United States)

    Allee, Verna

    2009-01-01

    Purpose: The purpose of this paper is to provide examples of evaluating value-creating networks and to address the organizational issues and challenges of a network orientation. Design/methodology/approach: Value network analysis was first developed in 1993 and was adapted in 1997 for intangible asset management. It has been applied from shopfloor…

  20. A Novel Water Supply Network Sectorization Methodology Based on a Complete Economic Analysis, Including Uncertainties

    Directory of Open Access Journals (Sweden)

    Enrique Campbell

    2016-04-01

Full Text Available The core idea behind sectorization of Water Supply Networks (WSNs) is to establish areas partially isolated from the rest of the network to improve operational control. Besides the benefits associated with sectorization, some drawbacks must be taken into consideration by water operators: the economic investment associated with both boundary valves and flowmeters and the reduction of both pressure and system resilience. The target of sectorization is to properly balance these negative and positive aspects. Sectorization methodologies addressing the economic aspects mainly consider the costs of valves, flowmeters and energy, and the benefits in terms of water savings linked to pressure reduction. However, sectorization entails other benefits, such as the reduction of domestic consumption, the reduction of burst frequency and the enhanced capacity to detect and intervene over future leakage events. We implement a development proposed by the International Water Association (IWA) to estimate the aforementioned benefits. This development is integrated in a novel sectorization methodology based on a social network community detection algorithm, combined with a genetic algorithm optimization method and Monte Carlo simulation. The methodology is implemented over a fraction of the WSN of Managua city, capital of Nicaragua, generating a net benefit of 25,572 $/year.
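The Monte Carlo side of the economic analysis can be sketched as sampling uncertain annual benefits against a fixed sectorization cost. All cost and benefit figures below are invented; the paper's actual benefit model follows the IWA development.

```python
import random

# Monte Carlo sketch of the net-benefit balance behind sectorization:
# uncertain annual benefits versus valve/flowmeter investment.
# All monetary figures are hypothetical placeholders.

random.seed(42)

def simulate_net_benefit(n=10_000, annual_cost=12_000.0):
    samples = []
    for _ in range(n):
        water_savings = random.gauss(20_000, 4_000)    # $/year, uncertain
        burst_reduction = random.gauss(8_000, 2_000)   # $/year, uncertain
        samples.append(water_savings + burst_reduction - annual_cost)
    mean = sum(samples) / n
    p_loss = sum(s < 0 for s in samples) / n   # probability of a net loss
    return mean, p_loss

mean, p_loss = simulate_net_benefit()
```

In the full methodology this evaluation sits inside the genetic algorithm's fitness function, so each candidate sector layout is judged on its expected net benefit and downside risk.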

  1. Fiber-Optic Temperature and Pressure Sensors Applied to Radiofrequency Thermal Ablation in Liver Phantom: Methodology and Experimental Measurements

    Directory of Open Access Journals (Sweden)

    Daniele Tosi

    2015-01-01

Full Text Available Radiofrequency thermal ablation (RFA) is a procedure aimed at interventional cancer care and is applied to the treatment of small- and midsize tumors in lung, kidney, liver, and other tissues. RFA generates a selective high-temperature field in the tissue; temperature values and their persistency are directly related to the mortality rate of tumor cells. Temperature measurement in up to 3–5 points, using electrical thermocouples, belongs to the present clinical practice of RFA and is the foundation of a physical model of the ablation process. Fiber-optic sensors allow extending the detection of biophysical parameters to a vast plurality of sensing points, using miniature and noninvasive technologies that do not alter the RFA pattern. This work addresses the methodology for optical measurement of temperature distribution and pressure using four different fiber-optic technologies: fiber Bragg gratings (FBGs), linearly chirped FBGs (LCFBGs), a Rayleigh scattering-based distributed temperature system (DTS), and extrinsic Fabry-Perot interferometry (EFPI). For each instrument, the methodology for ex vivo sensing, as well as experimental results, is reported, leading to the application of fiber-optic technologies in vivo. The possibility of using a fiber-optic sensor network, in conjunction with a suitable ablation device, can enable smart ablation procedures in which ablation parameters are dynamically adjusted.
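The basic FBG readout step, converting a Bragg-wavelength shift into temperature, can be sketched with a linear sensitivity. The ~10 pm/°C figure at 1550 nm is a typical textbook value assumed here, not the calibration from the paper.

```python
# Sketch of converting an FBG Bragg-wavelength shift into a temperature
# reading during thermal-ablation monitoring. The sensitivity is an
# assumed typical value, not the paper's calibration.

SENSITIVITY_PM_PER_C = 10.0   # Bragg shift per degree Celsius at ~1550 nm

def temperature_rise(shift_pm):
    """Linear strain-free conversion of wavelength shift to temperature rise."""
    return shift_pm / SENSITIVITY_PM_PER_C

# A 250 pm shift corresponds to a +25 degC rise; from a 37 degC tissue
# baseline that approaches the ablative temperature range.
delta_t = temperature_rise(250.0)
tissue_t = 37.0 + delta_t
```

A real interrogation chain also compensates strain cross-sensitivity and interpolates the Bragg peak; the linear conversion is the final step of that chain.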

  2. Modeling of Throughput in Production Lines Using Response Surface Methodology and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Federico Nuñez-Piña

    2018-01-01

Full Text Available The problem of assigning buffers in a production line to obtain an optimum production rate is a combinatorial problem of type NP-hard, known as the Buffer Allocation Problem. It is of great importance for designers of production systems due to the costs involved in terms of space requirements. In this work, the relationship among the number of buffer slots, the number of workstations, and the production rate is studied. Response surface methodology and an artificial neural network were used to develop predictive models to find optimal throughput values. 360 production rate values for different numbers of buffer slots and workstations were used to obtain a fourth-order mathematical model and an artificial neural network with four hidden layers. Both models perform well in predicting the throughput, although the artificial neural network model shows a better fit (R = 1.0000) than the response surface methodology (R = 0.9996). Moreover, the artificial neural network produces better predictions for data not utilized in the construction of the models. Finally, this study can be used as a guide to forecast the maximum or near-maximum throughput of production lines taking into account the buffer size and the number of machines in the line.
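Once a response-surface model of throughput is fitted, finding a good buffer allocation reduces to optimizing the surrogate over the design grid. The polynomial coefficients below are invented to give a concave surface with an interior optimum; the paper's actual model is fourth order.

```python
# Sketch of using a fitted response-surface model to locate a near-optimal
# (buffers, workstations) configuration. Coefficients are hypothetical.

def throughput(buffers, stations):
    """Illustrative second-order response surface with an interior optimum."""
    return (2.0 + 0.30 * buffers + 0.50 * stations
            - 0.01 * buffers ** 2 - 0.04 * stations ** 2
            + 0.005 * buffers * stations)

# Exhaustive search over the design grid is cheap on the surrogate model,
# even though the underlying Buffer Allocation Problem is NP-hard.
best = max(((throughput(b, s), b, s)
            for b in range(1, 31) for s in range(2, 11)),
           key=lambda t: t[0])
rate, n_buffers, n_stations = best
```

This is the practical payoff of surrogate modelling: each surrogate evaluation replaces a costly simulation of the production line.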

  3. Applying of component system development in object methodology, case study

    Directory of Open Access Journals (Sweden)

    Milan Mišovič

    2013-01-01

Full Text Available Creating target software as a component system has been a very strong requirement throughout the last 20 years of software development. Architectural components are self-contained units that present not only partial and overall system behavior but also cooperate with each other on the basis of their interfaces. Among other things, components have allowed flexible modification of the processes on which system behavior is founded, without disrupting the life of the component system. On the other hand, the component system makes it possible, at design time, to create numerous new connections between components and thus create modified system behaviors. All this enables company management to perform, at design time, the required behavioral changes of processes in accordance with the requirements of changing production and markets. Software development, generally referred to as the SDP (Software Development Process), has two directions. The first, called CBD (Component-Based Development), is dedicated to the development of component-based systems (CBS, Component-based System); the second is the development of software under the influence of SOA (Service-Oriented Architecture). Both directions are equipped with their own development methodologies. The subject of this paper is only the first direction and the application of component-based system development in its object-oriented methodologies. The requirement today is to carry out the development of component-based systems within developed object-oriented methodologies as a dominant style. In some of the known methodologies, however, this development is not completely transparent and is not even recognized as dominant. In some cases, it is corrected by special meta-integration models of component system development into an object methodology. This paper presents a case study

  4. Construction of Gene Regulatory Networks Using Recurrent Neural Networks and Swarm Intelligence.

    Science.gov (United States)

    Khan, Abhinandan; Mandal, Sudip; Pal, Rajat Kumar; Saha, Goutam

    2016-01-01

We have proposed a methodology for the reverse engineering of biologically plausible gene regulatory networks from temporal genetic expression data. We have used established information and the fundamental mathematical theory for this purpose. We have employed the Recurrent Neural Network formalism to extract accurately the underlying dynamics present in the time series expression data. We have introduced a new hybrid swarm intelligence framework for the accurate training of the model parameters. The proposed methodology has been first applied to a small artificial network, and the results obtained suggest that it can produce the best results available in the contemporary literature, to the best of our knowledge. Subsequently, we have implemented our proposed framework on experimental (in vivo) datasets. Finally, we have investigated two medium-sized genetic networks (in silico) extracted from GeneNetWeaver, to understand how the proposed algorithm scales up with network size. Additionally, we have implemented our proposed algorithm with half the number of time points. The results indicate that a 50% reduction in the number of time points does not significantly affect the accuracy of the proposed methodology, with a maximum of just over 15% deterioration in the worst case.
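The core idea, a recurrent-neural-network gene model whose parameters are fitted by swarm intelligence, can be illustrated at toy scale: a one-gene system and a bare-bones particle swarm stand in for the paper's hybrid framework. The dynamics, swarm parameters, and data are all invented.

```python
import math
import random

# Toy sketch: fit the parameters of a one-gene RNN model
#   x(t+1) = sigmoid(w * x(t) + b)
# to a synthetic expression time series using a minimal particle swarm.

random.seed(1)

def step(x, w, b):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))   # RNN update rule

def series(w, b, x0=0.2, n=20):
    xs = [x0]
    for _ in range(n):
        xs.append(step(xs[-1], w, b))
    return xs

TARGET = series(w=2.0, b=-1.0)                    # "observed" expression data

def error(params):
    w, b = params
    return sum((m - t) ** 2 for m, t in zip(series(w, b), TARGET))

def pso(n_particles=20, iters=60):
    pos = [[random.uniform(-4, 4), random.uniform(-4, 4)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=error)
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]                                # inertia
                             + 1.5 * random.random() * (pbest[i][d] - p[d])  # cognitive
                             + 1.5 * random.random() * (gbest[d] - p[d]))    # social
                p[d] += vel[i][d]
                p[d] = max(-10.0, min(10.0, p[d]))   # keep parameters bounded
            if error(p) < error(pbest[i]):
                pbest[i] = p[:]
        gbest = min(pbest, key=error)
    return gbest

w_fit, b_fit = pso()
```

The real framework trains a full weight matrix over many genes; the shape of the problem, trajectory-matching error minimised by a swarm, is the same.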

  5. Methodology for uranium compounds characterization applied to biomedical monitoring

    International Nuclear Information System (INIS)

    Ansoborlo, E.; Chalabreysse, J.; Henge-Napoli, M.H.; Pujol, E.

    1991-01-01

    Chronic exposure and accidental contamination to uranium compounds in the nuclear industry led the authors to develop a methodology for characterizing these compounds for biomedical monitoring. The methodology, based on the recommendations of the ICRP and the assessment of Annual Limit on Intake (ALI) values, involves two main steps: (1) Characterization of the industrial compound, i.e. its physico-chemical properties such as density (g cm⁻³), specific area (m² g⁻¹), X-ray spectrum (crystalline form), solid infrared spectrum (wavelengths and bonds), mass spectrometry (isotopic composition), and particle size distribution, including measurement of the Activity Median Aerodynamic Diameter (AMAD); the ageing and hydration state of some compounds are studied in particular. (2) Study of in vitro solubility in several biochemical media such as bicarbonates, Basal Medium Eagle (BME) used in cell culture, Gamble solvent (a serum simulant) with oxygen bubbling, and Gamble solvent with added superoxide anions O₂⁻. These media make it possible to understand the dissolution mechanisms (oxidation, chelating effects...) and to assign the ICRP classification D, W, or Y. These two steps are essential for biomedical monitoring of both routine and accidental exposure, and for calculating the ALI. Results on UO₃, UF₄ and UO₂ in the French uranium industry are given

  6. Methodology for Evaluating Safety System Operability using Virtual Parameter Network

    International Nuclear Information System (INIS)

    Park, Sukyoung; Heo, Gyunyoung; Kim, Jung Taek; Kim, Tae Wan

    2014-01-01

    KAERI (Korea Atomic Energy Research Institute) and UTK (University of Tennessee, Knoxville) are working on an I-NERI project to address this problem. This research, conducted with KAERI, proposes a methodology that provides an alternative signal when the reliability of some instrumentation cannot be guaranteed. The proposed methodology assumes that several instruments are working normally under the available power supply, because instrumentation survivability itself is not considered. The concept of the Virtual Parameter Network (VPN) is therefore used to identify the associations between plant parameters. This paper is an extended version of a paper submitted at the last KNS meeting: the methodology has been changed and the results of a case study added. In previous research, we used an Artificial Neural Network (ANN) inferential technique as the estimation model, but that model produced different estimates on every run owing to random bias. An Auto-Associative Kernel Regression (AAKR) model, which has the same number of inputs and outputs, is therefore used for estimation. In addition, the importance measures of the previous method depended on the estimation model, whereas those of the improved method are independent of it. In this study, we proposed a methodology to identify the internal state of the power plant when a severe accident happens, and validated it through a case study. An SBLOCA, which contributes strongly to severe accidents, was considered as the initiating event, and the relationships among parameters were identified. The VPN can identify which parameters have to be observed and which parameters can substitute for a missing parameter when some instruments fail in a severe accident. Through the results we have identified that parameters 2, 3 and 4 commonly have high connectivity while

  7. Methodology for Evaluating Safety System Operability using Virtual Parameter Network

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sukyoung; Heo, Gyunyoung [Kyung Hee Univ., Yongin (Korea, Republic of); Kim, Jung Taek [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Tae Wan [Kepco International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2014-05-15

    KAERI (Korea Atomic Energy Research Institute) and UTK (University of Tennessee, Knoxville) are working on an I-NERI project to address this problem. This research, conducted with KAERI, proposes a methodology that provides an alternative signal when the reliability of some instrumentation cannot be guaranteed. The proposed methodology assumes that several instruments are working normally under the available power supply, because instrumentation survivability itself is not considered. The concept of the Virtual Parameter Network (VPN) is therefore used to identify the associations between plant parameters. This paper is an extended version of a paper submitted at the last KNS meeting: the methodology has been changed and the results of a case study added. In previous research, we used an Artificial Neural Network (ANN) inferential technique as the estimation model, but that model produced different estimates on every run owing to random bias. An Auto-Associative Kernel Regression (AAKR) model, which has the same number of inputs and outputs, is therefore used for estimation. In addition, the importance measures of the previous method depended on the estimation model, whereas those of the improved method are independent of it. In this study, we proposed a methodology to identify the internal state of the power plant when a severe accident happens, and validated it through a case study. An SBLOCA, which contributes strongly to severe accidents, was considered as the initiating event, and the relationships among parameters were identified. The VPN can identify which parameters have to be observed and which parameters can substitute for a missing parameter when some instruments fail in a severe accident. Through the results we have identified that parameters 2, 3 and 4 commonly have high connectivity while
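
    The AAKR estimator mentioned in this record admits a compact sketch: each query vector is reconstructed as a kernel-weighted average of fault-free historical observations, so a drifted sensor is pulled back toward the normal relationship between parameters. The memory matrix, bandwidth, and simulated drift below are illustrative assumptions, not data from the study:

    ```python
    import numpy as np

    def aakr(memory, queries, bandwidth=0.5):
        """Auto-Associative Kernel Regression: reconstruct each query vector
        as a Gaussian-kernel-weighted average of historical fault-free
        observations, so the model has the same number of inputs and outputs."""
        d = np.linalg.norm(memory[None, :, :] - queries[:, None, :], axis=2)
        w = np.exp(-d ** 2 / (2 * bandwidth ** 2))
        w /= w.sum(axis=1, keepdims=True)     # normalize weights per query
        return w @ memory

    # fault-free history: sensor B normally reads twice sensor A
    t = np.linspace(0.0, 1.0, 50)
    memory = np.column_stack([t, 2.0 * t])

    # query in which sensor B has drifted high (true state is near [0.5, 1.0])
    query = np.array([[0.5, 1.5]])
    estimate = aakr(memory, query)
    residual = query - estimate   # residuals against the estimate reveal the drift
    ```

    In the VPN setting, such residuals between a measured signal and its estimate are what suggest substituting the estimate for an unreliable instrument.
    
    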

  8. The spatial prediction of landslide susceptibility applying artificial neural network and logistic regression models: A case study of Inje, Korea

    Science.gov (United States)

    Saro, Lee; Woo, Jeon Seong; Kwan-Young, Oh; Moung-Jin, Lee

    2016-02-01

    The aim of this study is to predict landslide susceptibility through spatial analysis, applying a statistical methodology based on GIS. Logistic regression and artificial neural network models were applied and validated to analyze landslide susceptibility in Inje, Korea. Landslide occurrence areas in the study were identified from interpretations of optical remote sensing data (aerial photographs) followed by field surveys. A spatial database of forest, geophysical, soil and topographic data was built for the study area using a Geographical Information System (GIS). These factors were analysed using artificial neural network (ANN) and logistic regression models to generate a landslide susceptibility map, which the study validates by comparison with landslide occurrence areas. The locations of landslide occurrence were divided randomly into a training set (50%) and a test set (50%). The training set was used to build the landslide susceptibility models (artificial neural network and logistic regression), and the test set was retained to validate the prediction map. The validation results revealed that the artificial neural network model (with an accuracy of 80.10%) was better at predicting landslides than the logistic regression model (with an accuracy of 77.05%). Of the weights used in the artificial neural network model, `slope' yielded the highest weight value (1.330), and `aspect' yielded the lowest value (1.000). This research applied two statistical analysis methods in a GIS and compared their results. Based on the findings, we were able to derive a more effective method for analyzing landslide susceptibility.

  9. The spatial prediction of landslide susceptibility applying artificial neural network and logistic regression models: A case study of Inje, Korea

    Directory of Open Access Journals (Sweden)

    Saro Lee

    2016-02-01

    Full Text Available The aim of this study is to predict landslide susceptibility through spatial analysis, applying a statistical methodology based on GIS. Logistic regression and artificial neural network models were applied and validated to analyze landslide susceptibility in Inje, Korea. Landslide occurrence areas in the study were identified from interpretations of optical remote sensing data (aerial photographs) followed by field surveys. A spatial database of forest, geophysical, soil and topographic data was built for the study area using a Geographical Information System (GIS). These factors were analysed using artificial neural network (ANN) and logistic regression models to generate a landslide susceptibility map, which the study validates by comparison with landslide occurrence areas. The locations of landslide occurrence were divided randomly into a training set (50%) and a test set (50%). The training set was used to build the landslide susceptibility models (artificial neural network and logistic regression), and the test set was retained to validate the prediction map. The validation results revealed that the artificial neural network model (with an accuracy of 80.10%) was better at predicting landslides than the logistic regression model (with an accuracy of 77.05%). Of the weights used in the artificial neural network model, ‘slope’ yielded the highest weight value (1.330), and ‘aspect’ yielded the lowest value (1.000). This research applied two statistical analysis methods in a GIS and compared their results. Based on the findings, we were able to derive a more effective method for analyzing landslide susceptibility.
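
    The logistic-regression half of this workflow (random 50/50 split, fit on the training half, score on the test half) can be sketched on synthetic data. The two factors, their coefficients, and the data-generating model below are invented for illustration and bear no relation to the Inje dataset:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 400
    slope = rng.uniform(0, 1, n)          # hypothetical normalized slope factor
    wetness = rng.uniform(0, 1, n)        # hypothetical wetness proxy
    X = np.column_stack([np.ones(n), slope, wetness])   # intercept + 2 factors

    # synthetic ground truth: steeper, wetter cells slide more often
    p_true = 1.0 / (1.0 + np.exp(-(6.0 * slope + 3.0 * wetness - 5.0)))
    y = (rng.random(n) < p_true).astype(float)

    # random 50/50 split into training and test sets, as in the study design
    idx = rng.permutation(n)
    tr, te = idx[: n // 2], idx[n // 2 :]

    # batch gradient descent on the logistic log-loss
    w = np.zeros(3)
    for _ in range(5000):
        pred = 1.0 / (1.0 + np.exp(-X[tr] @ w))
        w -= 0.1 * X[tr].T @ (pred - y[tr]) / len(tr)

    # test-set accuracy of the fitted susceptibility classifier
    acc = np.mean(((1.0 / (1.0 + np.exp(-X[te] @ w))) > 0.5) == (y[te] == 1))
    ```

    A susceptibility map is then just the fitted probability evaluated on every grid cell of the spatial database.
    
    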

  10. Towards the integration of social network analysis in an inter-organizational networks perspective

    DEFF Research Database (Denmark)

    Bergenholtz, Carsten; Waldstrøm, Christian

    This conceptual paper deals with the issue of studying inter-organizational networks while applying social network analysis (SNA). SNA is a widely recognized technique in network research, particularly within intra-organizational settings, while there seems to be a significant gap in the inter-organizational setting. Based on a literature review of both SNA as a methodology and/or theory and the field of inter-organizational networks, the aim is to gain an overview in order to provide a clear setting for SNA in inter-organizational research.

  11. Group method of data handling and neural networks applied in monitoring and fault detection in sensors in nuclear power plants

    International Nuclear Information System (INIS)

    Bueno, Elaine Inacio

    2011-01-01

    The increasing demands on the complexity, efficiency and reliability of modern industrial systems have stimulated studies on control theory applied to the development of monitoring and fault detection systems. In this work, a new monitoring and fault detection methodology was developed using the GMDH (Group Method of Data Handling) algorithm and Artificial Neural Networks (ANNs), and applied to the IEA-R1 research reactor at IPEN. The monitoring and fault detection system was developed in two parts: the first preprocesses information using the GMDH algorithm, and the second processes the information using ANNs. The GMDH algorithm was used in two different ways: first, to generate a better-estimated database, called matrix z, which was used to train the ANNs; and second, to study the best set of variables for training the ANNs, resulting in a better estimate of the monitored variable. The methodology was developed and tested using five different models: one theoretical model and four models using different sets of reactor variables. After an exhaustive study dedicated to sensor monitoring, fault detection in sensors was developed by simulating faults in the sensor database with deviations of 5%, 10%, 15% and 20%. The results obtained using the GMDH algorithm to choose the best input variables for the ANNs were better than those using only ANNs, making it possible to use these methods in the implementation of a new monitoring and fault detection methodology applied to sensors. (author)

  12. Advances in Artificial Neural Networks – Methodological Development and Application

    Directory of Open Access Journals (Sweden)

    Yanbo Huang

    2009-08-01

    Full Text Available Artificial neural networks, as a major soft-computing technology, have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred the development of training algorithms for other networks such as radial basis function, recurrent, feedback, and unsupervised Kohonen self-organizing networks. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications across various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis, to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review of the development history of artificial neural networks is presented, and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks are introduced alongside support vector machines, and the limitations of ANNs are identified. The future of artificial neural network development in tandem with support vector machines will be discussed in conjunction with further applications to food science and engineering, soil and water relationships for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks will be reviewed as well, especially in the fields of agricultural and biological
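
    As a minimal illustration of the multilayer perceptron with backpropagation that this review centres on, the classic XOR problem (which no single-layer perceptron can solve) can be trained in a few lines of NumPy. The layer sizes, learning rate, and iteration count are arbitrary choices for the sketch:

    ```python
    import numpy as np

    # the four XOR patterns: not linearly separable, so a hidden layer is needed
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(0.0, 1.0, (2, 8)), np.zeros(8)   # input -> 8 hidden units
    W2, b2 = rng.normal(0.0, 1.0, (8, 1)), np.zeros(1)   # hidden -> 1 output
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for _ in range(20000):                    # plain batch backpropagation
        h = sigmoid(X @ W1 + b1)              # forward pass: hidden activations
        out = sigmoid(h @ W2 + b2)            # forward pass: network output
        d2 = out - y                          # output delta (cross-entropy loss)
        d1 = (d2 @ W2.T) * h * (1.0 - h)      # hidden delta via the chain rule
        W2 -= lr * (h.T @ d2); b2 -= lr * d2.sum(0)
        W1 -= lr * (X.T @ d1); b1 -= lr * d1.sum(0)
    ```

    The improvements the review surveys (momentum, adaptive learning rates, regularization against over-fitting) all modify this basic loop.
    
    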

  13. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R. [UAZ, Av. Ramon Lopez Velarde No. 801, 98000 Zacatecas (Mexico)

    2006-07-01

    An artificial neural network has been designed, trained and tested to unfold neutron spectra and simultaneously to calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used in designing, training and testing the artificial neural network. The network was designed using the robust design of artificial neural networks methodology, which ensures that the quality of the neural network is taken into account from the design stage. Unlike previous works, here for the first time a group of neural networks was designed and trained to unfold 187 neutron spectra and at the same time to calculate 13 equivalent doses, starting from the count rates coming from the Bonner sphere system, by using a systematic experimental strategy. (Author)

  14. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    International Nuclear Information System (INIS)

    Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R.

    2006-01-01

    An artificial neural network has been designed, trained and tested to unfold neutron spectra and simultaneously to calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used in designing, training and testing the artificial neural network. The network was designed using the robust design of artificial neural networks methodology, which ensures that the quality of the neural network is taken into account from the design stage. Unlike previous works, here for the first time a group of neural networks was designed and trained to unfold 187 neutron spectra and at the same time to calculate 13 equivalent doses, starting from the count rates coming from the Bonner sphere system, by using a systematic experimental strategy. (Author)

  15. Study of input variables in group method of data handling methodology

    International Nuclear Information System (INIS)

    Pereira, Iraci Martinez; Bueno, Elaine Inacio

    2013-01-01

    The Group Method of Data Handling - GMDH is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has traditionally been determined by a layer-by-layer pruning process based on a pre-selected criterion of what constitutes the best nodes at each level. The traditional GMDH method rests on the underlying assumption that the data can be modeled by an approximation of the Volterra series or Kolmogorov-Gabor polynomial. A monitoring and diagnosis system based on the GMDH and ANN methodologies was developed and applied to the IPEN research reactor IEA-R1. The system performs monitoring by comparing the values calculated by GMDH and ANN with measured ones. As GMDH is a self-organizing methodology, the choice of input variables is made automatically; the results of the ANN methodology, on the other hand, depend strongly on which variables are used as neural network inputs. (author)
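
    A single GMDH layer can be sketched directly from the description above: fit a quadratic Kolmogorov-Gabor polynomial for every pair of candidate inputs by least squares, then rank the nodes on a held-out validation set, so the input variables are selected automatically. The toy data and the `keep` parameter are illustrative assumptions:

    ```python
    import numpy as np
    from itertools import combinations

    def quad_features(X, i, j):
        """Kolmogorov-Gabor quadratic basis for the input pair (i, j)."""
        return np.column_stack([np.ones(len(X)), X[:, i], X[:, j],
                                X[:, i] * X[:, j], X[:, i] ** 2, X[:, j] ** 2])

    def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=2):
        """Fit one polynomial node per input pair on the training half and
        rank the nodes by validation error (the layer-pruning criterion)."""
        nodes = []
        for i, j in combinations(range(X_tr.shape[1]), 2):
            coef, *_ = np.linalg.lstsq(quad_features(X_tr, i, j), y_tr, rcond=None)
            err = np.mean((quad_features(X_va, i, j) @ coef - y_va) ** 2)
            nodes.append((err, (i, j), coef))
        nodes.sort(key=lambda node: node[0])
        return nodes[:keep]

    # toy data: the target depends on variables 0 and 1 only; 2 is irrelevant
    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, (200, 3))
    y = X[:, 0] * X[:, 1]
    best = gmdh_layer(X[:100], y[:100], X[100:], y[100:])
    ```

    Because the node built on the informative pair ranks first, the layer has effectively chosen the input variables on its own, which is the self-organizing property the abstract contrasts with the ANN approach.
    
    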

  16. Development of the fire PSA methodology and the fire analysis computer code system

    International Nuclear Information System (INIS)

    Katsunori, Ogura; Tomomichi, Ito; Tsuyoshi, Uchida; Yusuke, Kasagawa

    2009-01-01

    A fire PSA methodology has been developed and applied to NPPs in Japan for power operation and low-power/shutdown (LPSD) states. The CDFs of the preliminary fire PSA for power operation were higher than those for internal events. A fire propagation analysis code system (CFAST/FDS Network) is being developed and verified through the OECD PRISME project. Extension of the scope to the LPSD state is planned in order to determine the risk level. To determine the fire risk level precisely, enhancements of the methodology are planned: verification and validation of the phenomenological fire propagation analysis code (CFAST/FDS Network) in the context of fire PSA, and application of the 'Electric Circuit Analysis' of NUREG/CR-6850 and related tests in order to quantify the hot-short effect precisely. Development of a seismic-induced fire PSA method, integrating the existing seismic PSA and fire PSA methods, is ongoing. Fire PSA will be applied to review the validity of fire prevention and mitigation measures.

  17. Applying a life cycle decision methodology to Fernald waste management alternatives

    International Nuclear Information System (INIS)

    Yuracko, K.L.; Gresalfi, M.; Yerace, P.

    1996-01-01

    During the past five years, a number of U.S. Department of Energy (DOE) funded efforts have demonstrated the technical efficacy of converting various forms of radioactive scrap metal (RSM) into useable products. From the development of large accelerator shielding blocks to the construction of low-level waste containers, technology has been applied to this fabrication process in a safe and stakeholder-supported manner, and the potential health and safety risks to both workers and the public have been addressed. The question remains: can products be fabricated from RSM in a cost-efficient and market-competitive manner? This paper presents a methodology for use within DOE to evaluate the costs and benefits of recycling and reusing some RSM, rather than disposing of it in an approved burial site. This life cycle decision methodology, developed by the Oak Ridge National Laboratory (ORNL) and DOE Fernald, is the focus of the following analysis.

  18. Leveraging the Methodological Affordances of Facebook: Social Networking Strategies in Longitudinal Writing Research

    Science.gov (United States)

    Sheffield, Jenna Pack; Kimme Hea, Amy C.

    2016-01-01

    While composition studies researchers have examined the ways social media are impacting our lives inside and outside of the classroom, less attention has been given to the ways in which social media--specifically Social Network Sites (SNSs)--may enhance our own research methods and methodologies by helping to combat research participant attrition…

  19. A bio-inspired methodology of identifying influential nodes in complex networks.

    Directory of Open Access Journals (Sweden)

    Cai Gao

    Full Text Available How to identify influential nodes is a key issue in complex networks. Degree centrality is simple but incapable of reflecting the global characteristics of networks; betweenness centrality and closeness centrality do not consider the location of nodes in the networks; and the semi-local centrality, LeaderRank and PageRank approaches can only be applied to unweighted networks. In this paper, a bio-inspired centrality measure model is proposed, which combines Physarum centrality with the K-shell index obtained by K-shell decomposition analysis, to identify influential nodes in weighted networks. We then use the Susceptible-Infected (SI) model to evaluate the performance. Examples and applications are given to demonstrate the adaptivity and efficiency of the proposed method, and the results are compared with existing methods.
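
    The K-shell index that the proposed measure combines with Physarum centrality can be computed by iterative peeling. The sketch below covers only that half of the model, on a small hypothetical undirected graph:

    ```python
    def k_shell(adj):
        """K-shell decomposition by iterative peeling: repeatedly remove every
        node whose remaining degree is <= k; nodes removed at higher k sit
        closer to the network core and tend to be more influential spreaders."""
        adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
        shell, k = {}, 0
        while adj:
            k = max(k, min(len(vs) for vs in adj.values()))
            while adj and min(len(vs) for vs in adj.values()) <= k:
                for u in [u for u, vs in adj.items() if len(vs) <= k]:
                    shell[u] = k
                    for v in adj.pop(u):              # detach u from its neighbours
                        adj[v].discard(u)
        return shell

    # triangle core {a, b, c}, pendant node d, isolated node e
    graph = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"},
             "d": {"a"}, "e": set()}
    shells = k_shell(graph)
    ```

    The paper's full measure additionally weights nodes by a Physarum-inspired flow computation and evaluates spreading with the SI model; those parts are not sketched here.
    
    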

  20. Auditory Hallucinations and the Brain’s Resting-State Networks: Findings and Methodological Observations

    Science.gov (United States)

    Alderson-Day, Ben; Diederen, Kelly; Fernyhough, Charles; Ford, Judith M.; Horga, Guillermo; Margulies, Daniel S.; McCarthy-Jones, Simon; Northoff, Georg; Shine, James M.; Turner, Jessica; van de Ven, Vincent; van Lutterveld, Remko; Waters, Flavie; Jardri, Renaud

    2016-01-01

    In recent years, there has been increasing interest in the potential for alterations to the brain’s resting-state networks (RSNs) to explain various kinds of psychopathology. RSNs provide an intriguing new explanatory framework for hallucinations, which can occur in different modalities and population groups, but which remain poorly understood. This collaboration from the International Consortium on Hallucination Research (ICHR) reports on the evidence linking resting-state alterations to auditory hallucinations (AH) and provides a critical appraisal of the methodological approaches used in this area. In the report, we describe findings from resting connectivity fMRI in AH (in schizophrenia and nonclinical individuals) and compare them with findings from neurophysiological research, structural MRI, and research on visual hallucinations (VH). In AH, various studies show resting connectivity differences in left-hemisphere auditory and language regions, as well as atypical interaction of the default mode network and RSNs linked to cognitive control and salience. As the latter are also evident in studies of VH, this points to a domain-general mechanism for hallucinations alongside modality-specific changes to RSNs in different sensory regions. However, we also observed high methodological heterogeneity in the current literature, affecting the ability to make clear comparisons between studies. To address this, we provide some methodological recommendations and options for future research on the resting state and hallucinations. PMID:27280452

  1. CellNet: Network Biology Applied to Stem Cell Engineering

    Science.gov (United States)

    Cahan, Patrick; Li, Hu; Morris, Samantha A.; da Rocha, Edroaldo Lummertz; Daley, George Q.; Collins, James J.

    2014-01-01

    SUMMARY Somatic cell reprogramming, directed differentiation of pluripotent stem cells, and direct conversions between differentiated cell lineages represent powerful approaches to engineer cells for research and regenerative medicine. We have developed CellNet, a network biology platform that more accurately assesses the fidelity of cellular engineering than existing methodologies and generates hypotheses for improving cell derivations. Analyzing expression data from 56 published reports, we found that cells derived via directed differentiation more closely resemble their in vivo counterparts than products of direct conversion, as reflected by the establishment of target cell-type gene regulatory networks (GRNs). Furthermore, we discovered that directly converted cells fail to adequately silence expression programs of the starting population, and that the establishment of unintended GRNs is common to virtually every cellular engineering paradigm. CellNet provides a platform for quantifying how closely engineered cell populations resemble their target cell type and a rational strategy to guide enhanced cellular engineering. PMID:25126793

  2. CellNet: network biology applied to stem cell engineering.

    Science.gov (United States)

    Cahan, Patrick; Li, Hu; Morris, Samantha A; Lummertz da Rocha, Edroaldo; Daley, George Q; Collins, James J

    2014-08-14

    Somatic cell reprogramming, directed differentiation of pluripotent stem cells, and direct conversions between differentiated cell lineages represent powerful approaches to engineer cells for research and regenerative medicine. We have developed CellNet, a network biology platform that more accurately assesses the fidelity of cellular engineering than existing methodologies and generates hypotheses for improving cell derivations. Analyzing expression data from 56 published reports, we found that cells derived via directed differentiation more closely resemble their in vivo counterparts than products of direct conversion, as reflected by the establishment of target cell-type gene regulatory networks (GRNs). Furthermore, we discovered that directly converted cells fail to adequately silence expression programs of the starting population and that the establishment of unintended GRNs is common to virtually every cellular engineering paradigm. CellNet provides a platform for quantifying how closely engineered cell populations resemble their target cell type and a rational strategy to guide enhanced cellular engineering. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Simulation and Optimization Methodologies for Military Transportation Network Routing and Scheduling and for Military Medical Services

    National Research Council Canada - National Science Library

    Rodin, Ervin Y

    2005-01-01

    The purpose of the present research was to develop a generic model and methodology for analyzing and optimizing large-scale air transportation networks, including both their routing and their scheduling...

  4. Artificial intelligence methodologies applied to quality control of the positioning services offered by the Red Andaluza de Posicionamiento (RAP) network

    Directory of Open Access Journals (Sweden)

    Antonio José Gil

    2012-12-01

    Full Text Available On April 26, 2012, Elena Giménez de Ory defended her Ph.D. thesis at the University of Jaén, entitled “Robust methodologies applied to quality control of the positioning services offered by the Red Andaluza de Posicionamiento (RAP) network”. Elena Giménez de Ory defended her dissertation in a publicly open presentation held in the Higher Polytechnic School at the University of Jaén, and was able to answer every question raised by her thesis committee and the audience. The thesis was supervised by her advisor, Prof. Antonio J. Gil Cruz; the rest of her thesis committee comprised Prof. Manuel Sánchez de la Orden, Dr. Antonio Miguel Ruiz Armenteros and Dr. Gracia Rodríguez Caderot. The thesis was read and approved by her thesis committee, receiving the highest rating. All of them were present at the presentation.

  5. A reverse engineering algorithm for neural networks, applied to the subthalamopallidal network of basal ganglia.

    Science.gov (United States)

    Floares, Alexandru George

    2008-01-01

    Modeling neural networks with ordinary differential equations systems is a sensible approach, but also very difficult. This paper describes a new algorithm based on linear genetic programming which can be used to reverse engineer neural networks. The RODES algorithm automatically discovers the structure of the network, including neural connections, their signs and strengths, estimates its parameters, and can even be used to identify the biophysical mechanisms involved. The algorithm is tested on simulated time series data, generated using a realistic model of the subthalamopallidal network of basal ganglia. The resulting ODE system is highly accurate, and results are obtained in a matter of minutes. This is because the problem of reverse engineering a system of coupled differential equations is reduced to one of reverse engineering individual algebraic equations. The algorithm allows the incorporation of common domain knowledge to restrict the solution space. To our knowledge, this is the first time a realistic reverse engineering algorithm based on linear genetic programming has been applied to neural networks.

  6. The application of network teaching in applied optics teaching

    Science.gov (United States)

    Zhao, Huifu; Piao, Mingxu; Li, Lin; Liu, Dongmei

    2017-08-01

    Network technology has become a transformative tool for human productivity; its rapid development has brought profound changes to our learning, working and life. Network technology has many advantages, such as rich content, varied forms, convenient retrieval, timely communication and efficient combination of resources. Network information resources have become new educational resources, are increasingly applied in education, and now serve as tools for both teaching and learning. Network teaching enriches the teaching content and transforms the teaching process from traditional knowledge explanation into one of establishing situations, independence and cooperation on a network technology platform. The teacher's role has shifted from classroom lecturing to guiding students to learn better. The network environment only provides a good platform for teaching; a better teaching effect is obtained only by constantly improving the teaching content. Changchun University of Science and Technology introduced a BB teaching platform, on which the whole process of applied optics classroom teaching can be improved. Teachers assign homework online, and students learn independently offline or cooperatively in groups, which expands the time and space of teaching. Teachers present the related knowledge of applied optics in hypertext form, with rich cases and learning resources, and set up the network interactive platform, homework submission system, message board, etc. The teaching platform stimulates students' interest in learning and strengthens interaction in teaching.

  7. A methodology for the synthesis of heat exchanger networks having large numbers of uncertain parameters

    International Nuclear Information System (INIS)

    Novak Pintarič, Zorka; Kravanja, Zdravko

    2015-01-01

    This paper presents a robust computational methodology for the synthesis and design of flexible HENs (Heat Exchanger Networks) having large numbers of uncertain parameters. The methodology combines several heuristic methods which progressively lead to a flexible HEN design at a specified level of confidence. In the first step, a HEN topology is generated under nominal conditions, followed by determination of the points critical for flexibility. A significantly reduced multi-scenario model for flexible HEN design is then formulated at the nominal point with flexibility constraints at the critical points. The optimal design obtained is tested by stochastic Monte Carlo optimization and by the flexibility index, solving one-scenario problems within a loop. The methodology is novel in the enormous reduction it achieves in the number of scenarios and in the computational effort of HEN design problems. Despite several simplifications, the capability of designing flexible HENs with large numbers of uncertain parameters, which are typical throughout industry, is not compromised. An illustrative case study is presented for flexible HEN synthesis comprising 42 uncertain parameters. - Highlights: • Methodology for HEN (Heat Exchanger Network) design under uncertainty is presented. • The main benefit is solving HENs having large numbers of uncertain parameters. • A drastically reduced multi-scenario HEN design problem is formulated through several steps. • Flexibility of the HEN is guaranteed at a specific level of confidence.
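The final verification step, a stochastic Monte Carlo test of a fixed design solved one scenario at a time, can be illustrated with a toy feasibility check. The area formula, parameter distributions, and all numbers below are assumptions for demonstration, not the paper's HEN model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical design check: the installed exchanger area must cover the
# area required under uncertain inlet temperature and flow (illustrative
# monotone relation, not a real HEN sizing equation).
A_design = 85.0  # m^2, fixed at the nominal point plus critical points

def required_area(T_in, flow):
    # toy relation: hotter inlet / more flow -> more area needed
    return 80.0 + 0.15 * (T_in - 300.0) + 25.0 * (flow - 1.0)

# Monte Carlo test: sample the uncertain parameters and solve one
# feasibility "scenario" at a time.
T_in = rng.normal(300.0, 10.0, 10_000)   # K
flow = rng.normal(1.0, 0.1, 10_000)      # kg/s
feasible = required_area(T_in, flow) <= A_design
confidence = feasible.mean()
print(f"fraction of feasible scenarios: {confidence:.3f}")
```

The fraction of feasible scenarios estimates the confidence level at which the fixed design remains flexible, without ever forming one huge multi-scenario optimization problem.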

  8. Unattended Monitoring System Design Methodology

    International Nuclear Information System (INIS)

    Drayer, D.D.; DeLand, S.M.; Harmon, C.D.; Matter, J.C.; Martinez, R.L.; Smith, J.D.

    1999-01-01

    A methodology for designing Unattended Monitoring Systems starting at the systems level has been developed at Sandia National Laboratories. This proven methodology provides a template that describes the process for selecting and applying appropriate technologies to meet unattended system requirements, as well as a framework for developing both training courses and workshops associated with unattended monitoring. The design and implementation of unattended monitoring systems is generally intended to respond to some form of policy-based requirements resulting from international agreements or domestic regulations. Once the monitoring requirements are established, a review of the associated process and its related facilities enables identification of strategic monitoring locations and development of a conceptual system design. The detailed design effort results in the definition of detection components as well as the supporting communications network and data management scheme. The data analysis then enables a coherent display of the knowledge generated during the monitoring effort. The resultant knowledge is compared with the original system objectives to ensure that the design adequately addresses the fundamental principles stated in the policy agreements. Implementation of this design methodology will ensure that comprehensive unattended monitoring system designs provide appropriate answers to the critical questions imposed by specific agreements or regulations. This paper describes the main features of the methodology and discusses how it can be applied in real-world situations.

  9. Applying policy network theory to policy-making in China: the case of urban health insurance reform.

    Science.gov (United States)

    Zheng, Haitao; de Jong, Martin; Koppenjan, Joop

    2010-01-01

    In this article, we explore whether policy network theory can be applied in the People's Republic of China (PRC). We first review how this approach has been treated in the Chinese policy sciences thus far. We then present the key concepts and research approach of policy network theory in the Western literature and apply them to a Chinese case to test the fit, following with a description and analysis of the policy-making process of the urban health insurance reform in China from 1998 until the present. Based on this case study, we argue that this body of theory is useful for describing and explaining policy-making processes in the Chinese context. However, the generic model shows limitations in capturing the fundamentally different political and administrative systems and the crucially different cultural values, which affect the applicability of some research methods common in Western countries. Finally, we address which political and cultural aspects turn out to be different in the PRC and how they lead to methodological and practical problems that PRC researchers will encounter when studying decision-making processes.

  10. Degradation of ticarcillin by the subcritical water oxidation method: Application of response surface methodology and artificial neural network modeling.

    Science.gov (United States)

    Yabalak, Erdal

    2018-05-18

    This study investigated the mineralization of ticarcillin in artificially prepared aqueous solution, representing ticarcillin-contaminated waters, which constitute a serious problem for human health. Removals of 81.99% of total organic carbon, 79.65% of chemical oxygen demand, and 94.35% of ticarcillin were achieved using the eco-friendly, time-saving, powerful, and easy-to-apply subcritical water oxidation method in the presence of a safe-to-use oxidizing agent, hydrogen peroxide. Central composite design, which belongs to response surface methodology, was applied to design the degradation experiments, optimize the method, and evaluate the effects of the system variables, namely temperature, hydrogen peroxide concentration, and treatment time, on the responses. In addition, theoretical equations were proposed for each removal process. ANOVA tests were used to evaluate the reliability of the fitted models: F values of 245.79, 88.74, and 48.22 were found for total organic carbon removal, chemical oxygen demand removal, and ticarcillin removal, respectively. Moreover, artificial neural network modeling was applied to estimate the response in each case, and its prediction and optimization performance was statistically examined and compared with that of the central composite design.
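The pairing of a central composite design with a quadratic response surface can be sketched as follows. The two coded factors and the synthetic response model are illustrative assumptions, not the paper's ticarcillin data:

```python
import numpy as np
from itertools import product

# Central-composite-style design for two coded factors (standing in for,
# e.g., temperature and H2O2 concentration): factorial, axial, and
# replicated center points.
alpha = np.sqrt(2.0)
factorial = list(product([-1.0, 1.0], repeat=2))
axial = [(-alpha, 0.0), (alpha, 0.0), (0.0, -alpha), (0.0, alpha)]
center = [(0.0, 0.0)] * 3
X = np.array(factorial + axial + center)

def true_response(x1, x2):
    # assumed quadratic surface used only to generate demo data
    return 70 + 5*x1 + 3*x2 - 2*x1**2 - 1*x2**2 + 0.5*x1*x2

y = true_response(X[:, 0], X[:, 1])

# Full quadratic model matrix: 1, x1, x2, x1^2, x2^2, x1*x2
D = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0]*X[:, 1]])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
print(np.round(beta, 2))  # recovers [70, 5, 3, -2, -1, 0.5]
```

The CCD geometry (factorial plus axial plus center runs) is exactly what makes all six quadratic coefficients identifiable from only eleven experiments.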

  11. Pattern recognition and data mining software based on artificial neural networks applied to proton transfer in aqueous environments

    International Nuclear Information System (INIS)

    Tahat Amani; Marti Jordi; Khwaldeh Ali; Tahat Kaher

    2014-01-01

    In computational physics, proton transfer phenomena can be viewed as pattern classification problems in which a set of input features allows the proton motion to be classified into two categories: transfer ‘occurred’ and transfer ‘not occurred’. The goal of this paper is to evaluate artificial neural networks for the classification of proton transfer events, using a feed-forward back-propagation neural network as a classifier to distinguish between the two transfer cases. We use a newly developed data mining and pattern recognition tool for automating, controlling, and charting the output data of an existing Empirical Valence Bond code. The study analyzes the need for pattern recognition in aqueous proton transfer processes and how the error back-propagation learning approach (multilayer perceptron algorithms) can be satisfactorily employed in the present case. We present a tool for pattern recognition and validate the code on a real physical case study. The results of applying the artificial neural network methodology to patterns based on selected physical properties (e.g., temperature, density) show the ability of the network to learn proton transfer patterns corresponding to properties of the aqueous environments, which in turn proves fully compatible with previous proton transfer studies. (condensed matter: structural, mechanical, and thermal properties)
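A feed-forward back-propagation classifier of the kind described can be sketched in plain NumPy. The two-feature toy data below stand in for the real input features, and the network size and learning rate are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class problem standing in for "transfer occurred / not occurred"
# labels; each row is a feature vector, not real EVB output.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer (2 -> 8 -> 1), trained by plain batch backpropagation.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(2000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (gradient of cross-entropy with sigmoid output)
    d_out = (out - y) / len(X)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

accuracy = ((out > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In practice the output probability would be thresholded per frame of the trajectory to label each candidate event as transfer or non-transfer.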

  12. Earthquake Complex Network applied along the Chilean Subduction Zone.

    Science.gov (United States)

    Martin, F.; Pasten, D.; Comte, D.

    2017-12-01

    In recent years, earthquake complex networks have been used as a useful tool to describe and characterize the behavior of seismicity. The earthquake complex network is built in space by dividing the three-dimensional volume into cubic cells; a cell that contains a hypocenter becomes a node. Connections between nodes follow the time sequence of the seismic events, so the network is a spatio-temporal representation of the seismicity of a specific region. In this work we apply complex networks to characterize the subduction zone along the coast of Chile using two networks: a directed and an undirected one. The directed network takes the time direction of the connections into account, which is very important for the connectivity of the network: the connectivity k_i of the i-th node is the number of connections going out from node i, plus the self-connections (if two seismic events occur successively in time in the same cubic cell, we count a self-connection). The undirected network results from removing both the direction of the connections and the self-connections from the directed network. The two networks were built from seismic events recorded by the CSN (Chilean Seismological Center), including the last large earthquakes that occurred in Iquique (April 2014) and Illapel (September 2015). The result for the directed network shows a change in the value of the critical exponent along the Chilean coast, while the undirected network shows small-world behavior without important changes in the topology of the network. The complex network analysis therefore offers a new and simple way to characterize the Chilean subduction zone, which can be compared with other methods to obtain more detail about the behavior of the seismicity in this region.
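The construction described (cubic-cell nodes, time-ordered directed links including self-connections, and an undirected version with direction and self-connections removed) can be sketched as follows, using synthetic hypocenters in place of the CSN catalogue:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

# Synthetic hypocenters (x, y, depth), assumed already ordered by origin
# time; a real input would be the CSN catalogue.
events = rng.uniform([0, 0, 0], [100, 100, 50], size=(500, 3))

cell = 10.0  # cubic cell size
nodes = [tuple((ev // cell).astype(int)) for ev in events]  # node per event

# Directed network: link each event's cell to the next event's cell in
# time, keeping self-connections when consecutive events share a cell.
directed = Counter(zip(nodes[:-1], nodes[1:]))

# Out-degree k_i: connections going out from node i, self-loops included.
k_out = Counter()
for (src, _dst), weight in directed.items():
    k_out[src] += weight

# Undirected version: drop both direction and self-connections.
undirected = {frozenset(edge) for edge in directed if edge[0] != edge[1]}
print(len(directed), len(undirected), max(k_out.values()))
```

The degree distribution of `k_out` is what one would fit to estimate the critical exponent along different coastal segments.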

  13. Partial Information Community Detection in a Multilayer Network

    Science.gov (United States)

    2016-06-01

    Only front-matter fragments (table-of-contents entries and acknowledgments) were extracted for this record. The recoverable details indicate a thesis from the Applied Mathematics Department at the Naval Postgraduate School that examines the topology of the Noordin Top terrorist network and evaluates four discovery algorithms for finding red vertices in a synthetic network under partial information.

  14. Assessment of network perturbation amplitudes by applying high-throughput data to causal biological networks

    Directory of Open Access Journals (Sweden)

    Martin Florian

    2012-05-01

    Full Text Available Abstract Background High-throughput measurement technologies produce data sets that have the potential to elucidate the biological impact of disease, drug treatment, and environmental agents on humans. The scientific community faces an ongoing challenge in the analysis of these rich data sources to more accurately characterize biological processes that have been perturbed at the mechanistic level. Here, a new approach is built on previous methodologies in which high-throughput data was interpreted using prior biological knowledge of cause and effect relationships. These relationships are structured into network models that describe specific biological processes, such as inflammatory signaling or cell cycle progression. This enables quantitative assessment of network perturbation in response to a given stimulus. Results Four complementary methods were devised to quantify treatment-induced activity changes in processes described by network models. In addition, companion statistics were developed to qualify significance and specificity of the results. This approach is called Network Perturbation Amplitude (NPA) scoring because the amplitudes of treatment-induced perturbations are computed for biological network models. The NPA methods were tested on two transcriptomic data sets: normal human bronchial epithelial (NHBE) cells treated with the pro-inflammatory signaling mediator TNFα, and HCT116 colon cancer cells treated with the CDK cell cycle inhibitor R547. Each data set was scored against network models representing different aspects of inflammatory signaling and cell cycle progression, and these scores were compared with independent measures of pathway activity in NHBE cells to verify the approach. The NPA scoring method successfully quantified the amplitude of TNFα-induced perturbation for each network model when compared against NF-κB nuclear localization and cell number. In addition, the degree and specificity to which CDK

  15. A SIMULATION OF THE PENICILLIN G PRODUCTION BIOPROCESS APPLYING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    A.J.G. da Cruz

    1997-12-01

    Full Text Available The production of penicillin G by Penicillium chrysogenum IFO 8644 was simulated employing a feedforward neural network with three layers. The neural network training procedure used an algorithm combining two procedures: random search and backpropagation. The results of this approach were very promising, and the neural network was able to accurately describe the nonlinear behavior of the process. The results also showed that the technique can be successfully applied to process control algorithms, despite its long processing time, thanks to its flexibility in the incorporation of new data.

  16. Methodological aspects of network assets accounting

    Directory of Open Access Journals (Sweden)

    Yuhimenko-Nazaruk I.A.

    2017-08-01

    Full Text Available The article substantiates the need for innovative tools for processing and representing information about network assets and presents suggestions for displaying network assets in the accounts. The main reasons why network assets must be displayed in the financial statements of all members of a network structure are identified: the economic essence of network assets as an object of accounting, the non-additive model for forming the value of network assets, and the internetwork mechanism by which that value is formed. The stages of the accounting valuation of network assets are identified and substantiated. An analytical table for estimating the value of network assets and additional network capital in accounting is developed, together with the order in which additional network capital is reflected in the accounts. The method of revaluing network assets in accounting in the broad sense is described, and the accounting treatment of network assets when the number of participants in the network structure increases or decreases is determined.

  17. Energy retrofit of commercial buildings. Case study and applied methodology

    Energy Technology Data Exchange (ETDEWEB)

    Aste, N.; Del Pero, C. [Department of Building Environment Science and Technology (BEST), Politecnico di Milano, Via Bonardi 3, 20133 Milan (Italy)

    2013-05-15

    Commercial buildings are responsible for a significant share of the energy requirements of European Union countries. The related consumption due to heating, cooling, and lighting is in most cases very high and expensive. Since the building stock is renewed at a very small rate each year and current trends favor reusing old structures, strategies for improving energy efficiency and sustainability should focus not only on new buildings but also, and especially, on existing ones. Architectural renovation of existing buildings provides an opportunity to enhance their energy efficiency by improving envelopes and energy supply systems. It should also be noted that measures aimed at improving the energy performance of buildings must pay particular attention to the cost-effectiveness of the interventions. In general, there is a lack of well-established methods for retrofitting, but when a case study achieves effective results, the adopted strategies and methodologies can be successfully replicated for similar kinds of buildings. In this paper, an iterative methodology for the energy retrofit of commercial buildings is presented, together with a specific application to an existing office building. The case study is particularly significant because the building is located in an urban climatic context characterized by cold winters and hot summers; consequently, HVAC energy consumption is considerable throughout the year. The analysis and simulation of energy performance before and after the intervention, along with measured data on real energy performance, demonstrate the validity of the applied approach. The design and refurbishment methodology developed and presented in this work could also serve as a reference for similar operations.

  18. Applied methodology for replacement pipe arcs in integral pipelines TE 'Oslomej'

    Directory of Open Access Journals (Sweden)

    Temelkoska Bratica K.

    2016-01-01

    Full Text Available The integral pipelines in thermal power plants are linear spatial load-bearing structures with high operating parameters and complex static and dynamic loads. Along their entire length, the integral pipelines hang on spring hangers from the boiler building, where the boiler is placed, next to the machine hall where the turbine is placed. It is therefore important to monitor their condition and remove any defects detected by the applied methods. This paper describes the methodology for replacing a pipe arch on one of the integral pipelines, the line for hot superheated steam. The paper also presents the methods that led to this methodology for testing and evaluating the condition of the material of the pipe arch that had been in service and of the new pipe arch to be installed. The approach, the replacement technology, the anchoring of the steam line, the welding technology, and the preparation of the as-built design documentation are also covered.

  19. RDANN a new methodology to solve the neutron spectra unfolding problem

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J.M.; Martinez B, M.R.; Vega C, H.R. [UAZ, Av. Ramon Lopez Velarde No. 801, 98000 Zacatecas (Mexico)

    2006-07-01

    The optimization approaches known as the Taguchi method and DOE (design of experiments) methodology are applied to the design, training, and testing of artificial neural networks in the neutron spectrometry field; they offer potential benefits in evaluating the behavior of the net as well as the ability to examine the interactions of the weights and neurons within it. In this work, the Robust Design of Artificial Neural Networks (RDANN) methodology is used to solve the neutron spectra unfolding problem by designing, training, and testing an ANN on a set of 187 neutron spectra compiled by the International Atomic Energy Agency, in order to obtain the best neutron spectra unfolded from the count rates of a Bonner spheres spectrometer. (Author)
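The DOE idea of evaluating ANN design factors over a structured set of level combinations can be sketched as a factorial loop. The factor names, levels, and the `evaluate` stand-in below are hypothetical, not the RDANN procedure itself:

```python
from itertools import product

# DOE-style factorial search over ANN design factors (levels are
# illustrative; `evaluate` stands in for training and testing a network
# on the 187 IAEA spectra and returning a test error).
factors = {
    "hidden_neurons": [10, 20, 40],
    "learning_rate": [0.01, 0.1, 0.3],
    "momentum": [0.0, 0.5],
}

def evaluate(hidden_neurons, learning_rate, momentum):
    # hypothetical response surface used only for demonstration
    return ((hidden_neurons - 20) ** 2 / 400
            + (learning_rate - 0.1) ** 2
            + momentum * 0.1)

names = list(factors)
designs = [dict(zip(names, levels)) for levels in product(*factors.values())]
best = min(designs, key=lambda d: evaluate(**d))
print(best)  # the factor combination with the lowest error
```

A Taguchi treatment would replace the full factorial with an orthogonal array, covering the same factor space in far fewer runs.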

  20. RDANN a new methodology to solve the neutron spectra unfolding problem

    International Nuclear Information System (INIS)

    Ortiz R, J.M.; Martinez B, M.R.; Vega C, H.R.

    2006-01-01

    The optimization approaches known as the Taguchi method and DOE (design of experiments) methodology are applied to the design, training, and testing of artificial neural networks in the neutron spectrometry field; they offer potential benefits in evaluating the behavior of the net as well as the ability to examine the interactions of the weights and neurons within it. In this work, the Robust Design of Artificial Neural Networks (RDANN) methodology is used to solve the neutron spectra unfolding problem by designing, training, and testing an ANN on a set of 187 neutron spectra compiled by the International Atomic Energy Agency, in order to obtain the best neutron spectra unfolded from the count rates of a Bonner spheres spectrometer. (Author)

  1. Energy consumption control automation using Artificial Neural Networks and adaptive algorithms: Proposal of a new methodology and case study

    International Nuclear Information System (INIS)

    Benedetti, Miriam; Cesarotti, Vittorio; Introna, Vito; Serranti, Jacopo

    2016-01-01

    Highlights: • A methodology to enable energy consumption control automation is proposed. • The methodology is based on the use of Artificial Neural Networks. • A method to control the accuracy of the model over time is proposed. • Two methods to enable automatic retraining of the network are proposed. • Retraining methods are evaluated on their accuracy over time. - Abstract: Energy consumption control in energy-intensive companies is increasingly regarded as a critical activity for continuously improving energy performance. It undoubtedly requires a huge effort in data gathering and analysis, and the volume of data, together with the scarcity of human resources devoted to energy management who could maintain and update the analyses' output, is often the main barrier to its diffusion in companies. Advanced tools such as software based on machine learning techniques are therefore the key to overcoming these barriers and allowing easy but accurate control. Systems of this type can solve complex problems and obtain reliable results over time, but they cannot recognize when the reliability of their results is declining (a common situation for energy-using systems, which often undergo structural changes), nor can they automatically adapt themselves using a limited amount of training data; a completely automatic application is therefore not yet available, and automatic energy consumption control using intelligent systems remains a challenge. This paper presents a whole new approach to energy consumption control, proposing a methodology based on Artificial Neural Networks (ANNs) aimed at creating an automatic energy consumption control system. First, three different neural network structures are proposed and trained using a large amount of data. Three different performance indicators are then used to identify the most suitable structure, which is implemented to create an energy consumption control tool. In addition, considering that
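One simple way to check model accuracy over time, in the spirit of the highlights above, is to monitor a rolling error statistic and flag retraining when it drifts. The sketch below uses assumed numbers and a plain MAPE threshold rather than the paper's method:

```python
import numpy as np

rng = np.random.default_rng(7)

# Monitor the rolling mean absolute percentage error (MAPE) of a
# consumption model and flag retraining when it drifts past a threshold
# (window, threshold, and data are illustrative assumptions).
WINDOW, THRESHOLD = 30, 0.10  # days, 10% MAPE

actual = 100 + rng.normal(0, 2, 120)
actual[60:] += 25                        # structural change after day 60
predicted = 100 + rng.normal(0, 2, 120)  # model trained before the change

ape = np.abs(actual - predicted) / np.abs(actual)
retrain_day = next(
    (d for d in range(WINDOW, len(ape))
     if ape[d - WINDOW:d].mean() > THRESHOLD),
    None,
)
print(f"retraining triggered on day {retrain_day}")
```

Once the flag fires, an automatic pipeline would retrain the network on a window of recent data and resume monitoring with the refreshed model.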

  2. An applied methodology for assessment of the sustainability of biomass district heating systems

    Science.gov (United States)

    Vallios, Ioannis; Tsoutsos, Theocharis; Papadakis, George

    2016-03-01

    In order to maximise the share of biomass in the energy supply system, designers should adapt traditional systems appropriately and become more familiar with the design details of biomass heating systems. The aim of this study is to present the development of a methodology, and its implementation in software, for the design of biomass thermal conversion systems linked to district heating (DH) systems, taking into consideration the types of building structures and the layout of the urban settlement around the plant. The methodology is based on a completely parametric logic, providing an impact assessment of variations in one or more technical and/or economic parameters and thus facilitating a quick conclusion on the viability of a particular energy system. The essential energy parameters for the design of a biomass power and heat production system connected to a DH network are presented and discussed, as well as its environmental and economic evaluation (i.e., the selectivity and viability of the relevant investment). Emphasis is placed on the technical parameters of biomass logistics, the energy system design, the economic details of the selected technology (integrated cogeneration combined cycle or direct combustion boiler), the DH network and peripheral equipment (thermal substations), and the greenhouse gas emissions. The purpose of this implementation is to assess the financial viability of the pertinent investment, taking into account the available biomass feedstock, the economic and market conditions, and the capital and operating costs. As long as biomass resources (forest wood and cultivation products) are available close to the settlement, the disposal and transportation costs of biomass remain low, assuring the sustainability of such energy systems.

  3. Vein matching using artificial neural network in vein authentication systems

    Science.gov (United States)

    Noori Hoshyar, Azadeh; Sulaiman, Riza

    2011-10-01

    Personal identification technology for security systems is developing rapidly. Traditional authentication modes such as keys, passwords, and cards are not safe enough because they can be stolen or easily forgotten. Biometrics, as a developed technology, has been applied to a wide range of systems. According to different researchers, the vein is a good biometric candidate for authentication systems among other traits such as fingerprint, hand geometry, voice, and DNA. Vein authentication systems can be designed with different methodologies, all of which include a matching stage that is crucial for the final verification of the system. The neural network is an effective methodology for matching and recognizing individuals in authentication systems. This paper therefore explains and implements the neural network methodology for a finger vein authentication system. A neural network is trained in Matlab to match the vein features of the authentication system. The network simulation shows a matching quality of 95%, which is a good performance for the matching stage of an authentication system.

  4. Author-paper affiliation network architecture influences the methodological quality of systematic reviews and meta-analyses of psoriasis.

    Directory of Open Access Journals (Sweden)

    Juan Luis Sanz-Cabanillas

    Full Text Available Moderate-to-severe psoriasis is associated with significant comorbidity, an impaired quality of life, and increased medical costs, including those associated with treatments. Systematic reviews (SRs) and meta-analyses (MAs) of randomized clinical trials are considered two of the best approaches to the summarization of high-quality evidence. However, methodological bias can reduce the validity of conclusions from these types of studies and subsequently impair the quality of decision making. As co-authorship is among the most well-documented forms of research collaboration, the present study aimed to explore whether authors' collaboration methods might influence the methodological quality of SRs and MAs of psoriasis. Methodological quality was assessed by two raters who extracted information from full articles. After calculating total and per-item Assessment of Multiple Systematic Reviews (AMSTAR) scores, reviews were classified as low (0-4), medium (5-8), or high (9-11) quality. Article metadata and journal-related bibliometric indices were also obtained. A total of 741 authors from 520 different institutions and 32 countries published 220 reviews that were classified as high (17.2%), moderate (55%), or low (27.7%) methodological quality. The high methodological quality subnetwork was larger but had a lower connection density than the low and moderate methodological quality subnetworks; specifically, the former contained relatively fewer nodes (authors and reviews), reviews by authors, and collaborators per author. Furthermore, the high methodological quality subnetwork was highly compartmentalized, with several modules representing few poorly interconnected communities. In conclusion, structural differences in the author-paper affiliation network may influence the methodological quality of SRs and MAs on psoriasis. As the author-paper affiliation network structure affects study quality in this research field, authors who maintain an appropriate balance

  5. Adding value in oil and gas by applying decision analysis methodologies: case history

    Energy Technology Data Exchange (ETDEWEB)

    Marot, Nicolas [Petro Andina Resources Inc., Alberta (Canada); Francese, Gaston [Tandem Decision Solutions, Buenos Aires (Argentina)

    2008-07-01

    Petro Andina Resources Ltd., together with Tandem Decision Solutions, developed a strategic long-range plan applying decision analysis methodology. The objective was to build a robust and fully integrated strategic plan that accomplishes company growth goals and sets the strategic directions for the long range. The stochastic methodology and the staged Integrated Decision Management (IDM™) approach allowed the company to visualize the value and risk associated with the different strategies while achieving organizational alignment, clarity of action, and confidence in the path forward. A decision team involving both PAR representatives and Tandem consultants was established to carry out this four-month project. Discovery and framing sessions allowed the team to disrupt the status quo, discuss near- and far-reaching ideas, and gather the building blocks from which creative strategic alternatives were developed. A comprehensive stochastic valuation model was developed to assess the potential value of each strategy, applying simulation tools, sensitivity analysis tools, and contingency planning techniques. The final insights and results were used to populate the strategic plan presented to the company board, providing confidence to the team and assuring that the work embodies the best available ideas, data, and expertise, and that the proposed strategy was ready to be elaborated into an optimized course of action. (author)

  6. New designing of E-Learning systems with using network learning

    OpenAIRE

    Malayeri, Amin Daneshmand; Abdollahi, Jalal

    2010-01-01

    One of the most widely applied forms of learning in virtual spaces is the use of E-Learning systems. Several E-Learning methodologies have been introduced, but the main concern is obtaining the most positive feedback from E-Learning systems. In this paper, we introduce a new E-Learning methodology entitled "Network Learning", together with a review of other aspects of E-Learning systems. We also present the benefits and advantages of using these systems in education and fast learning programs. Network Learning can be programmable...

  7. Getting Real: A Naturalistic Methodology for Using Smartphones to Collect Mediated Communications

    Directory of Open Access Journals (Sweden)

    Chad C. Tossell

    2012-01-01

    Full Text Available This paper contributes an intentionally naturalistic methodology using smartphone logging technology to study communications in the wild. Smartphone logging can provide tremendous access to communications data from real environments. However, researchers must consider how it is employed to preserve naturalistic behaviors. Nine considerations are presented to this end. We also provide a description of a naturalistic logging approach that has been applied successfully to collecting mediated communications from iPhones. The methodology was designed to intentionally decrease reactivity and resulted in data that were more accurate than self-reports. Example analyses are also provided to show how the collected data can be analyzed to establish empirical patterns and identify user differences. Smartphone logging technologies offer flexible capabilities to enhance access to real communications data, but methodologies employing these techniques must be designed appropriately to avoid disturbing naturally occurring behaviors. Functionally, this methodology can be applied to establish empirical patterns and test specific hypotheses within the field of HCI research. Topically, it can be applied to domains interested in understanding mediated communications, such as mobile content and systems design, teamwork, and social networks.

  8. Methodological exploratory study applied to occupational epidemiology

    Energy Technology Data Exchange (ETDEWEB)

    Carneiro, Janete C.G. Gaburo; Vasques, Monica Heloisa B.; Fontinele, Ricardo S.; Sordi, Gian Maria A. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)]. E-mail: janetegc@ipen.br

    2007-07-01

    The utilization of epidemiologic methods and techniques has been the object of practical experimentation and theoretical-methodological reflection in the health planning and programming process. Occupational Epidemiology is the study of the causes and prevention of diseases and injuries arising from exposure to risks in the work environment. In this context, there is no intention to exhaust such a complex theme, but rather to deal with basic concepts of Occupational Epidemiology, presenting the main characteristics of the analysis methods used in epidemiology, as well as investigating the possible determinants of exposure (chemical, physical and biological agents). For this study, the socio-demographic profile of the IPEN-CNEN/SP work force was used. The composition of this reference population is characterized by sex, age, educational level, marital status and occupation, with the aim of understanding the relation between health-aggravating factors and these variables. The methodology refers to non-experimental research based on theoretical-methodological practice. The work has an exploratory character, aiming at a later survey of health indicators in order to analyze possible correlations related to epidemiologic issues. (author)

  9. Methodological exploratory study applied to occupational epidemiology

    International Nuclear Information System (INIS)

    Carneiro, Janete C.G. Gaburo; Vasques, Monica Heloisa B.; Fontinele, Ricardo S.; Sordi, Gian Maria A.

    2007-01-01

    The utilization of epidemiologic methods and techniques has been the object of practical experimentation and theoretical-methodological reflection in the health planning and programming process. Occupational Epidemiology is the study of the causes and prevention of diseases and injuries arising from exposure to risks in the work environment. In this context, there is no intention to exhaust such a complex theme, but rather to deal with basic concepts of Occupational Epidemiology, presenting the main characteristics of the analysis methods used in epidemiology, as well as investigating the possible determinants of exposure (chemical, physical and biological agents). For this study, the socio-demographic profile of the IPEN-CNEN/SP work force was used. The composition of this reference population is characterized by sex, age, educational level, marital status and occupation, with the aim of understanding the relation between health-aggravating factors and these variables. The methodology refers to non-experimental research based on theoretical-methodological practice. The work has an exploratory character, aiming at a later survey of health indicators in order to analyze possible correlations related to epidemiologic issues. (author)

  10. Synthesis of methodology development and case studies

    OpenAIRE

    Roetter, R.P.; Keulen, van, H.; Laar, van, H.H.

    2000-01-01

    The Systems Research Network for Ecoregional Land Use Planning in Support of Natural Resource Management in Tropical Asia (SysNet) was financed under the Ecoregional Fund, administered by the International Service for National Agricultural Research (ISNAR). The objective of the project was to develop and evaluate methodologies and tools for land use analysis, and apply them at the subnational scale to support agricultural and environmental policy formulation. In the framework of this projec...

  11. Optimized planning methodologies of ASON implementation

    Science.gov (United States)

    Zhou, Michael M.; Tamil, Lakshman S.

    2005-02-01

    Advanced network planning concerns effective network-resource allocation for a dynamic and open business environment. Planning methodologies for ASON implementation based on qualitative analysis and mathematical modeling are presented in this paper. The methodology includes methods for rationalizing technology and architecture, building network and nodal models, and developing dynamic programming for multi-period deployment. The multi-layered nodal architecture proposed here can accommodate various nodal configurations for a multi-plane optical network, and the network modeling presented here computes the network elements required for optimal resource allocation.
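The multi-period deployment element can be illustrated as a small dynamic program; the demand figures, deployment options and cost model below are invented for illustration and are not from the paper:

```python
from functools import lru_cache

# Hypothetical multi-period deployment plan: pick a capacity increment
# each period so that demand is always covered, minimizing the
# discounted sum of deployment costs (fixed + per-unit).
demand = [10, 14, 19, 26]      # required capacity per period
options = [0, 5, 10, 20]       # deployable capacity increments
fixed, unit, discount = 8.0, 1.0, 0.9

@lru_cache(maxsize=None)
def best(period, capacity):
    """Minimum discounted cost from `period` onward, given installed capacity."""
    if period == len(demand):
        return 0.0
    costs = []
    for add in options:
        cap = capacity + add
        if cap < demand[period]:
            continue  # infeasible: demand must be covered each period
        cost = fixed + unit * add if add else 0.0
        costs.append(cost + discount * best(period + 1, cap))
    return min(costs)

print(f"minimum discounted deployment cost: {best(0, 0):.3f}")
```

Discounting naturally captures the trade-off the abstract alludes to: over-building early (economies of a single large deployment) versus deferring spend to later, cheaper periods.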

  12. Development of a cost efficient methodology to perform allocation of flammable and toxic gas detectors applying CFD tools

    Energy Technology Data Exchange (ETDEWEB)

    Storch, Rafael Brod; Rocha, Gean Felipe Almeida [Det Norske Veritas (DNV), Rio de Janeiro, RJ (Brazil); Nalvarte, Gladys Augusta Zevallos [Det Norske Veritas (DNV), Novik (Norway)

    2012-07-01

    This paper presents a computational procedure for flammable and toxic gas detector allocation and quantification developed by DNV. The proposed methodology applies Computational Fluid Dynamics (CFD) simulations, as well as the operational and safety characteristics of the analyzed region, to assess the optimal number of toxic and flammable gas detectors and their optimal locations. A probabilistic approach is also used when flammable gas detectors are assessed, applying the DNV software ThorEXPRESSLite, following NORSOK Z013 Annex G and as presented in HUSER et al. 2000 and HUSER et al. 2001. A DNV-developed program, DetLoc, runs the procedure described above iteratively, leading to an automatic calculation of gas detector locations and numbers. The main advantage of this methodology is its independence from human interaction in the gas detector allocation, leading to a more precise allocation free of human judgment. Thus, a reproducible allocation is generated when comparing several different analyses, and the application of global criteria is guaranteed across different regions in the same project. A case study applying the proposed methodology is presented. (author)

  13. Three-dimensional stochastic adjustment of volcano geodetic network in Arenal volcano, Costa Rica

    Science.gov (United States)

    Muller, C.; van der Laat, R.; Cattin, P.-H.; Del Potro, R.

    2009-04-01

    Volcano geodetic networks are a key instrument to understanding magmatic processes and, thus, forecasting potentially hazardous activity. These networks are extensively used on volcanoes worldwide and generally comprise a number of different traditional and modern geodetic surveying techniques such as levelling, distances, triangulation and GNSS. However, in most cases, data from the different methodologies are surveyed, adjusted and analysed independently. Experience shows that the problem with this procedure is the mismatch between the excellent correlation of position values within a single technique and the low cross-correlation of such values between different techniques, or when the same network is surveyed shortly afterwards using the same technique. Moreover, maintaining one independent network for each geodetic surveying technique strongly increases the logistics, and thus the cost, of each measurement campaign. It is therefore important to develop geodetic networks which combine the different geodetic surveying techniques, and to adjust the geodetic data together in order to better quantify the uncertainties associated with the measured displacements. In order to overcome the lack of inter-methodology data integration, the Geomatic Institute of the University of Applied Sciences of Western Switzerland (HEIG-VD) has developed a methodology which uses TRINET+, a 3D stochastic adjustment software for redundant geodetic networks. The methodology consists of using each geodetic measurement technique for its strengths relative to the other methodologies. The combination of the measurements in a single network also allows more cost-effective surveying. The geodetic data are thereafter adjusted and analysed in the same reference frame. The adjustment methodology is based on the least-squares method and links the data with the geometry. 
TRINET+ also allows running a priori simulations of the network, hence testing the quality and resolution to be expected for a given network even
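The core of such a combined adjustment can be illustrated with a minimal weighted least-squares sketch. The 1D levelling observations below are invented, and this is not the TRINET+ implementation, only the underlying principle:

```python
import numpy as np

# Weighted least-squares adjustment of a tiny hypothetical levelling
# network: estimate heights of points B and C from measured height
# differences relative to a fixed benchmark A (height 0.0 m).
A = np.array([
    [ 1.0, 0.0],   # dh_AB = h_B - h_A
    [-1.0, 1.0],   # dh_BC = h_C - h_B
    [ 0.0, 1.0],   # dh_AC = h_C - h_A
])
l = np.array([1.202, 0.551, 1.756])        # observations (m)
sigma = np.array([0.003, 0.002, 0.004])    # a priori std devs (m)
W = np.diag(1.0 / sigma**2)                # stochastic model -> weights

# Normal equations: x = (A^T W A)^{-1} A^T W l
N = A.T @ W @ A
x = np.linalg.solve(N, A.T @ W @ l)
residuals = l - A @ x
Qx = np.linalg.inv(N)   # cofactor matrix -> a posteriori uncertainties

print("h_B, h_C =", np.round(x, 4))
print("residual norm (m):", float(np.linalg.norm(residuals)))
```

The weight matrix W encodes the stochastic model: observations from more precise techniques simply receive larger weights, which is what lets heterogeneous measurements (levelling, distances, GNSS) share one adjustment and one reference frame.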

  14. A Methodology to Develop Entrepreneurial Networks: The Tech Ecosystem of Six African Cities

    Science.gov (United States)

    2014-11-01

    Information Center. Greve, A. and Salaff, J. W. (2003), Social Networks and Entrepreneurship. Entrepreneurship Theory and Practice, 28: 1–22. doi...our methodology, the team quickly realized that it would have to focus on a fairly narrow sub-set of entrepreneurship. Based on relationships we have...Social Capital: A Theory of Structure and Action. Cambridge University Press, New York 2001. Liu, Y., Slotine, J., and Barabasi, A. (2011

  15. Artificial neural network applying for justification of tractors undercarriages parameters

    Directory of Open Access Journals (Sweden)

    V. A. Kuz’Min

    2017-01-01

    Full Text Available One of the most important properties that determine undercarriage layout at the design stage is the soil compaction effect. Existing domestic standards for undercarriage impact on soil do not fully meet modern agricultural requirements. The authors justify the need to analyse the travel systems of traction and transportation machines and to recommend values of these parameters for machines at the design or modernization stage. A database of crawler agricultural tractors was compiled, covering such parameters as traction class, basic operational weight, engine power rating, average ground pressure, and the surface area of the basic track branch. The machines considered were divided into two groups by producing region: Europe/North America and Russian Federation/CIS. The main graphical dependences for every group of machines are plotted, and the corresponding analytical dependences are derived within the ranges with the greatest concentration of machines. To simplify the procedure of obtaining the soil compaction parameters of tractors, it is expedient to use a software tool: an artificial neural network (perceptron). For this task it is appropriate to apply a multilayer perceptron, a feed-forward neural network (without feedback). The aim is to analyse the parameters of running systems, taking into account the soil compaction they cause, and to recommend the choice of these parameters for newly created machines. The program code of the artificial neural network was developed. On the basis of the compiled tractor database, the artificial neural network was created and tested; the accumulated error was not more than 5 percent. These data indicate the accuracy of the results and the reliability of the tool. By operating the initial design-data base and using the designed artificial neural network, it is possible to determine missing parameters.
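A minimal sketch of such a feed-forward network (multilayer perceptron) is shown below. The tractor data and the pressure relation are invented for illustration and do not come from the article's database:

```python
import numpy as np

# One-hidden-layer feed-forward network (multilayer perceptron) fitted to
# a synthetic tractor database: predict mean ground pressure (MPa) from
# operational weight (t) and engine power (kW). All data are invented.
rng = np.random.default_rng(0)
X = rng.uniform([5.0, 60.0], [20.0, 300.0], size=(200, 2))
y = 0.004 * X[:, 0] - 0.0001 * X[:, 1] + 0.03 + rng.normal(0, 0.001, 200)
Xs = (X - X.mean(0)) / X.std(0)          # standardize inputs

n_hidden = 8
W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

def forward(Xb):
    h = np.tanh(Xb @ W1 + b1)            # hidden layer
    return h, h @ W2 + b2                # linear output

_, p0 = forward(Xs)
loss0 = np.mean((p0.ravel() - y) ** 2)

lr = 0.05
for _ in range(2000):                    # plain gradient descent on MSE
    h, pred = forward(Xs)
    err = (pred.ravel() - y)[:, None] / len(y)
    dh = err @ W2.T * (1 - h ** 2)       # backprop through tanh
    W2 -= lr * (h.T @ err); b2 -= lr * err.sum(0)
    W1 -= lr * (Xs.T @ dh); b1 -= lr * dh.sum(0)

_, pred = forward(Xs)
loss = np.mean((pred.ravel() - y) ** 2)
print(f"MSE before/after training: {loss0:.4f} / {loss:.6f}")
```

Once fitted on the known machines, such a network can be queried with the parameters of a new design to fill in the missing ones, which is the use the article describes.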

  16. A methodology for the geometric design of heat recovery steam generators applying genetic algorithms

    International Nuclear Information System (INIS)

    Durán, M. Dolores; Valdés, Manuel; Rovira, Antonio; Rincón, E.

    2013-01-01

    This paper shows how the geometric design of heat recovery steam generators (HRSG) can be achieved. The method calculates the product of the overall heat transfer coefficient (U) and the area of the heat exchange surface (A) as a function of certain thermodynamic design parameters of the HRSG. A genetic algorithm is then applied to determine the best set of geometric parameters which comply with the desired UA product and, at the same time, result in a small heat exchange area and low pressure losses in the HRSG. In order to test this method, the design was applied to the HRSG of an existing plant and the results obtained were compared with the real exchange area of the steam generator. The findings show that the methodology is sound and offers reliable results even for complex HRSG designs. -- Highlights: ► The paper shows a methodology for the geometric design of heat recovery steam generators. ► It calculates the product of the overall heat transfer coefficient and the heat exchange area as a function of certain HRSG thermodynamic design parameters. ► It complements the thermoeconomic optimization method. ► Genetic algorithms are used for solving the optimization problem
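The search described above can be sketched with a small real-coded genetic algorithm. The UA and area models below are toy assumptions (constant U, bare tubes), not the paper's HRSG model:

```python
import numpy as np

# Real-coded genetic algorithm searching HRSG geometry (tube count n,
# tube length L in m) so that the achieved UA matches a target while
# lightly penalizing heat-exchange area. UA/area models are toy stand-ins.
rng = np.random.default_rng(1)
UA_TARGET = 5.0e4          # W/K, assumed design requirement
U, D = 60.0, 0.05          # assumed overall coefficient (W/m2K), tube diameter (m)

def objective(pop):
    n, L = pop[:, 0], pop[:, 1]
    area = np.pi * D * n * L               # tube outer surface (m2)
    return np.abs(U * area - UA_TARGET) / UA_TARGET + 1e-4 * area

pop = np.column_stack([rng.uniform(50, 500, 40), rng.uniform(2, 20, 40)])
for _ in range(80):
    fit = objective(pop)
    i, j = rng.integers(0, 40, (2, 40))            # tournament selection
    parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
    alpha = rng.uniform(0, 1, (40, 1))             # blend crossover
    children = alpha * parents + (1 - alpha) * parents[::-1]
    children += rng.normal(0, [5.0, 0.2], (40, 2)) # Gaussian mutation
    children = np.clip(children, [50, 2], [500, 20])
    children[0] = pop[np.argmin(fit)]              # elitism: keep best
    pop = children

best = pop[np.argmin(objective(pop))]
UA_best = U * np.pi * D * best[0] * best[1]
print(f"n = {best[0]:.0f} tubes, L = {best[1]:.1f} m, UA = {UA_best:.0f} W/K")
```

The penalty term plays the role of the paper's secondary goals (small area, low pressure loss): it steers the search among the many geometries that reach the same UA.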

  17. Actor-Network Theory and methodology: Just what does it mean to say that nonhumans have agency?

    Science.gov (United States)

    Sayes, Edwin

    2014-02-01

    Actor-Network Theory is a controversial social theory. In no respect is this more so than the role it 'gives' to nonhumans: nonhumans have agency, as Latour provocatively puts it. This article aims to interrogate the multiple layers of this declaration to understand what it means to assert with Actor-Network Theory that nonhumans exercise agency. The article surveys a wide corpus of statements by the position's leading figures and emphasizes the wider methodological framework in which these statements are embedded. With this work done, readers will then be better placed to reject or accept the Actor-Network position - understanding more precisely what exactly is at stake in this decision.

  18. A proposal to order the neutron data set in neutron spectrometry using the RDANN methodology

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J.M.; Martinez B, M.R.; Vega C, H.R. [UAZ, Av. Ramon Lopez Velarde No. 801, 98000 Zacatecas (Mexico)

    2006-07-01

    A new proposal for ordering a neutron data set in the design process of artificial neural networks in the neutron spectrometry field is presented for the first time. The robust design of artificial neural networks methodology was applied to a data set of 187 neutron spectra compiled by the International Atomic Energy Agency. Four ways of grouping the neutron spectra were considered, and around 1000 different neural networks, each with a different topology, were designed, trained and tested. After carrying out the systematic methodology for all the cases, it was determined that the best reconstructed neutron spectra were produced with the full 187-spectra data set, and that the best neural network topology is: 7 input neurons, 14 neurons in a hidden layer and 31 neurons in the output layer, with a learning rate of 0.1 and a momentum of 0.1. (Author)

  19. A proposal to order the neutron data set in neutron spectrometry using the RDANN methodology

    International Nuclear Information System (INIS)

    Ortiz R, J.M.; Martinez B, M.R.; Vega C, H.R.

    2006-01-01

    A new proposal for ordering a neutron data set in the design process of artificial neural networks in the neutron spectrometry field is presented for the first time. The robust design of artificial neural networks methodology was applied to a data set of 187 neutron spectra compiled by the International Atomic Energy Agency. Four ways of grouping the neutron spectra were considered, and around 1000 different neural networks, each with a different topology, were designed, trained and tested. After carrying out the systematic methodology for all the cases, it was determined that the best reconstructed neutron spectra were produced with the full 187-spectra data set, and that the best neural network topology is: 7 input neurons, 14 neurons in a hidden layer and 31 neurons in the output layer, with a learning rate of 0.1 and a momentum of 0.1. (Author)
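The selected 7-14-31 topology can be written out structurally. The weights below are random placeholders, so this is only a shape-and-size sketch, not the trained network:

```python
import numpy as np

# Structure of the selected 7-14-31 network: 7 inputs (measured count
# rates), 14 hidden neurons, 31 output spectrum bins. Weights are random
# placeholders; only the shapes and the parameter count are meaningful.
rng = np.random.default_rng(42)
sizes = [7, 14, 31]
weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

def forward(x):
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(x @ W + b)               # hidden activation
    return x @ weights[-1] + biases[-1]      # linear output layer

spectrum = forward(rng.random(7))
n_params = sum(W.size + b.size for W, b in zip(weights, biases))
print(spectrum.shape, n_params)  # (31,) 577
```

With 577 free parameters, the topology search over ~1000 candidate networks described in the abstract is essentially a search over such (sizes, learning rate, momentum) tuples.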

  20. Forecasting Baltic Dirty Tanker Index by Applying Wavelet Neural Networks

    DEFF Research Database (Denmark)

    Fan, Shuangrui; Ji, Tingyun; Bergqvist, Rickard

    2013-01-01

    modeling techniques used in freight rate forecasting. At the same time, research in shipping index forecasting, e.g. the BDTI, applying artificial intelligence techniques is scarce. This paper analyses the possibilities of forecasting the BDTI by applying Wavelet Neural Networks (WNN). Firstly, the characteristics...... of traditional and artificial intelligent forecasting techniques are discussed and the rationale for choosing WNN is explained. Secondly, the components and features of the BDTI are explicated. After that, the authors delve into the determinants and influencing factors behind fluctuations of the BDTI in order to set

  1. Deformable image registration using convolutional neural networks

    Science.gov (United States)

    Eppenhof, Koen A. J.; Lafarge, Maxime W.; Moeskops, Pim; Veta, Mitko; Pluim, Josien P. W.

    2018-03-01

    Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between pairs of three-dimensional images. The outputs of the network are three maps for the x, y, and z components of a thin plate spline transformation grid. The network is trained on synthetic random transformations, which are applied to a small set of representative images for the desired application. Training therefore does not require manually annotated ground truth deformation information. The methodology is demonstrated on public data sets of inspiration-expiration lung CT image pairs, which come with annotated corresponding landmarks for evaluation of the registration accuracy. Advantages of this methodology are its fast registration times and its minimal parameterization.
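The synthetic-training-data idea generalizes beyond the paper's 3D thin-plate-spline setup; a minimal 2D stand-in uses a random coarse displacement grid, upsampled and applied to an image, so the true deformation of each training pair is known by construction:

```python
import numpy as np

# Generate one synthetic training pair: a random coarse displacement grid
# (a stand-in for a thin plate spline) is upsampled to image resolution
# and used to warp an image. (dx, dy) is the ground-truth deformation.
rng = np.random.default_rng(0)
img = rng.random((64, 64))

def upsample(grid, size):
    # separable bilinear upsampling of a coarse control grid
    src = np.linspace(0, size - 1, grid.shape[0])
    rows = np.array([np.interp(np.arange(size), src, g) for g in grid])
    return np.array([np.interp(np.arange(size), src, c) for c in rows.T]).T

dx = upsample(rng.normal(0, 2.0, (4, 4)), 64)   # x-displacements (px)
dy = upsample(rng.normal(0, 2.0, (4, 4)), 64)   # y-displacements (px)

# Warp with nearest-neighbour sampling
yy, xx = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
ys = np.clip(np.rint(yy + dy), 0, 63).astype(int)
xs = np.clip(np.rint(xx + dx), 0, 63).astype(int)
warped = img[ys, xs]
print(img.shape, warped.shape)   # one (fixed, moving) pair, deformation known
```

A network trained on many such pairs learns to predict the displacement grids directly, which is why no manually annotated ground truth is needed.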

  2. Large-Scale Demand Driven Design of a Customized Bus Network: A Methodological Framework and Beijing Case Study

    Directory of Open Access Journals (Sweden)

    Jihui Ma

    2017-01-01

    Full Text Available In recent years, an innovative public transportation (PT mode known as the customized bus (CB has been proposed and implemented in many cities in China to efficiently and effectively shift private car users to PT to alleviate traffic congestion and traffic-related environmental pollution. The route network design activity plays an important role in the CB operation planning process because it serves as the basis for other operation planning activities, for example, timetable development, vehicle scheduling, and crew scheduling. In this paper, according to the demand characteristics and operational purpose, a methodological framework that includes the elements of large-scale travel demand data processing and analysis, hierarchical clustering-based route origin-destination (OD region division, route OD region pairing, and a route selection model is proposed for CB network design. Considering the operating cost and social benefits, a route selection model is proposed and a branch-and-bound-based solution method is developed. In addition, a computer-aided program is developed to analyze a real-world Beijing CB route network design problem. The results of the case study demonstrate that the current CB network of Beijing can be significantly improved, thus demonstrating the effectiveness of the proposed methodology.
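The hierarchical-clustering step for grouping demand points into candidate route OD regions can be sketched as follows, with synthetic coordinates and an assumed distance threshold rather than the Beijing data:

```python
import numpy as np

# Agglomerative (complete-linkage) clustering of synthetic travel-demand
# origin points into candidate route regions. Coordinates (km) and the
# 4 km merge threshold are invented for illustration.
rng = np.random.default_rng(3)
pts = np.vstack([rng.normal([0.0, 0.0], 0.2, (20, 2)),   # demand hot-spot 1
                 rng.normal([8.0, 5.0], 0.2, (20, 2))])  # demand hot-spot 2

clusters = [[i] for i in range(len(pts))]

def linkage(a, b):
    # complete linkage: largest pairwise distance between two clusters
    return max(np.linalg.norm(pts[i] - pts[j]) for i in a for j in b)

while len(clusters) > 1:
    d, ia, ib = min((linkage(a, b), ia, ib)
                    for ia, a in enumerate(clusters)
                    for ib, b in enumerate(clusters) if ia < ib)
    if d > 4.0:          # stop once remaining clusters are well separated
        break
    clusters[ia] += clusters[ib]
    del clusters[ib]

print(len(clusters), sorted(len(c) for c in clusters))
```

Each resulting cluster is a candidate origin (or destination) region; pairing origin regions with destination regions then yields the candidate CB routes fed into the selection model.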

  3. Synthesis and Design of Biorefinery Processing Networks with Uncertainty and Sustainability analysis

    DEFF Research Database (Denmark)

    Cheali, Peam; Gernaey, Krist; Sin, Gürkan

    combinations of processing networks. The optimization of the network is formulated as a mixed integer nonlinear programming type of problem and solved in GAMS. The methodology was applied for designing optimal biorefinery networks considering biochemical routes. Furthermore, the methodology has also been...... for processing renewable feedstocks, with the aim of bridging the gap for fuel, chemical and material production. This project is focusing on biorefinery network design, in particular for early stage design and development studies. Optimal biorefinery design is a challenging problem. It is a multi......-objective decision-making problem not only with respect to technical and economic feasibility but also with respect to environmental impacts, sustainability constraints and limited availability & uncertainties of input data at the early design stage. It is therefore useful to develop a systematic methodology...

  4. Development of Geometry Optimization Methodology with In-house CFD code, and Challenge in Applying to Fuel Assembly

    International Nuclear Information System (INIS)

    Jeong, J. H.; Lee, K. L.

    2016-01-01

    The wire spacer has important roles: to avoid collisions between adjacent rods, to mitigate vortex-induced vibration, and to enhance convective heat transfer through the secondary flow it induces. Many experimental and numerical works have been conducted to understand the thermal-hydraulics of wire-wrapped fuel bundles. Recently, the enormous growth in computing capability has made three-dimensional simulation of the thermal-hydraulics of wire-wrapped fuel bundles feasible. In this study, a geometry optimization methodology based on a RANS-based in-house CFD (Computational Fluid Dynamics) code was successfully developed under air conditions. In order to apply the developed methodology to a fuel assembly, a GGI (General Grid Interface) function was developed for the in-house CFD code, as in CFX. Furthermore, three-dimensional flow fields calculated with the in-house CFD code are compared with those calculated with the general-purpose commercial CFD solver CFX. Even though both analyses were conducted with the same computational meshes, numerical error due to the GGI function occurred locally only in the CFX solver, around the rod surface and the boundary region between the inner and outer fluid regions.

  5. Validation methodology focussing on fuel efficiency as applied in the eCoMove project

    NARCIS (Netherlands)

    Themann, P.; Iasi, L.; Larburu, M.; Trommer, S.

    2012-01-01

    This paper discusses the validation approach applied in the eCoMove project (a large scale EU 7th Framework Programme project). In this project, applications are developed that on the one hand optimise network-wide traffic management and control, and on the other hand advise drivers on the most

  6. A safety assessment methodology applied to CNS/ATM-based air traffic control system

    Energy Technology Data Exchange (ETDEWEB)

    Vismari, Lucio Flavio, E-mail: lucio.vismari@usp.b [Safety Analysis Group (GAS), School of Engineering at University of Sao Paulo (Poli-USP), Av. Prof. Luciano Gualberto, Trav.3, n.158, Predio da Engenharia de Eletricidade, Sala C2-32, CEP 05508-900, Sao Paulo (Brazil); Batista Camargo Junior, Joao, E-mail: joaocamargo@usp.b [Safety Analysis Group (GAS), School of Engineering at University of Sao Paulo (Poli-USP), Av. Prof. Luciano Gualberto, Trav.3, n.158, Predio da Engenharia de Eletricidade, Sala C2-32, CEP 05508-900, Sao Paulo (Brazil)

    2011-07-15

    In the last decades, the air traffic system has been changing to adapt itself to new social demands, mainly the safe growth of worldwide traffic capacity. Those changes are ruled by the Communication, Navigation, Surveillance/Air Traffic Management (CNS/ATM) paradigm, based on digital communication technologies (mainly satellites) as a way of improving communication, surveillance, navigation and air traffic management services. However, CNS/ATM poses new challenges and needs, mainly related to the safety assessment process. In face of these new challenges, and considering the main characteristics of CNS/ATM, a methodology is proposed in this work that combines the 'absolute' and 'relative' safety assessment methods adopted by the International Civil Aviation Organization (ICAO) in ICAO Doc. 9689, using Fluid Stochastic Petri Nets (FSPN) as the modeling formalism, and compares the safety metrics estimated from the simulation of both the proposed (under analysis) and the legacy system models. To demonstrate its usefulness, the proposed methodology was applied to the 'Automatic Dependent Surveillance-Broadcasting' (ADS-B) based air traffic control system. In conclusion, the proposed methodology makes it possible to assess CNS/ATM system safety properties, with the FSPN formalism providing important modeling capabilities and discrete event simulation allowing the estimation of the desired safety metrics.

  7. A safety assessment methodology applied to CNS/ATM-based air traffic control system

    International Nuclear Information System (INIS)

    Vismari, Lucio Flavio; Batista Camargo Junior, Joao

    2011-01-01

    In the last decades, the air traffic system has been changing to adapt itself to new social demands, mainly the safe growth of worldwide traffic capacity. Those changes are ruled by the Communication, Navigation, Surveillance/Air Traffic Management (CNS/ATM) paradigm, based on digital communication technologies (mainly satellites) as a way of improving communication, surveillance, navigation and air traffic management services. However, CNS/ATM poses new challenges and needs, mainly related to the safety assessment process. In face of these new challenges, and considering the main characteristics of CNS/ATM, a methodology is proposed in this work that combines the 'absolute' and 'relative' safety assessment methods adopted by the International Civil Aviation Organization (ICAO) in ICAO Doc. 9689, using Fluid Stochastic Petri Nets (FSPN) as the modeling formalism, and compares the safety metrics estimated from the simulation of both the proposed (under analysis) and the legacy system models. To demonstrate its usefulness, the proposed methodology was applied to the 'Automatic Dependent Surveillance-Broadcasting' (ADS-B) based air traffic control system. In conclusion, the proposed methodology makes it possible to assess CNS/ATM system safety properties, with the FSPN formalism providing important modeling capabilities and discrete event simulation allowing the estimation of the desired safety metrics.

  8. Collaboration Networks in Applied Conservation Projects across Europe.

    Science.gov (United States)

    Nita, Andreea; Rozylowicz, Laurentiu; Manolache, Steluta; Ciocănea, Cristiana Maria; Miu, Iulia Viorica; Popescu, Viorel Dan

    2016-01-01

    The main funding instrument for implementing EU policies on nature conservation and supporting environmental and climate action is the LIFE Nature programme, established by the European Commission in 1992. LIFE Nature projects (>1400 awarded) are applied conservation projects in which partnerships between institutions are critical for successful conservation outcomes, yet little is known about the structure of collaborative networks within and between EU countries. The aim of our study is to understand the nature of collaboration in LIFE Nature projects using a novel application of social network theory at two levels: (1) collaboration between countries, and (2) collaboration within countries, using six case studies: Western Europe (United Kingdom and Netherlands), Eastern Europe (Romania and Latvia) and Southern Europe (Greece and Portugal). Using data on 1261 projects financed between 1996 and 2013, we found that Italy was the most successful country not only in terms of the number of projects awarded, but also in terms of overall influence, being by far the most influential country in the European LIFE Nature network, with the highest eigenvector centrality (0.989) and degree centrality (0.177). Another key player in the network is the Netherlands, which ensures fast communication flow with other network members (closeness: 0.318) by staying connected with the most active countries. Although Western European countries have higher centrality scores than most Eastern European countries, our results show that overall there is a low tendency to create partnerships between different categories of organization. Also, the comparison of the six case studies indicates significant differences in the patterns of partnership creation, providing valuable information on collaboration in EU nature conservation. This study represents a starting point in predicting the formation of future partnerships within the LIFE Nature programme, suggesting ways to improve transnational
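The centrality measures reported above can be reproduced on a toy collaboration matrix. The four-country network and its edge weights below are hypothetical, not the LIFE data:

```python
import numpy as np

# Eigenvector and degree centrality on a hypothetical 4-country network;
# edge weights are invented numbers of joint LIFE projects.
countries = ["IT", "NL", "RO", "GR"]
A = np.array([[0, 3, 1, 2],
              [3, 0, 0, 1],
              [1, 0, 0, 0],
              [2, 1, 0, 0]], dtype=float)

x = np.ones(4)
for _ in range(200):          # power iteration -> dominant eigenvector
    x = A @ x
    x /= np.linalg.norm(x)

degree = A.sum(axis=1) / A.sum()     # weighted degree share
top = countries[int(np.argmax(x))]
print("most central:", top)
```

Degree centrality only counts a country's own partnerships, while eigenvector centrality also rewards being connected to well-connected partners, which is why the paper reports both.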

  9. A modified GO-FLOW methodology with common cause failure based on Discrete Time Bayesian Network

    International Nuclear Information System (INIS)

    Fan, Dongming; Wang, Zili; Liu, Linlin; Ren, Yi

    2016-01-01

    Highlights: • Identification of particular causes of failure for common cause failure analysis. • Comparison of two formalisms (GO-FLOW and Discrete Time Bayesian Network), establishing the correlation between them. • Mapping of the GO-FLOW model into a Bayesian network model. • Calculation of GO-FLOW models with common cause failures based on DTBN. - Abstract: The GO-FLOW methodology is a success-oriented system reliability modelling technique for multi-phase missions involving complex time-dependent, multi-state and common cause failure (CCF) features. However, its analysis algorithm cannot easily handle multiple shared signals and CCFs. In addition, the simulative algorithm is time-consuming when vast numbers of multi-state components exist in the model, and the multiple time points of phased-mission problems increase the difficulty of the analysis. In this paper, the Discrete Time Bayesian Network (DTBN) and the GO-FLOW methodology are integrated through unified mapping rules. Based on these rules, the various operators can be mapped into a DTBN; subsequently, a complete GO-FLOW model with complex characteristics (e.g. phased mission, multi-state, and CCF) can be converted into an isomorphic DTBN and easily analyzed. With mature algorithms and tools, the multi-phase mission reliability parameters can be efficiently obtained via the proposed approach without special treatment of the shared signals and the various complex logic operations. Meanwhile, CCF can also be incorporated in the computing process.
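The kind of quantity such a model ultimately yields can be sketched with a discrete-time evaluation of a two-component parallel system under the beta-factor CCF model; the failure rate, beta fraction and horizon below are invented:

```python
import numpy as np

# Discrete-time evaluation of a two-component parallel (1-out-of-2)
# system with common cause failure under the beta-factor model: a
# fraction beta of the failure rate acts on both components at once.
lam, beta, dt, steps = 1e-3, 0.1, 1.0, 1000
t = dt * np.arange(steps)
p_ind = 1 - np.exp(-lam * (1 - beta) * t)   # independent failure of one unit
p_ccf = 1 - np.exp(-lam * beta * t)         # common cause fails both at once
# system fails if the common cause occurs OR both units fail independently
p_sys = p_ccf + (1 - p_ccf) * p_ind ** 2
print(f"P(system failed) at t = {t[-1]:.0f} h: {p_sys[-1]:.4f}")
```

Even a small beta dominates the parallel system's unreliability at early times, because the common cause bypasses the redundancy; this is exactly the effect a CCF-aware GO-FLOW/DTBN model has to capture.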

  10. Cochrane Rehabilitation Methodology Committee: an international survey of priorities for future work.

    Science.gov (United States)

    Levack, William M; Meyer, Thorsten; Negrini, Stefano; Malmivaara, Antti

    2017-10-01

    Cochrane Rehabilitation aims to improve the application of evidence-based practice in rehabilitation. It also aims to support Cochrane in the production of reliable, clinically meaningful syntheses of evidence related to the practice of rehabilitation, while accommodating the many methodological challenges facing the field. To this end, Cochrane Rehabilitation established a Methodology Committee to examine, explore and find solutions for the methodological challenges related to evidence synthesis and knowledge translation in rehabilitation. We conducted an international online survey via Cochrane Rehabilitation networks to canvass opinions regarding the future work priorities for this committee and to seek information on people's current capabilities to assist with this work. The survey findings indicated strongest interest in work on how reviewers have interpreted and applied Cochrane methods in reviews on rehabilitation topics in the past, and on gathering a collection of existing publications on review methods for undertaking systematic reviews relevant to rehabilitation. Many people are already interested in contributing to the work of the Methodology Committee and there is a large amount of expertise for this work in the extended Cochrane Rehabilitation network already.

  11. Effective network inference through multivariate information transfer estimation

    Science.gov (United States)

    Dahlqvist, Carl-Henrik; Gnabo, Jean-Yves

    2018-06-01

    Network representation has steadily gained in popularity over the past decades. In many disciplines, such as finance, genetics, neuroscience or human travel to cite a few, the network may not be directly observable and needs to be inferred from time-series data, leading to the issue of separating direct interactions between two entities forming the network from indirect interactions coming through its remaining part. Drawing on recent contributions proposing strategies to deal with this problem, such as the so-called "global silencing" approach of Barzel and Barabasi or the "network deconvolution" of Feizi et al. (2013), we propose a novel methodology to infer an effective network structure from multivariate conditional information transfers. Its core principle is to test the information transfer between two nodes through a step-wise approach by conditioning the transfer for each pair on a specific set of relevant nodes, as identified by our algorithm from the rest of the network. The methodology is model free and can be applied to high-dimensional networks with both inter-lag and intra-lag relationships. It outperforms state-of-the-art approaches at eliminating redundancies and, more generally, at retrieving simulated artificial networks in our Monte Carlo experiments. We apply the method to stock market data at different frequencies (15 min, 1 h, 1 day) to retrieve the network of the largest US financial institutions and then document how banks' centrality measures relate to their systemic vulnerability.
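
    As an illustration of the building block underlying such information-transfer methods (not the authors' full step-wise conditioning algorithm), a pairwise transfer entropy for discrete series can be estimated by counting:

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Pairwise transfer entropy TE(X -> Y) in bits for discrete series,
    with history length 1: I(Y_t ; X_{t-1} | Y_{t-1}).  The paper's method
    additionally conditions on other nodes selected by its algorithm; this
    sketch shows only the unconditioned building block."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))       # (y_t, y_{t-1}, x_{t-1})
    n = len(triples)
    c_tyx = Counter(triples)
    c_yx = Counter((yp, xp) for _, yp, xp in triples)
    c_ty = Counter((yt, yp) for yt, yp, _ in triples)
    c_y = Counter(yp for _, yp, _ in triples)
    te = 0.0
    for (yt, yp, xp), cnt in c_tyx.items():
        p = cnt / n
        te += p * math.log2((cnt / c_yx[(yp, xp)]) / (c_ty[(yt, yp)] / c_y[yp]))
    return te

random.seed(0)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]   # y simply copies x with a one-step lag
```

    On this synthetic pair, TE(X→Y) is close to one bit while TE(Y→X) is near zero, correctly recovering the direction of the interaction.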

  12. Research Methodologies in Supply Chain Management

    DEFF Research Database (Denmark)

    Kotzab, Herbert

    While supply chain management has risen to great prominence in recent years, there have hardly been related developments in research methodologies. Yet, as supply chains cover more than one company, one central issue is how to collect and analyse data along the whole or relevant part of the supply chain... Within the 36 chapters, 70 authors bring together a rich selection of theoretical and practical examples of how research methodologies are applied in supply chain management. The book contains papers on theoretical implications as well as papers on a range of key methods, such as modelling, surveys, case studies or action research. It will be of great interest to researchers in the area of supply chain management and logistics, but also to neighbouring fields, such as network management or global operations...

  13. The Private Lives of Minerals: Social Network Analysis Applied to Mineralogy and Petrology

    Science.gov (United States)

    Hazen, R. M.; Morrison, S. M.; Fox, P. A.; Golden, J. J.; Downs, R. T.; Eleish, A.; Prabhu, A.; Li, C.; Liu, C.

    2016-12-01

    Comprehensive databases of mineral species (rruff.info/ima) and their geographic localities and co-existing mineral assemblages (mindat.org) reveal patterns of mineral association and distribution that mimic social networks, as commonly applied to such varied topics as social media interactions, the spread of disease, terrorism networks, and research collaborations. Applying social network analysis (SNA) to common assemblages of rock-forming igneous and regional metamorphic mineral species, we find patterns of cohesion, segregation, density, and cliques that are similar to those of human social networks. These patterns highlight classic trends in lithologic evolution and are illustrated with sociograms, in which mineral species are the "nodes" and co-existing species form "links." Filters based on chemistry, age, structural group, and other parameters highlight visually both familiar and new aspects of mineralogy and petrology. We quantify sociograms with SNA metrics, including connectivity (based on the frequency of co-occurrence of mineral pairs), homophily (the extent to which co-existing mineral species share compositional and other characteristics), network closure (based on the degree of network interconnectivity), and segmentation (as revealed by isolated "cliques" of mineral species). Exploitation of large and growing mineral data resources with SNA offers promising avenues for discovering previously hidden trends in mineral diversity-distribution systematics, as well as providing new pedagogical approaches to teaching mineralogy and petrology.
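
    The co-occurrence idea behind these sociograms can be sketched with a toy example: species are nodes, and a link's weight counts the assemblages in which two species appear together. The assemblage lists here are hypothetical stand-ins for mindat.org data.

```python
from itertools import combinations
from collections import Counter

# Toy assemblage data (hypothetical localities).  Each set lists the
# mineral species reported as co-existing at one locality.
assemblages = [
    {"quartz", "albite", "muscovite"},
    {"quartz", "albite", "biotite"},
    {"olivine", "pyroxene", "plagioclase"},
    {"quartz", "muscovite", "biotite"},
]

# Species are nodes; a link's weight is the number of assemblages in which
# the pair co-occurs -- the "connectivity" of the sociogram.
links = Counter()
for a in assemblages:
    for pair in combinations(sorted(a), 2):
        links[pair] += 1

# Node degree: number of distinct partner species.
degree = Counter()
for (u, v), w in links.items():
    degree[u] += 1
    degree[v] += 1
```

    Even this tiny network already shows segregation: the igneous clique (olivine, pyroxene, plagioclase) is disconnected from the quartz-bearing cluster, the kind of pattern the SNA metrics above quantify at scale.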

  14. Urban Agglomerations in Regional Development: Theoretical, Methodological and Applied Aspects

    Directory of Open Access Journals (Sweden)

    Andrey Vladimirovich Shmidt

    2016-09-01

    Full Text Available The article focuses on the analysis of a major process of modern socio-economic development: the functioning of urban agglomerations. A short review of the economic literature on this phenomenon is given, covering both traditional conceptions (the concentration of urban types of activities, the grouping of urban settlements by intensive production and labour communications) and modern ones (cluster theories, theories of the network society). Two methodological principles of studying agglomeration are emphasized: the principle of the unity of the spatial concentration of economic activity and the principle of compact living of the population. The positive and negative effects of agglomeration in the economic and social spheres are studied. It is concluded that agglomeration is beneficial when it yields agglomeration economies (when its positive benefits exceed the additional costs). A methodology for examining an urban agglomeration and its role in regional development is offered. The approbation of this methodology on the example of Chelyabinsk and the Chelyabinsk region has made it possible to carry out a comparative analysis of the regional centre and the whole region by the main socio-economic indexes under static and dynamic conditions, and to draw conclusions on the position of the city and the region based on such socio-economic indexes as the average monthly nominal accrued wage, the cost of fixed assets, investments into fixed capital, new housing supply, retail turnover, and the volume of self-produced shipped goods, works and services performed in the region. In the study, the analysis of a launching site of the Chelyabinsk agglomeration is carried out.
It has revealed the following main characteristics of the core of the agglomeration in Chelyabinsk (structure feature, population, level of centralization of the core as well as the Chelyabinsk agglomeration in general (coefficient of agglomeration

  15. Social Network Analysis as a Methodological Approach to Explore Health Systems: A Case Study Exploring Support among Senior Managers/Executives in a Hospital Network.

    Science.gov (United States)

    De Brún, Aoife; McAuliffe, Eilish

    2018-03-13

    Health systems research recognizes the complexity of healthcare, and the interacting and interdependent nature of components of a health system. To better understand such systems, innovative methods are required to depict and analyze their structures. This paper describes social network analysis as a methodology to depict, diagnose, and evaluate health systems and networks therein. Social network analysis is a set of techniques to map, measure, and analyze social relationships between people, teams, and organizations. Through use of a case study exploring support relationships among senior managers in a newly established hospital group, this paper illustrates some of the commonly used network- and node-level metrics in social network analysis, and demonstrates the value of these maps and metrics to understand systems. Network analysis offers a valuable approach to health systems and services researchers as it offers a means to depict activity relevant to network questions of interest, to identify opinion leaders, influencers, clusters in the network, and those individuals serving as bridgers across clusters. The strengths and limitations inherent in the method are discussed, and the applications of social network analysis in health services research are explored.
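
    A minimal sketch of the node-level metrics such a study relies on, using a hypothetical "who do you turn to for support" network among six managers (A-F): in-degree identifies potential opinion leaders, and removing a node tests whether it bridges otherwise separate clusters.

```python
from collections import defaultdict, deque

# Hypothetical directed support ties: u -> v means u seeks support from v.
ties = [("A", "B"), ("C", "B"), ("D", "B"), ("B", "E"), ("E", "F"), ("F", "E")]

in_degree = defaultdict(int)
neighbours = defaultdict(set)
for u, v in ties:
    in_degree[v] += 1
    neighbours[u].add(v)     # undirected view, used for the bridger test
    neighbours[v].add(u)

def connected_without(removed):
    """Is the undirected network still in one piece if `removed` leaves?"""
    nodes = {n for n in neighbours if n != removed}
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in neighbours[queue.popleft()] - {removed} - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen == nodes
```

    Here B is both the opinion leader (highest in-degree) and a bridger: without B, the network splits into isolated clusters, exactly the kind of structural diagnosis the case study draws from its SNA maps.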

  16. Social Network Analysis as a Methodological Approach to Explore Health Systems: A Case Study Exploring Support among Senior Managers/Executives in a Hospital Network

    Directory of Open Access Journals (Sweden)

    Aoife De Brún

    2018-03-01

    Full Text Available Health systems research recognizes the complexity of healthcare, and the interacting and interdependent nature of components of a health system. To better understand such systems, innovative methods are required to depict and analyze their structures. This paper describes social network analysis as a methodology to depict, diagnose, and evaluate health systems and networks therein. Social network analysis is a set of techniques to map, measure, and analyze social relationships between people, teams, and organizations. Through use of a case study exploring support relationships among senior managers in a newly established hospital group, this paper illustrates some of the commonly used network- and node-level metrics in social network analysis, and demonstrates the value of these maps and metrics to understand systems. Network analysis offers a valuable approach to health systems and services researchers as it offers a means to depict activity relevant to network questions of interest, to identify opinion leaders, influencers, clusters in the network, and those individuals serving as bridgers across clusters. The strengths and limitations inherent in the method are discussed, and the applications of social network analysis in health services research are explored.

  17. European Network of Excellence on NPP residual lifetime prediction methodologies (NULIFE)

    International Nuclear Information System (INIS)

    Badea, M.; Vidican, D.

    2006-01-01

    Within Europe massive investments in nuclear power have been made to meet present and future energy needs. The majority of nuclear reactors have been operating for longer than 20 years and their continuing safe operation depends crucially on effective lifetime management. Furthermore, to extend the economic return on investment and environmental benefits, it is necessary to ensure in advance the safe operation of nuclear reactors for 60 years, a period which is typically 20 years in excess of nominal design life. This depends on a clear understanding of, and predictive capability for, how safety margins may be maintained as components degrade under operational conditions. Ageing mechanisms, environment effects and complex loadings increase the likelihood of damage to safety relevant systems, structures and components. The ability to claim increased benefits from reduced conservatism via improved assessments is therefore of great value. Harmonisation and qualification are essential for industrial exploitation of approaches developed for life prediction methodology. Several European organisations and networks have been at the forefront of the development of advanced methodologies in this area. However, these efforts have largely been made at national level and their overall impact and benefit (in comparison to the situation in the USA) has been reduced by fragmentation. There is a need to restructure the networking approach in order to create a single organisational entity capable of working at European level to produce and exploit R and D in support of the safe and competitive operation of nuclear power plants. It is also critical to ensure the competitiveness of European plant life management (PLIM) services at international level, in particular with the USA and Asian countries. 
To the above challenges the European Network on European research in residual lifetime prediction methodologies (NULIFE) will: - Create a Europe-wide body in order to achieve scientific and

  18. Adaptation of a software development methodology to the implementation of a large-scale data acquisition and control system. [for Deep Space Network

    Science.gov (United States)

    Madrid, G. A.; Westmoreland, P. T.

    1983-01-01

    A progress report is presented on a program to upgrade the existing NASA Deep Space Network in terms of a redesigned computer-controlled data acquisition system for channelling tracking, telemetry, and command data between a California-based control center and three signal processing centers in Australia, California, and Spain. The methodology for the improvements is oriented towards single subsystem development with consideration for a multi-system and multi-subsystem network of operational software. Details of the existing hardware configurations and data transmission links are provided. The program methodology includes data flow design, interface design and coordination, incremental capability availability, increased inter-subsystem developmental synthesis and testing, system and network level synthesis and testing, and system verification and validation. The software has been implemented thus far to a 65 percent completion level, and the methodology being used to effect the changes, which will permit enhanced tracking and communication with spacecraft, has been concluded to feature effective techniques.

  19. Applying long short-term memory recurrent neural networks to intrusion detection

    Directory of Open Access Journals (Sweden)

    Ralf C. Staudemeyer

    2015-07-01

    Full Text Available We claim that modelling network traffic as a time series with a supervised learning approach, using known genuine and malicious behaviour, improves intrusion detection. To substantiate this, we trained long short-term memory (LSTM) recurrent neural networks with the training data provided by the DARPA / KDD Cup ’99 challenge. To identify suitable LSTM-RNN network parameters and structure we experimented with various network topologies. We found networks with four memory blocks containing two cells each offer a good compromise between computational cost and detection performance. We applied forget gates and shortcut connections. A learning rate of 0.1 and up to 1,000 epochs showed good results. We tested the performance on all features and on extracted minimal feature sets. We evaluated different feature sets for the detection of all attacks within one network and also trained networks specialised on individual attack classes. Our results show that the LSTM classifier provides superior performance in comparison to previously published results of strong static classifiers. With 93.82% accuracy and a cost of 22.13, LSTM outperforms the winning entries of the KDD Cup ’99 challenge by far. This is due to the fact that LSTM learns to look back in time and correlate consecutive connection records. For the first time ever, we have demonstrated the usefulness of LSTM networks for intrusion detection.

  20. Applications of social media and social network analysis

    CERN Document Server

    Kazienko, Przemyslaw

    2015-01-01

    This collection of contributed chapters demonstrates a wide range of applications within two overlapping research domains: social media analysis and social network analysis. Various methodologies were utilized in the twelve individual chapters including static, dynamic and real-time approaches to graph, textual and multimedia data analysis. The topics apply to reputation computation, emotion detection, topic evolution, rumor propagation, evaluation of textual opinions, friend ranking, analysis of public transportation networks, diffusion in dynamic networks, analysis of contributors to commun

  1. Risk Based Inspection Methodology and Software Applied to Atmospheric Storage Tanks

    Science.gov (United States)

    Topalis, P.; Korneliussen, G.; Hermanrud, J.; Steo, Y.

    2012-05-01

    A new risk-based inspection (RBI) methodology and software is presented in this paper. The objective of this work is to allow management of the inspections of atmospheric storage tanks in the most efficient way, while, at the same time, accident risks are minimized. The software has been built on the new risk framework architecture, a generic platform facilitating efficient and integrated development of software applications using risk models. The framework includes a library of risk models and the user interface is automatically produced on the basis of editable schemas. This risk-framework-based RBI tool has been applied in the context of RBI for above-ground atmospheric storage tanks (AST) but it has been designed with the objective of being generic enough to allow extension to the process plants in general. This RBI methodology is an evolution of an approach and mathematical models developed for Det Norske Veritas (DNV) and the American Petroleum Institute (API). The methodology assesses damage mechanism potential, degradation rates, probability of failure (PoF), consequence of failure (CoF) in terms of environmental damage and financial loss, risk and inspection intervals and techniques. The scope includes assessment of the tank floor for soil-side external corrosion and product-side internal corrosion and the tank shell courses for atmospheric corrosion and internal thinning. It also includes preliminary assessment for brittle fracture and cracking. The data are structured according to an asset hierarchy including Plant, Production Unit, Process Unit, Tag, Part and Inspection levels and the data are inherited / defaulted seamlessly from a higher hierarchy level to a lower level. The user interface includes synchronized hierarchy tree browsing, dynamic editor and grid-view editing and active reports with drill-in capability.
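
    As a rough illustration of how thinning data can drive an inspection interval (a simplified sketch, not the DNV/API model; all numbers are hypothetical):

```python
# Thinning-based sketch of an RBI interval calculation: a measured
# corrosion rate gives a remaining life, and a common rule of thumb
# schedules the next inspection within half that remaining life,
# capped by a maximum interval.

def remaining_life_years(thickness_mm, t_min_mm, rate_mm_per_year):
    """Years until the tank floor reaches its minimum allowed thickness."""
    return max(0.0, (thickness_mm - t_min_mm) / rate_mm_per_year)

def inspection_interval(thickness_mm, t_min_mm, rate_mm_per_year,
                        safety_factor=0.5, max_interval=10.0):
    """Next inspection due within `safety_factor` times the remaining
    life, never later than `max_interval` years."""
    return min(max_interval,
               safety_factor * remaining_life_years(
                   thickness_mm, t_min_mm, rate_mm_per_year))
```

    For example, a floor at 8.0 mm with a 5.0 mm minimum and 0.25 mm/year corrosion rate has a 12-year remaining life, so the next inspection falls due in 6 years.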

  2. Applying network theory to prioritize multispecies habitat networks that are robust to climate and land-use change.

    Science.gov (United States)

    Albert, Cécile H; Rayfield, Bronwyn; Dumitru, Maria; Gonzalez, Andrew

    2017-12-01

    Designing connected landscapes is among the most widespread strategies for achieving biodiversity conservation targets. The challenge lies in simultaneously satisfying the connectivity needs of multiple species at multiple spatial scales under uncertain climate and land-use change. To evaluate the contribution of remnant habitat fragments to the connectivity of regional habitat networks, we developed a method to integrate uncertainty in climate and land-use change projections with the latest developments in network-connectivity research and spatial, multipurpose conservation prioritization. We used land-use change simulations to explore robustness of species' habitat networks to alternative development scenarios. We applied our method to 14 vertebrate focal species of periurban Montreal, Canada. Accounting for connectivity in spatial prioritization strongly modified conservation priorities and the modified priorities were robust to uncertain climate change. Setting conservation priorities based on habitat quality and connectivity maintained a large proportion of the region's connectivity, despite anticipated habitat loss due to climate and land-use change. The application of connectivity criteria alongside habitat-quality criteria for protected-area design was efficient with respect to the amount of area that needs protection and did not necessarily amplify trade-offs among conservation criteria. Our approach and results are being applied in and around Montreal and are well suited to the design of ecological networks and green infrastructure for the conservation of biodiversity and ecosystem services in other regions, in particular regions around large cities, where connectivity is critically low. © 2017 Society for Conservation Biology.
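
    One simple way to score a patch's contribution to network connectivity, in the spirit of the prioritization described above (a sketch on a hypothetical patch network, not the authors' method): remove each patch in turn and measure how much the largest connected cluster of habitat shrinks.

```python
from collections import deque

# Hypothetical habitat network: nodes are patches, links are corridors
# a focal species can traverse.
links = [(1, 2), (2, 3), (3, 4), (4, 5), (2, 6), (6, 3)]
nodes = {n for e in links for n in e}

def largest_component(removed):
    """Size of the largest connected cluster if patch `removed` is lost."""
    adj = {n: set() for n in nodes if n != removed}
    for u, v in links:
        if removed not in (u, v):
            adj[u].add(v)
            adj[v].add(u)
    best, seen = 0, set()
    for start in adj:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        while queue:
            for nxt in adj[queue.popleft()] - comp:
                comp.add(nxt)
                queue.append(nxt)
        seen |= comp
        best = max(best, len(comp))
    return best

# Rank patches by the connectivity loss their removal would cause.
baseline = largest_component(None)
loss = {n: baseline - largest_component(n) for n in nodes}
```

    In this toy network patch 3 causes the largest connectivity loss, so a connectivity-aware prioritization would rank it above equally sized but peripheral patches.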

  3. MODELING AND STRUCTURING OF ENTERPRISE MANAGEMENT SYSTEM RESORT SPHERE BASED ON ELEMENTS OF NEURAL NETWORK THEORY: THE METHODOLOGICAL BASIS

    Directory of Open Access Journals (Sweden)

    Rena R. Timirualeeva

    2015-01-01

    Full Text Available The article describes a methodology for modeling and structuring business networks. It accounts for environmental factors at the mega-, macro- and meso-levels, the internal state of the managed system, and errors in management command execution by the control system. The proposed methodology can improve the quality of management of resort complex enterprises through a more flexible response to changes in the parameters of the internal and external environments.

  4. Summary of discrete fracture network modelling as applied to hydrogeology of the Forsmark and Laxemar sites

    International Nuclear Information System (INIS)

    Hartley, Lee; Roberts, David

    2013-04-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) is responsible for the development of a deep geological repository for spent nuclear fuel. The permitting of such a repository is informed by assessment studies to estimate the risks of the disposal method. One of the potential risks involves the transport of radionuclides in groundwater from defective canisters in the repository to the accessible environment. The Swedish programme for geological disposal of spent nuclear fuel has involved undertaking detailed surface-based site characterisation studies at two different sites, Forsmark and Laxemar-Simpevarp. A key component of the hydrogeological modelling of these two sites has been the development of Discrete Fracture Network (DFN) concepts of groundwater flow through the fractures in the crystalline rocks present. A discrete fracture network model represents some of the characteristics of fractures explicitly, such as their orientation, intensity, size, spatial distribution, shape and transmissivity. This report summarises how the discrete fracture network methodology has been applied to model groundwater flow and transport at Forsmark and Laxemar. The account has involved summarising reports previously published by SKB between 2001 and 2011. The report describes the conceptual framework and assumptions used in interpreting site data, and in particular how data has been used to calibrate the various parameters that define the discrete fracture network representation of bedrock hydrogeology against borehole geologic and hydraulic data. Steps taken to confirm whether the developed discrete fracture network models provide a description of regional-scale groundwater flow and solute transport consistent with wider hydraulic tests and hydrochemical data from Forsmark and Laxemar are discussed. It illustrates the use of derived hydrogeological DFN models in the simulations of the temperate period hydrogeology that provided input to radionuclide transport

  5. Summary of discrete fracture network modelling as applied to hydrogeology of the Forsmark and Laxemar sites

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, Lee; Roberts, David

    2013-04-15

    The Swedish Nuclear Fuel and Waste Management Company (SKB) is responsible for the development of a deep geological repository for spent nuclear fuel. The permitting of such a repository is informed by assessment studies to estimate the risks of the disposal method. One of the potential risks involves the transport of radionuclides in groundwater from defective canisters in the repository to the accessible environment. The Swedish programme for geological disposal of spent nuclear fuel has involved undertaking detailed surface-based site characterisation studies at two different sites, Forsmark and Laxemar-Simpevarp. A key component of the hydrogeological modelling of these two sites has been the development of Discrete Fracture Network (DFN) concepts of groundwater flow through the fractures in the crystalline rocks present. A discrete fracture network model represents some of the characteristics of fractures explicitly, such as their orientation, intensity, size, spatial distribution, shape and transmissivity. This report summarises how the discrete fracture network methodology has been applied to model groundwater flow and transport at Forsmark and Laxemar. The account has involved summarising reports previously published by SKB between 2001 and 2011. The report describes the conceptual framework and assumptions used in interpreting site data, and in particular how data has been used to calibrate the various parameters that define the discrete fracture network representation of bedrock hydrogeology against borehole geologic and hydraulic data. Steps taken to confirm whether the developed discrete fracture network models provide a description of regional-scale groundwater flow and solute transport consistent with wider hydraulic tests and hydrochemical data from Forsmark and Laxemar are discussed. It illustrates the use of derived hydrogeological DFN models in the simulations of the temperate period hydrogeology that provided input to radionuclide transport

  6. Bio-inspired algorithms applied to molecular docking simulations.

    Science.gov (United States)

    Heberlé, G; de Azevedo, W F

    2011-01-01

    Nature as a source of inspiration has been shown to have a great beneficial impact on the development of new computational methodologies. In this scenario, analyses of the interactions between a protein target and a ligand can be simulated by biologically inspired algorithms (BIAs). These algorithms mimic biological systems to create new paradigms for computation, such as neural networks, evolutionary computing, and swarm intelligence. This review provides a description of the main concepts behind BIAs applied to molecular docking simulations. Special attention is devoted to evolutionary algorithms, guided-directed evolutionary algorithms, and Lamarckian genetic algorithms. Recent applications of these methodologies to protein targets identified in the Mycobacterium tuberculosis genome are described.
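
    A minimal genetic algorithm illustrates the evolutionary-computing family mentioned above, here minimizing a toy one-dimensional "energy" where a real docking code would score protein-ligand poses (the landscape and parameters are made up for illustration):

```python
import random

def energy(x):
    """Toy scoring landscape standing in for a docking score; minimum at 3.7."""
    return (x - 3.7) ** 2

def evolve(pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)
        parents = pop[: pop_size // 2]            # selection: keep the fittest
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                   # crossover: blend parents
            child += rng.gauss(0, 0.1)            # mutation: small perturbation
            children.append(child)
        pop = parents + children
    return min(pop, key=energy)

best = evolve()
```

    Lamarckian variants, as used in some docking tools, additionally run a local search on each individual and write the improved solution back into its genotype.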

  7. Applying a social network analysis (SNA) approach to understanding radiologists' performance in reading mammograms

    Science.gov (United States)

    Tavakoli Taba, Seyedamir; Hossain, Liaquat; Heard, Robert; Brennan, Patrick; Lee, Warwick; Lewis, Sarah

    2017-03-01

    Rationale and objectives: Observer performance has been widely studied by examining the characteristics of individuals. Applying a systems perspective to understanding the system's output, however, requires studying the interactions between observers. This research describes a mixed-methods approach that applies social network analysis (SNA), together with the more traditional approach of examining personal/individual characteristics, to understanding observer performance in mammography. Materials and Methods: Using social network theories and measures to understand observer performance, we designed a social network survey instrument for collecting personal and network data about observers involved in mammography performance studies. We present the results of a study by our group in which 31 Australian breast radiologists reviewed 60 mammographic cases (comprising 20 abnormal and 40 normal cases) and then completed an online questionnaire about their social networks and personal characteristics. A jackknife free-response operating characteristic (JAFROC) method was used to measure the performance of the radiologists. JAFROC scores were tested against various personal and network measures to verify the theoretical model. Results: The results from this study suggest a strong association between social networks and observer performance for Australian radiologists. Network factors accounted for 48% of the variance in observer performance, compared with 15.5% for personal characteristics in this study group. Conclusion: This study suggests a strong new direction for research into improving observer performance. Future studies of observer performance should consider the influence of social networks as part of their research paradigm, with equal or greater vigour than the traditional constructs of personal characteristics.

  8. Emerging Concepts and Methodologies in Cancer Biomarker Discovery.

    Science.gov (United States)

    Lu, Meixia; Zhang, Jinxiang; Zhang, Lanjing

    2017-01-01

    Cancer biomarker discovery is a critical part of cancer prevention and treatment. Despite decades of effort, only a small number of cancer biomarkers have been identified for, and validated in, clinical settings. Conceptual and methodological breakthroughs may help accelerate the discovery of additional cancer biomarkers, particularly for diagnostics. In this review, we have attempted to review the emerging concepts in cancer biomarker discovery, including real-world evidence, open-access data, and data paucity in rare or uncommon cancers. We have also summarized the recent methodological progress in cancer biomarker discovery, such as high-throughput sequencing, liquid biopsy, big data, artificial intelligence (AI), and deep learning and neural networks. Much attention has been given to the methodological details and comparison of the methodologies. Notably, these concepts and methodologies interact with each other and will likely lead to synergistic effects when carefully combined. Newer, more innovative concepts and methodologies are emerging as the current ones become mainstream and widely applied in the field. Some future challenges are also discussed. This review contributes to the development of future theoretical frameworks and technologies in cancer biomarker discovery and will contribute to the discovery of more useful cancer biomarkers.

  9. An Intuitive Dominant Test Algorithm of CP-nets Applied on Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Liu Zhaowei

    2014-07-01

    Full Text Available A wireless sensor network consists of spatially distributed autonomous sensors, much like a multi-agent system in which each sensor acts as a single agent. Conditional Preference networks (CP-nets) are a qualitative tool for representing ceteris paribus (all other things being equal) preference statements, and they have recently become a research hotspot in artificial intelligence. However, the algorithm and complexity of the strong dominance test with respect to binary-valued CP-nets have not been solved, and few researchers have addressed applications to other domains. In this paper, the strong dominance test and application of CP-nets are studied in detail. Firstly, by constructing the induced graph of a CP-net and studying its properties, we conclude that the strong dominance test on binary-valued CP-nets is essentially a single-source shortest path problem, so it can be solved by an improved Dijkstra’s algorithm. Secondly, we apply this algorithm to the completeness of a wireless sensor network and design a completeness-judging algorithm based on the strong dominance test. Thirdly, we apply the algorithm to solve a routing problem on the wireless sensor network. In the end, we point out some interesting directions for future work.
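
    The reduction described above can be illustrated with a plain Dijkstra search: if each improving flip between outcomes is a weighted edge of the induced graph, a strong dominance query asks whether the target outcome is reachable from the source. The graph, outcome labels and weights below are hypothetical, not derived from a real CP-net.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths; `graph` maps node -> [(neighbour, weight)]."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical induced graph over outcomes of three binary variables;
# an edge is one improving flip, weighted by an assumed flip cost.
graph = {
    "abc": [("abC", 1), ("aBc", 1)],
    "abC": [("aBC", 1)],
    "aBc": [("aBC", 2)],
    "aBC": [("ABC", 1)],
}
reach = dijkstra(graph, "abc")
```

    A dominance query then reduces to a reachability check: outcome "ABC" dominates "abc" here because it appears in the distance map, with the shortest improving-flip sequence having total weight 3.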

  10. Social Network Analysis and informal trade

    DEFF Research Database (Denmark)

    Walther, Olivier

    networks can be applied to better understand informal trade in developing countries, with a particular focus on Africa. The paper starts by discussing some of the fundamental concepts developed by social network analysis. Through a number of case studies, we show how social network analysis can illuminate the relevant causes of social patterns, the impact of social ties on economic performance, the diffusion of resources and information, and the exercise of power. The paper then examines some of the methodological challenges of social network analysis and how it can be combined with other approaches. The paper finally highlights some of the applications of social network analysis and their implications for trade policies.

  11. BAT methodology applied to the construction of new CCNN

    International Nuclear Information System (INIS)

    Vilches Rodriguez, E.; Campos Feito, O.; Gonzalez Delgado, J.

    2012-01-01

    The BAT methodology should be used in all phases of the project, from preliminary studies and design to decommissioning, and it gains special importance in radioactive waste management and environmental impact studies. Adequate knowledge of this methodology will streamline the decision-making process and facilitate the relationship with regulators and stakeholders.

  12. Applying Statistical Process Quality Control Methodology to Educational Settings.

    Science.gov (United States)

    Blumberg, Carol Joyce

    A subset of Statistical Process Control (SPC) methodology known as Control Charting is introduced. SPC methodology is a collection of graphical and inferential statistics techniques used to study the progress of phenomena over time. The types of control charts covered are the null X (mean), R (Range), X (individual observations), MR (moving…
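The control-charting technique summarized in this record can be illustrated with a short numeric sketch computing X-bar and R chart limits. The A2/D3/D4 values are the standard Shewhart factors for subgroups of five; the score data are hypothetical:

```python
# Illustrative X-bar/R control-chart limits for subgroups of size 5.
A2, D3, D4 = 0.577, 0.0, 2.114  # standard Shewhart factors for n = 5

def xbar_r_limits(subgroups):
    """Center lines and 3-sigma control limits from rational subgroups."""
    xbars = [sum(s) / len(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    xbar_bar = sum(xbars) / len(xbars)   # grand mean: X-bar chart center line
    r_bar = sum(ranges) / len(ranges)    # mean range: R chart center line
    return {
        "xbar": (xbar_bar - A2 * r_bar, xbar_bar, xbar_bar + A2 * r_bar),
        "R": (D3 * r_bar, r_bar, D4 * r_bar),
    }

# Hypothetical weekly test scores for five students, four subgroups.
limits = xbar_r_limits([[70, 75, 72, 68, 74], [71, 69, 73, 70, 75],
                        [68, 72, 74, 71, 70], [73, 70, 69, 72, 71]])
print(limits)
```

Points falling outside the (lower, center, upper) limits would signal that the process has drifted out of statistical control.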

  13. Computer Network Operations Methodology

    Science.gov (United States)

    2004-03-01

    means of their computer information systems. Disrupt - This type of attack focuses on disrupting as “attackers might surreptitiously reprogram enemy...by reprogramming the computers that control distribution within the power grid. A disruption attack introduces disorder and inhibits the effective...between commanders. The use of methodologies is widespread and done subconsciously to assist individuals in decision making. The processes that

  14. Routes towards supplier and production network internationalisation

    OpenAIRE

    A. CAMUFFO; FURLAN A; ROMANO P; VINELLI A

    2007-01-01

    Purpose – The purpose of this paper is to investigate routes towards supplier and production network internationalisation. Design/methodology/approach – Multiple case-study analysis has been applied to a sample of 11 Italian footwear and apparel companies with headquarters located in the North-east of Italy. Within and cross-case analyses illustrate and compare how these firms relocated one or more segments of their supplier and production networks to Romania. Findings – The...

  15. Application of the Wallingford Procedure to sewer network rehabilitation. Rehabilitacion de redes de alcantarillado. Aplicando el sistema de Wallingford

    Energy Technology Data Exchange (ETDEWEB)

    Gomez, M. (Universidad Politecnica de de Cataluna. Barcelona (Spain)); Lopez, R. (Universitat de Lleida. Lleida (Spain))

    1999-01-01

    In this paper we present a summary of the application of the Wallingford Procedure to sewer network rehabilitation studies. After the methodology of the Procedure is reviewed, an application to a sewer network in Sant Boi de Llobregat is shown. Flow survey campaigns, the calibration and validation processes, and the alternative proposed to improve the initial situation are described. Finally, the benefits of applying such methodologies to sewer network rehabilitation analysis are presented. (Author) 3 refs.

  16. Application of the Wallingford Procedure to sewer network rehabilitation; Rehabilitacion de redes de alcantarillado. Aplicando el sistema de Wallingford

    Energy Technology Data Exchange (ETDEWEB)

    Gomez, M. [Universidad Politecnica de de Cataluna. Barcelona (Spain); Lopez, R. [Universitat de Lleida. Lleida (Spain)

    1999-11-01

    In this paper we present a summary of the application of the Wallingford Procedure to sewer network rehabilitation studies. After the methodology of the Procedure is reviewed, an application to a sewer network in Sant Boi de Llobregat is shown. Flow survey campaigns, the calibration and validation processes, and the alternative proposed to improve the initial situation are described. Finally, the benefits of applying such methodologies to sewer network rehabilitation analysis are presented. (Author) 3 refs.

  17. Training of reverse propagation neural networks applied to neutron dosimetry

    International Nuclear Information System (INIS)

    Hernandez P, C. F.; Martinez B, M. R.; Leon P, A. A.; Espinoza G, J. G.; Castaneda M, V. H.; Solis S, L. O.; Castaneda M, R.; Ortiz R, M.; Vega C, H. R.; Mendez V, R.; Gallego, E.; De Sousa L, M. A.

    2016-10-01

    methodology of artificial neural networks where the parameters of the network that produced the best results were selected. (Author)

  18. Road Network Vulnerability Analysis Based on Improved Ant Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Yunpeng Wang

    2014-01-01

    Full Text Available We present an improved ant colony algorithm-based approach to assess the vulnerability of a road network and identify its critical infrastructures. This approach improves computational efficiency and allows for application to large-scale road networks. The research involves defining the concept of vulnerability, modeling the traffic utility index and the vulnerability of the road network, and identifying the critical infrastructures of the road network. We apply the approach to a simple test network and a real road network to verify the methodology. The results show that vulnerability is directly related to traffic demand and increases significantly when the demand approaches capacity. The proposed approach reduces the computational burden and may be applied in large-scale road network analysis. It can be used as a decision-supporting tool for identifying critical infrastructures in transportation planning and management.
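The paper's ant colony algorithm is not reproduced here, but the underlying vulnerability notion it accelerates — how much total travel cost degrades when a link is removed — can be sketched with a plain shortest-path scan. All nodes, links, travel times and demand pairs below are invented:

```python
import heapq

def shortest(graph, s, t):
    """Dijkstra travel time from s to t; graph: node -> {neighbor: time}."""
    dist = {s: 0.0}
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")

def link_vulnerability(graph, od_pairs):
    """Rank each link by the total travel-time increase when it is removed."""
    base = sum(shortest(graph, s, t) for s, t in od_pairs)
    scores = {}
    for u in graph:
        for v in list(graph[u]):
            w = graph[u].pop(v)           # remove link u -> v
            cost = sum(shortest(graph, s, t) for s, t in od_pairs)
            scores[(u, v)] = cost - base  # degradation caused by the removal
            graph[u][v] = w               # restore the link
    return scores

# Toy 4-node network (times in minutes); demand only from A to D.
g = {"A": {"B": 2, "C": 5}, "B": {"D": 2}, "C": {"D": 1}, "D": {}}
scores = link_vulnerability(g, [("A", "D")])
print(max(scores, key=scores.get))  # ('A', 'B') — a most critical link here
```

Scanning every link this way is exactly the brute force whose cost motivates heuristic search such as ant colony optimization on large networks.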

  19. Ant colony optimization and neural networks applied to nuclear power plant monitoring

    International Nuclear Information System (INIS)

    Santos, Gean Ribeiro dos; Andrade, Delvonei Alves de; Pereira, Iraci Martinez

    2015-01-01

    A recurring challenge in production processes is the development of monitoring and diagnosis systems. Such systems help detect unexpected changes and interruptions, preventing losses and mitigating risks. Artificial Neural Networks (ANNs) have been extensively used in creating monitoring systems. Usually the ANNs created to solve this kind of problem take into account only parameters such as the number of inputs, outputs, and hidden layers. The resulting networks are generally fully connected, with no refinement of their topology. This work intends to use an Ant Colony Optimization (ACO) algorithm to create a tuned neural network. The ACO search algorithm will use Back Error Propagation (BP) to optimize the network topology by suggesting the best neuron connections. The resulting ANN will be applied to monitoring the IEA-R1 research reactor at IPEN. (author)

  20. Ant colony optimization and neural networks applied to nuclear power plant monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Gean Ribeiro dos; Andrade, Delvonei Alves de; Pereira, Iraci Martinez, E-mail: gean@usp.br, E-mail: delvonei@ipen.br, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    A recurring challenge in production processes is the development of monitoring and diagnosis systems. Such systems help detect unexpected changes and interruptions, preventing losses and mitigating risks. Artificial Neural Networks (ANNs) have been extensively used in creating monitoring systems. Usually the ANNs created to solve this kind of problem take into account only parameters such as the number of inputs, outputs, and hidden layers. The resulting networks are generally fully connected, with no refinement of their topology. This work intends to use an Ant Colony Optimization (ACO) algorithm to create a tuned neural network. The ACO search algorithm will use Back Error Propagation (BP) to optimize the network topology by suggesting the best neuron connections. The resulting ANN will be applied to monitoring the IEA-R1 research reactor at IPEN. (author)

  1. From experience: applying the risk diagnosing methodology

    NARCIS (Netherlands)

    Keizer, Jimme A.; Halman, Johannes I.M.; Song, Michael

    2002-01-01

    No risk, no reward. Companies must take risks to launch new products speedily and successfully. The ability to diagnose and manage risks is increasingly considered of vital importance in high-risk innovation. This article presents the Risk Diagnosing Methodology (RDM), which aims to identify and

  2. From experience : applying the risk diagnosing methodology

    NARCIS (Netherlands)

    Keizer, J.A.; Halman, J.I.M.; Song, X.M.

    2002-01-01

    No risk, no reward. Companies must take risks to launch new products speedily and successfully. The ability to diagnose and manage risks is increasingly considered of vital importance in high-risk innovation. This article presents the Risk Diagnosing Methodology (RDM), which aims to identify and

  3. Gas ultracentrifuge separative parameters modeling using hybrid neural networks

    International Nuclear Information System (INIS)

    Crus, Maria Ursulina de Lima

    2005-01-01

    A hybrid neural network is developed for the calculation of the separative performance of an ultracentrifuge. A feed-forward neural network is trained to estimate the internal flow parameters of a gas ultracentrifuge, and these parameters are then applied in the diffusion equation. For this study, a data set of 573 experiments is used to establish the relation between the separative performance and the controlled variables. The process control variables considered are the feed flow rate F, the cut θ, and the product pressure Pp. The mechanical arrangements consider the radial waste scoop dimension, the rotating baffle size Ds, and the axial feed location ZE. The methodology was validated through comparison of the calculated separative performance with experimental values. It may be applied to other processes by adapting the phenomenological procedures. (author)

  4. A small-world methodology of analysis of interchange energy-networks: The European behaviour in the economical crisis

    International Nuclear Information System (INIS)

    Dassisti, M.; Carnimeo, L.

    2013-01-01

    European energy policy pursues the objective of a sustainable, competitive and reliable supply of energy. In 2007, the European Commission adopted a proper energy policy for Europe, supported by several documents, which included an action plan to meet the major energy challenges Europe has to face. A farsighted, diversified yearly mix of energies was suggested to countries, aiming at increasing security of supply and efficiency, but a wide and systemic view of energy interchanges between states was missing. In this paper, a small-world methodology for the analysis of Interchange Energy-Networks (IENs) is presented, with the aim of providing a useful tool for planning sustainable energy policies. A proof case is presented to validate the methodology by considering the European IEN's behaviour during the period of economic crisis. This network is approached as a small-world net from a modelling point of view, by supposing that connections between states are characterised by a probability value depending on the economic/political relations between countries. - Highlights: • A different view of the import/export electric energy flows between European countries, for potential use in ruling exchanges. • Panel data from 1996 to 2010, taken from the official Eurostat database, were considered as part of a network of exchanges. • The European import/export energy flows are modelled as a network with small-world phenomena, interpreting their evolution over the years. • An interesting systemic tool for ruling and governing energy flows between countries
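The small-world modelling idea invoked in this record can be illustrated with a standard Watts-Strogatz-style construction; this is a generic sketch with arbitrary parameters, not the authors' IEN model:

```python
import random

def watts_strogatz(n, k, p, seed=42):
    """Ring lattice of n nodes (k neighbours per side), with each original
    edge rewired to a random target with probability p (Watts-Strogatz style)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in range(1, k + 1):
            old = (i + j) % n
            new = rng.randrange(n)
            if rng.random() < p and new != i and new not in adj[i] and old in adj[i]:
                adj[i].discard(old); adj[old].discard(i)
                adj[i].add(new); adj[new].add(i)
    return adj

def avg_clustering(adj):
    """Mean fraction of realised links among each node's neighbours."""
    total = 0.0
    for i, nbrs in adj.items():
        if len(nbrs) < 2:
            continue
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        total += 2.0 * links / (len(nbrs) * (len(nbrs) - 1))
    return total / len(adj)

lattice = watts_strogatz(30, 3, 0.0)  # pure ring lattice: clustering = 0.6
rewired = watts_strogatz(30, 3, 0.1)  # a few random shortcuts: small-world regime
print(avg_clustering(lattice), avg_clustering(rewired))
```

In the paper's setting the nodes would be states and the rewiring probability would encode economic/political ties; here it is just a uniform random draw.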

  5. Signed directed social network analysis applied to group conflict

    DEFF Research Database (Denmark)

    Zheng, Quan; Skillicorn, David; Walther, Olivier

    2015-01-01

    Real-world social networks contain relationships of multiple different types, but this richness is often ignored in graph-theoretic modelling. We show how two recently developed spectral embedding techniques, for directed graphs (relationships are asymmetric) and for signed graphs (relationships are both positive and negative), can be combined. This combination is particularly appropriate for intelligence, terrorism, and law enforcement applications. We illustrate by applying the novel embedding technique to datasets describing conflict in North-West Africa, and show how unusual interactions can be identified.

  6. New challenges and opportunities in the eddy-covariance methodology for long-term monitoring networks

    Science.gov (United States)

    Papale, Dario; Fratini, Gerardo

    2013-04-01

    Eddy-covariance is the most direct and most commonly applied methodology for measuring exchange fluxes of mass and energy between ecosystems and the atmosphere. In recent years, the number of environmental monitoring stations deploying eddy-covariance systems has increased dramatically at the global level, exceeding 500 sites worldwide and covering most climatic and ecological regions. Several long-term environmental research infrastructures such as ICOS, NEON and AmeriFlux selected eddy-covariance as the method to monitor GHG fluxes and are currently working collaboratively towards defining common measurement standards, data processing approaches, QA/QC procedures and uncertainty estimation strategies, with the aim of increasing the defensibility of the resulting fluxes and the intra- and inter-comparability of flux databases. In the meanwhile, the eddy-covariance research community keeps identifying technical and methodological flaws that, in some cases, can introduce - and may have introduced to date - significant biases in measured fluxes or increase their uncertainty. Among those, we identify three issues of presumably greater concern, namely: (1) strong underestimation of water vapour fluxes in closed-path systems, and its dependency on relative humidity; (2) flux biases induced by erroneous measurement of absolute gas concentrations; and (3) systematic errors due to underestimation of the vertical wind variance in non-orthogonal anemometers. If not properly addressed, these issues can reduce the quality and reliability of the method, especially as a standard methodology in long-term monitoring networks. In this work, we review the state of the art regarding such problems, and present new evidence based on field experiments as well as numerical simulations. Our analyses confirm the potential relevance of these issues but also hint at possible coping approaches, to minimize problems during setup design, data collection and post-field flux correction. Corrections are under

  7. A dynamic systems engineering methodology research study. Phase 2: Evaluating methodologies, tools, and techniques for applicability to NASA's systems projects

    Science.gov (United States)

    Paul, Arthur S.; Gill, Tepper L.; Maclin, Arlene P.

    1989-01-01

    A study of NASA's Systems Management Policy (SMP) concluded that the primary methodology being used by the Mission Operations and Data Systems Directorate and its subordinate, the Networks Division, is very effective. Still, some unmet needs were identified. This study involved evaluating methodologies, tools, and techniques with the potential for resolving the previously identified deficiencies. Six preselected methodologies being used by other organizations with similar development problems were studied. The study revealed a wide range of significant differences in structure. Each system had some strengths, but none would satisfy all of the needs of the Networks Division. Areas for improvement of the methodology being used by the Networks Division are listed with recommendations for specific action.

  8. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    International Nuclear Information System (INIS)

    Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.

    2017-01-01

    Here, we present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. Lastly, we also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  9. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    Energy Technology Data Exchange (ETDEWEB)

    Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.; Bagby, L.; Baller, B.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Bugel, L.; Camilleri, L.; Caratelli, D.; Carls, B.; Fernandez, R. Castillo; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Sanchez, L. Escudero; Esquivel, J.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; James, C.; de Vries, J. Jan; Jen, C. -M.; Jiang, L.; Johnson, R. A.; Jones, B. J. P.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Caicedo, D. A. Martinez; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; von Rohr, C. Rudolf; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y. -T.; Tufanli, S.; Usher, T.; Van de Water, R. G.; Viren, B.; Weber, M.; Weston, J.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2017-03-01

    We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  10. VEM: Virtual Enterprise Methodology

    DEFF Research Database (Denmark)

    Tølle, Martin; Vesterager, Johan

    2003-01-01

    This chapter presents a virtual enterprise methodology (VEM) that outlines activities to consider when setting up and managing virtual enterprises (VEs). As a methodology the VEM helps companies to ask the right questions when preparing for and setting up an enterprise network, which works...

  11. Socio-technical modelling of a nuclear organization: case study applied to the Ionizing Radiation Metrology National Laboratory

    International Nuclear Information System (INIS)

    Acar, Maria Elizabeth Dias

    2015-01-01

    A methodology combining process mapping and analysis; knowledge-elicitation mapping and critical analysis; and socio-technical analysis based on social network analysis was conceived. The methodology was applied to a small knowledge-intensive organization, LNMRI, and has allowed the appraisal of the main intellectual assets and their ability to evolve. In this sense, based on real issues such as attrition, the impacts of probable future scenarios were assessed. For this task, a multimodal network of processes, knowledge objects and people was analyzed using a set of appropriate metrics, including the sphere of influence of key nodes. To differentiate people's roles in the processes, node attributes were used to provide partition criteria for the network and thus the ability to distinguish the impact of the potential loss of supervisors from that of operators. The proposed methodology has allowed for: 1) the identification of knowledge objects and their sources; 2) the mapping and ranking of these objects according to their relevance; 3) the assessment of vulnerabilities in LNMRI's network structure; and 4) the revealing of informal mechanisms of knowledge sharing. The conceived methodological framework has proved to be a robust tool for a broad diagnosis to support succession planning and also organizational strategic planning. (author)
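The sphere-of-influence idea in this record can be sketched as reachability on a toy multimodal network: removing a person node shows which knowledge objects a process would lose access to. All names are hypothetical, not LNMRI's actual processes or assets:

```python
# Toy multimodal network: processes depend on people, and people hold
# knowledge objects (prefixed "k:"). Edges point from dependents to assets.
edges = {
    "calibration": ["alice", "bob"],          # process -> people who run it
    "dosimetry":   ["bob"],
    "alice": ["k:standards", "k:protocols"],  # person -> knowledge objects held
    "bob":   ["k:protocols", "k:detectors"],
}

def reachable_knowledge(edges, start, removed=frozenset()):
    """Knowledge objects reachable from a process once `removed` nodes are gone."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen or node in removed:
            continue
        seen.add(node)
        stack.extend(edges.get(node, []))
    return {n for n in seen if n.startswith("k:")}

full = reachable_knowledge(edges, "dosimetry")
after_attrition = reachable_knowledge(edges, "dosimetry", removed={"bob"})
print(full - after_attrition)  # knowledge at risk if "bob" leaves
```

Comparing the two sets for every process gives a crude attrition-impact ranking of people, in the spirit of the vulnerability assessment the thesis describes.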

  12. Building Modelling Methodologies for Virtual District Heating and Cooling Networks

    Energy Technology Data Exchange (ETDEWEB)

    Saurav, Kumar; Choudhury, Anamitra R.; Chandan, Vikas; Lingman, Peter; Linder, Nicklas

    2017-10-26

    District heating and cooling (DHC) systems are a proven energy solution that has been deployed for many years in a growing number of urban areas worldwide. They comprise a variety of technologies that seek to develop synergies between the production and supply of heat, cooling, domestic hot water and electricity. Although the benefits of DHC systems are significant and have been widely acclaimed, the full potential of modern DHC systems remains largely untapped. There are several opportunities for the development of energy-efficient DHC systems, which will enable the effective exploitation of alternative renewable resources, waste heat recovery, etc., in order to increase overall efficiency and facilitate the transition towards the next generation of DHC systems. This motivates the need for modelling these complex systems. Large-scale modelling of DHC networks is challenging, as they have several components interacting with each other. In this paper we present two methodologies for modelling the consumer buildings. These models will be further integrated with the network model and the control-system layer to create a virtual test bed for the entire DHC system. The model is validated using data collected from a real-life DHC system located in Lulea, a city on the coast of northern Sweden. The test bed will then be used for simulating various test cases such as peak energy reduction, overall demand reduction, etc.
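A common starting point for consumer-building models in DHC studies is a lumped first-order RC thermal model. The sketch below is a generic illustration with invented parameters, not the methodology or data of the Lulea test bed:

```python
# First-order lumped RC model of a consumer building: one thermal
# capacitance C [J/K] behind one resistance R [K/W] to outdoors, plus heat
# input q_heat [W] from the district-heating substation (all values made up).
def simulate(T0, T_out, q_heat, R=0.005, C=2e7, dt=600.0, steps=144):
    """Euler integration of C*dT/dt = (T_out - T)/R + q_heat over 24 h."""
    T, trace = T0, []
    for _ in range(steps):
        T += dt / C * ((T_out - T) / R + q_heat)
        trace.append(T)
    return trace

# A cold day: the building warms from 18 C toward the steady state
# T_out + R*q_heat = -10 + 0.005*6200 = 21 C.
trace = simulate(T0=18.0, T_out=-10.0, q_heat=6200.0)
print(round(trace[-1], 2))  # indoor temperature after 24 h of constant heating
```

Coupling many such building models to a pipe-network model is the kind of integration the paper describes for its virtual test bed.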

  13. Hyperspectral and thermal methodologies applied to landslide monitoring

    Science.gov (United States)

    Vellico, Michela; Sterzai, Paolo; Pietrapertosa, Carla; Mora, Paolo; Berti, Matteo; Corsini, Alessandro; Ronchetti, Francesco; Giannini, Luciano; Vaselli, Orlando

    2010-05-01

    Landslide monitoring is a highly topical issue. Landslides are a widespread phenomenon across the European territory, and they have been responsible for huge economic losses. The aim of the WISELAND research project (Integrated Airborne and Wireless Sensor Network systems for Landslide Monitoring), funded by the Italian Government, is to test new monitoring techniques capable of rapidly and successfully characterizing large landslides in fine soils. Two active earthflows in the Northern Italian Apennines were chosen as test sites and investigated: Silla (Bologna Province) and Valoria (Modena Province). The project implies the use of remote sensing methodologies, with particular focus on the joint use of airborne Lidar, hyperspectral and thermal systems. These innovative techniques give promising results, since they allow detection of the principal landslide components and evaluation of the spatial distribution of parameters relevant to landslide dynamics, such as surface water content and roughness. In this paper we focus on the response of the terrain obtained with a hyperspectral system and its integration with the complementary information obtained using a thermal sensor. The potential of a hyperspectral dataset acquired in the VNIR (Visible and Near Infrared) range and of the spectral response of the terrain is high, since they give important information both on the soil and on the vegetation status. Several significant indexes can be calculated, such as the NDVI, obtained from a band in the red region and a band in the infrared region; it gives information on vegetation health and, indirectly, on the water content of soils. This is a key point that bridges hyperspectral and thermal datasets. Thermal infrared data are closely related to soil moisture, one of the most important parameters affecting surface stability in soil slopes. Effective stresses and shear strength in unsaturated soils are directly related to water content, and
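The NDVI computation mentioned in the abstract is simple enough to state directly; the reflectance values below are made up:

```python
def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red) for one pixel's reflectances."""
    return (nir - red) / (nir + red)

# Hypothetical reflectances: a healthy vegetated pixel vs. a bare landslide scarp.
print(round(ndvi(0.50, 0.08), 2), round(ndvi(0.25, 0.20), 2))  # 0.72 0.11
```

High values indicate dense, healthy vegetation; low values over the earthflow body are what make the index useful for delineating active landslide areas.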

  14. Topological Taxonomy of Water Distribution Networks

    Directory of Open Access Journals (Sweden)

    Carlo Giudicianni

    2018-04-01

    Full Text Available Water Distribution Networks (WDNs) can be regarded as complex networks and modeled as graphs. In this paper, Complex Network Theory is applied to characterize the behavior of WDNs from a topological point of view, reviewing some basic metrics, exploring their fundamental properties and the relationships between them. The crucial aim is to understand and describe the topology of WDNs and their structural organization, to provide a novel tool of analysis which could help to find new solutions to several arduous problems of WDNs. The aim is to understand the role of the topological structure in the functioning of WDNs. The methodology is applied to 21 existing networks and 13 literature networks. The comparison highlights some topological peculiarities and the possibility to define a set of best design parameters for ex-novo WDNs that could also be used to build hypothetical benchmark networks retaining the typical structure of real WDNs. Two well-known types of network, (a) a square grid and (b) a random graph, are used for comparison, aiming at defining a possible mathematical model for WDNs. Finally, the interplay between topology and some performance requirements of WDNs is discussed.

  15. The application of complex network time series analysis in turbulent heated jets

    International Nuclear Information System (INIS)

    Charakopoulos, A. K.; Karakasidis, T. E.; Liakopoulos, A.; Papanicolaou, P. N.

    2014-01-01

    In the present study, we applied the methodology of complex network-based time series analysis to experimental temperature time series from a vertical turbulent heated jet. More specifically, we approach the hydrodynamic problem of discriminating time series corresponding to various regions relative to the jet axis, i.e., distinguishing time series recorded close to the jet axis from time series originating in regions with a different dynamical regime, based on the constructed network properties. Applying the phase-space transformation method (k nearest neighbors) and also the visibility algorithm, we transformed the time series into networks and evaluated topological properties of the networks such as the degree distribution, average path length, diameter, modularity, and clustering coefficient. The results show that the complex network approach allows distinguishing, identifying, and exploring in detail various dynamical regions of the jet flow, and associating them with the corresponding physical behavior. In addition, in order to reject the hypothesis that the studied networks originate from a stochastic process, we generated random networks and compared their statistical properties with those originating from the experimental data. As far as the efficiency of the two methods for network construction is concerned, we conclude that both methodologies lead to network properties that present almost the same qualitative behavior and allow us to reveal the underlying system dynamics.
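The visibility algorithm referenced above maps a time series to a network by connecting samples that can "see" each other over the intervening values. A compact sketch of the natural visibility criterion (the test series is arbitrary):

```python
def visibility_graph(series):
    """Natural visibility graph: samples i and j (i < j) are connected when
    every intermediate sample lies strictly below the line joining them."""
    n, edges = len(series), set()
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

def degrees(edges, n):
    """Degree sequence of the resulting (undirected) network."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return deg

ts = [3.0, 1.0, 2.0, 0.5, 4.0]
e = visibility_graph(ts)
print(sorted(e), degrees(e, len(ts)))
```

Once the series is a graph, the metrics listed in the abstract (degree distribution, path length, clustering, etc.) can be computed with ordinary network tools.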

  16. Research Network of Tehran Defined Population: Methodology and Establishment

    Directory of Open Access Journals (Sweden)

    Ali-Asghar Kolahi

    2015-12-01

    Full Text Available Background: We need a defined population for determining the prevalence and incidence of diseases, as well as for conducting interventional, cohort and longitudinal studies, calculating correct and timely public health indicators, assessing the actual health needs of the community, performing educational programs and interventions to promote a healthy lifestyle, and enhancing the quality of primary health services. The objective of this project was to establish a defined population which is representative of Tehran, the capital of Iran. This article reports the methodology and establishment of the Research Network of Tehran Defined Population. Methods: This project started by selecting two urban health centers from each of the five district health centers affiliated to Shahid Beheshti University of Medical Sciences in 2012. Inside each selected urban health center, one defined-population research station was established. Two new centers were added during 2013 and 2014. For the time being, the population covered by the network has reached 40000 individuals. The most important criterion for the defined population has been to be representative of the population of Tehran. For this, we selected two urban health centers from 12 of the 22 municipality districts and from each of the five different socioeconomic strata of Greater Tehran. About 80000 individuals in the neighborhoods of each defined-population research station were considered as the control group of the project. Findings: In total, we selected 12 defined-population research stations, and their covered population constitutes a defined population which is representative of the population of Tehran. Conclusion: A population laboratory is now ready in metropolitan Tehran.

  17. Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction

    NARCIS (Netherlands)

    Zhao, Yu; Yang, Rennong; Chevalier, Guillaume; Shah, Rajiv C.; Romijnders, Rob

    2018-01-01

    Data analytics helps basketball teams to create tactics. However, manual data collection and analytics are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a

  18. Conceptual and methodological biases in network models.

    Science.gov (United States)

    Lamm, Ehud

    2009-10-01

    Many natural and biological phenomena can be depicted as networks. Theoretical and empirical analyses of networks have become prevalent. I discuss theoretical biases involved in the delineation of biological networks. The network perspective is shown to dissolve the distinction between regulatory architecture and regulatory state, consistent with the theoretical impossibility of distinguishing a priori between "program" and "data." The evolutionary significance of the dynamics of trans-generational and interorganism regulatory networks is explored and implications are presented for understanding the evolution of the biological categories development-heredity, plasticity-evolvability, and epigenetic-genetic.

  19. A methodology based on dynamic artificial neural network for short-term forecasting of the power output of a PV generator

    International Nuclear Information System (INIS)

    Almonacid, F.; Pérez-Higueras, P.J.; Fernández, Eduardo F.; Hontoria, L.

    2014-01-01

Highlights: • The output of most renewable energy sources depends on the variability of weather conditions. • Short-term forecasting is essential for effectively integrating solar energy sources. • A new method based on artificial neural networks to predict the power output of a PV generator one hour ahead is proposed. • The method uses dynamic artificial neural networks to predict global solar irradiance and air temperature. • The methodology can estimate the power output of a PV generator within a satisfactory margin of error. - Abstract: One of the problems of some renewable energies is that their output is non-dispatchable, depending on the variability of weather conditions, which cannot be predicted or controlled. From this point of view, short-term forecasting is essential for effectively integrating solar energy sources and is a very useful tool for the reliability and stability of the grid, ensuring that an adequate supply is present. In this paper a new methodology for forecasting the output of a PV generator one hour ahead, based on dynamic artificial neural networks, is presented. The results of this study show that the proposed methodology can forecast the power output of PV systems one hour ahead with an acceptable degree of accuracy.

  20. Economic evaluation of health promotion interventions for older people: do applied economic studies meet the methodological challenges?

    Science.gov (United States)

    Huter, Kai; Dubas-Jakóbczyk, Katarzyna; Kocot, Ewa; Kissimova-Skarbek, Katarzyna; Rothgang, Heinz

    2018-01-01

In the light of demographic developments, health promotion interventions for older people are gaining importance. In addition to the methodological challenges arising from the economic evaluation of health promotion interventions in general, there are specific methodological problems for the particular target group of older people. Four main methodological challenges are discussed in the literature: the measurement and valuation of informal caregiving, accounting for productivity costs, the effect of unrelated costs in added life years, and the inclusion of 'beyond-health' benefits. This paper focuses on whether, and to what extent, these specific methodological requirements are actually met in applied health economic evaluations. Following a systematic review of pertinent health economic evaluations, the included studies are analysed on the basis of four assessment criteria derived from methodological debates on the economic evaluation of health promotion interventions in general and economic evaluations targeting older people in particular. Of the 37 studies included in the systematic review, only very few include the cost and outcome categories discussed as being of specific relevance to the assessment of health promotion interventions for older people. The few studies that consider these aspects use very heterogeneous methods, so there is no common methodological standard. There is a strong need for guidelines to achieve better comparability and to include cost categories and outcomes that are relevant for older people. Disregarding these methodological obstacles could implicitly lead to discrimination against the elderly in health promotion and disease prevention and, hence, to an age-based rationing of public health care.

  1. The social processes of production and validation of knowledge in particle physics: Preliminary theoretical and methodological observations

    OpenAIRE

    Bellotti, Elisa

    2011-01-01

This paper explores the complementarities and differences between Bourdieu's Field Theory and Social Network Analysis from both a theoretical and a methodological perspective. The argument is applied to a case study of the social production and validation of knowledge in particle physics in Italy. The methodological choices that have led the research project are presented and justified, and provide a good example of the strengths and the weaknesses of the two theoretical perspectives com...

  2. Methodology and boundary conditions applied to the analysis on internal flooding for Kozloduy NPP units 5 and 6

    International Nuclear Information System (INIS)

    Demireva, E.; Goranov, S.; Horstmann, R.

    2004-01-01

Within the Modernization Program of Units 5 and 6 of Kozloduy NPP, a comprehensive analysis of internal flooding has been carried out by FRAMATOME ANP and ENPRO Consult for the reactor building outside the containment and for the turbine hall. The objective of this presentation is to provide information on the applied methodology and boundary conditions. A separate report called 'Methodology and boundary conditions' was elaborated to provide the foundation for the study. The methodology report provides definitions and advice on the following topics: scope of the study; safety objectives; basic assumptions and postulates (plant conditions, grace periods for manual actions, single failure postulate, etc.); sources of flooding (postulated piping leaks and ruptures, malfunctions and personnel error); main activities of the flooding analysis; study conclusions and suggestions of remedial measures. (authors)

  3. Hybrid response surface methodology-artificial neural network optimization of drying process of banana slices in a forced convective dryer.

    Science.gov (United States)

    Taheri-Garavand, Amin; Karimi, Fatemeh; Karimi, Mahmoud; Lotfi, Valiullah; Khoobbakht, Golmohammad

    2018-06-01

The aim of this study is to fit predictive models using response surface methodology and an artificial neural network, and to find the operating conditions of maximum acceptability using the desirability function methodology, for a hot air drying process of banana slices. Drying air temperature, air velocity, and drying time were chosen as independent factors, and moisture content, drying rate, energy efficiency, and exergy efficiency were the dependent variables (responses) of the drying process. A rotatable central composite design was used to develop models for the responses in the response surface methodology, and isoresponse contour plots were useful for predicting results while performing only a limited set of experiments. The optimum operating conditions obtained from the artificial neural network models were a moisture content of 0.14 g/g, a drying rate of 1.03 g water/g h, an energy efficiency of 0.61, and an exergy efficiency of 0.91, when the air temperature, air velocity, and drying time were equal to -0.42 (74.2 ℃), 1.00 (1.50 m/s), and -0.17 (2.50 h) in coded units, respectively.
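A rotatable central composite design of the kind used in the study can be generated mechanically from coded factor levels. The sketch below is a generic construction (not the authors' exact run plan): 2^k factorial points at ±1, 2k axial points at ±alpha with alpha = (2^k)^(1/4) for rotatability, plus center runs.

```python
from itertools import product

def central_composite(k, n_center=6):
    """Coded factor levels for a rotatable central composite design:
    2**k factorial points, 2*k axial points at +/- alpha, and
    n_center center runs, where alpha = (2**k) ** 0.25."""
    alpha = (2 ** k) ** 0.25
    factorial = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for sign in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = sign
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + center

design = central_composite(3)   # 8 factorial + 6 axial + 6 center = 20 runs
```

For three factors (temperature, velocity, time) this yields the familiar 20-run plan; each coded row is then mapped back to physical units, as in the abstract's "-0.42 (74.2 ℃)" notation.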

  4. Improving the Reliability of Network Metrics in Structural Brain Networks by Integrating Different Network Weighting Strategies into a Single Graph

    Directory of Open Access Journals (Sweden)

    Stavros I. Dimitriadis

    2017-12-01

level. Importantly, both network-wise and node-wise ICCs of network metrics derived from the topologically filtered ISWBN (ISWBNTF) were further improved compared to the non-filtered ISWBN. Finally, in the recognition accuracy tests, we assigned each single ISWBNTF to the correct subject. We also applied our methodology to a second dataset of diffusion-weighted MRI in healthy controls and individuals with psychotic experience. Following a binary classification scheme, the classification performance based on the ISWBNTF outperformed the nine different weighting strategies and the ISWBN. Overall, these findings suggest that the proposed methodology results in improved characterization of genuine between-subject differences in connectivity, leading to the possibility of network-based structural phenotyping.

  5. Applying of component system development in object methodology

    Directory of Open Access Journals (Sweden)

    Milan Mišovič

    2013-01-01

-oriented methodology (Arlo, Neust, 2007), (Kan, Müller, 2005), (Krutch, 2003) for problem domains with double-layer process logic. An integration method is indicated, based on a certain meta-model (Applying of the Component system Development in object Methodology) and leading to the formation of a component system. The meta-model is divided into partial workflows located in different stages of a classic object process-based methodology, taking into account the consistency of the input and output artifacts in the working practices of the meta-model and the mentioned object methodology. This paper focuses on static component systems, from which the exploration of dynamic and mobile component systems is starting. In addition, the component system is understood here as a specific system; for its system properties and basic terms, the notation of set theory, graph theory and system algebra is used.

  6. Neural networks (NN applied to the commercial properties valuation

    Directory of Open Access Journals (Sweden)

    J. M. Núñez Tabales

    2017-03-01

Several agents, such as buyers and sellers, or local or tax authorities, need to estimate the value of properties. There are different approaches to obtaining the market price of a dwelling. Many papers in the academic literature address this purpose, but they are almost always oriented toward estimating hedonic prices of residential properties such as houses or apartments. Here these methodologies are used to estimate the market price of commercial premises, using AI techniques. A case study is developed in Cordova, a city in the south of Spain. Neural networks are an attractive alternative to traditional hedonic modelling approaches, as they are better adapted to the non-linearities of causal relationships and also produce smaller valuation errors. From the NN model it is also possible to obtain the implicit prices associated with the main attributes that explain the variability of the market price of commercial properties.

  7. Variable identification in group method of data handling methodology

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Iraci Martinez, E-mail: martinez@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Bueno, Elaine Inacio [Instituto Federal de Educacao, Ciencia e Tecnologia, Guarulhos, SP (Brazil)

    2011-07-01

The Group Method of Data Handling (GMDH) is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has traditionally been determined using a layer-by-layer pruning process based on a preselected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on the underlying assumption that the data can be modeled by an approximation of the Volterra series or Kolmogorov-Gabor polynomial. A Monitoring and Diagnosis System was developed based on GMDH and Artificial Neural Network (ANN) methodologies and applied to the IPEN research reactor IEA-R1. The GMDH was used to study the best set of variables with which to train an ANN, resulting in the best estimate of the monitored variables. The system performs monitoring by comparing these estimated values with measured ones. The IPEN reactor data acquisition system is composed of 58 variables (process and nuclear variables). As GMDH is a self-organizing methodology, the input variables are chosen automatically, and the actual input variables used in the Monitoring and Diagnosis System do not appear in the final result. This work presents a study of variable identification in the GMDH methodology by means of an algorithm that works in parallel with the GMDH algorithm and traces the paths of the initial variables, resulting in an identification of the variables that compose the best Monitoring and Diagnosis Model. (author)

  8. Variable identification in group method of data handling methodology

    International Nuclear Information System (INIS)

    Pereira, Iraci Martinez; Bueno, Elaine Inacio

    2011-01-01

The Group Method of Data Handling (GMDH) is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has traditionally been determined using a layer-by-layer pruning process based on a preselected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on the underlying assumption that the data can be modeled by an approximation of the Volterra series or Kolmogorov-Gabor polynomial. A Monitoring and Diagnosis System was developed based on GMDH and Artificial Neural Network (ANN) methodologies and applied to the IPEN research reactor IEA-R1. The GMDH was used to study the best set of variables with which to train an ANN, resulting in the best estimate of the monitored variables. The system performs monitoring by comparing these estimated values with measured ones. The IPEN reactor data acquisition system is composed of 58 variables (process and nuclear variables). As GMDH is a self-organizing methodology, the input variables are chosen automatically, and the actual input variables used in the Monitoring and Diagnosis System do not appear in the final result. This work presents a study of variable identification in the GMDH methodology by means of an algorithm that works in parallel with the GMDH algorithm and traces the paths of the initial variables, resulting in an identification of the variables that compose the best Monitoring and Diagnosis Model. (author)
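The two records above describe GMDH's layer-by-layer selection of pairwise partial models. A minimal illustrative sketch of one such selection step follows. Note the simplifications: classic GMDH fits quadratic Ivakhnenko polynomials and scores candidates on a separate validation set, whereas here linear partial models and in-sample squared error are used for brevity; the data are invented.

```python
from itertools import combinations

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 linear system.
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_pair(xi, xj, y):
    # Least-squares fit of y ~ a + b*xi + c*xj via the normal equations.
    n = len(y)
    sij = sum(u * v for u, v in zip(xi, xj))
    A = [[n, sum(xi), sum(xj)],
         [sum(xi), sum(v * v for v in xi), sij],
         [sum(xj), sij, sum(v * v for v in xj)]]
    rhs = [sum(y),
           sum(u * t for u, t in zip(xi, y)),
           sum(v * t for v, t in zip(xj, y))]
    return solve3(A, rhs)

def gmdh_layer(X, y):
    """One GMDH selection step: fit a partial model on every pair of
    input variables and return the pair with the lowest squared error."""
    best = None
    for i, j in combinations(range(len(X)), 2):
        a, b, c = fit_pair(X[i], X[j], y)
        e = sum((a + b * u + c * v - t) ** 2
                for u, v, t in zip(X[i], X[j], y))
        if best is None or e < best[0]:
            best = (e, (i, j), (a, b, c))
    return best

# y depends on variables 0 and 2 only; variable 1 is noise.
X = [[1, 2, 3, 4, 5], [9, 1, 7, 3, 5], [2, 1, 4, 3, 6]]
y = [2 * u + 3 * w for u, w in zip(X[0], X[2])]
err, pair, coeffs = gmdh_layer(X, y)
```

The selection correctly traces the output back to variables 0 and 2, which is exactly the kind of variable identification the records discuss.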

  9. WGCNA: an R package for weighted correlation network analysis.

    Science.gov (United States)

    Langfelder, Peter; Horvath, Steve

    2008-12-29

Correlation networks are increasingly being used in bioinformatics applications. For example, weighted gene co-expression network analysis is a systems biology method for describing the correlation patterns among genes across microarray samples. Weighted correlation network analysis (WGCNA) can be used for finding clusters (modules) of highly correlated genes, for summarizing such clusters using the module eigengene or an intramodular hub gene, for relating modules to one another and to external sample traits (using eigengene network methodology), and for calculating module membership measures. Correlation networks facilitate network based gene screening methods that can be used to identify candidate biomarkers or therapeutic targets. These methods have been successfully applied in various biological contexts, e.g. cancer, mouse genetics, yeast genetics, and analysis of brain imaging data. While parts of the correlation network methodology have been described in separate publications, there is a need to provide a user-friendly, comprehensive, and consistent software implementation and an accompanying tutorial. The WGCNA R software package is a comprehensive collection of R functions for performing various aspects of weighted correlation network analysis. The package includes functions for network construction, module detection, gene selection, calculations of topological properties, data simulation, visualization, and interfacing with external software. Along with the R package we also present R software tutorials. While the methods development was motivated by gene expression data, the underlying data mining approach can be applied to a variety of different settings. The WGCNA package provides R functions for weighted correlation network analysis, e.g. co-expression network analysis of gene expression data. The R package along with its source code and additional material are freely available at http://www.genetics.ucla.edu/labs/horvath/CoexpressionNetwork/Rpackages/WGCNA.
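The soft-thresholding step at the heart of WGCNA is easy to state: raise the absolute Pearson correlation between expression profiles to a power beta so that weak correlations are suppressed while strong ones survive. A toy sketch (beta = 6 is a common default for unsigned networks; the profiles are invented, and the real package adds module detection, topological overlap, and much more):

```python
def pearson(x, y):
    # Plain Pearson correlation coefficient.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def soft_adjacency(profiles, beta=6):
    """Weighted co-expression adjacency a_ij = |cor(x_i, x_j)|**beta,
    the soft-thresholding step of weighted correlation network analysis."""
    g = len(profiles)
    adj = [[0.0] * g for _ in range(g)]
    for i in range(g):
        for j in range(g):
            if i != j:
                adj[i][j] = abs(pearson(profiles[i], profiles[j])) ** beta
    return adj

# Three "genes" across five samples; genes 0 and 1 co-vary perfectly.
profiles = [[1, 2, 3, 4, 5],
            [2, 4, 6, 8, 10],
            [5, 1, 4, 2, 3]]
adj = soft_adjacency(profiles)
connectivity = [sum(row) for row in adj]   # node connectivity k_i
```

Perfectly correlated genes keep adjacency 1, while the weakly correlated third gene is driven close to 0, which is what makes the resulting network approximately scale-free.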

  10. Applying the Network Simulation Method for testing chaos in a resistively and capacitively shunted Josephson junction model

    Directory of Open Access Journals (Sweden)

    Fernando Gimeno Bellver

In this paper, we explore the chaotic behavior of resistively and capacitively shunted Josephson junctions via the so-called Network Simulation Method. This numerical approach establishes a formal equivalence between physical transport processes and electrical networks, and hence it can be applied to efficiently deal with a wide range of differential systems. The generality underlying that electrical equivalence allows circuit theory to be applied to several scientific and technological problems. In this work, the Fast Fourier Transform has been applied for chaos detection purposes, and the calculations have been carried out in PSpice, an electrical circuit simulation package. Overall, this numerical approach makes it possible to solve Josephson differential models quickly. An empirical application regarding the study of the Josephson model completes the paper. Keywords: Electrical analogy, Network Simulation Method, Josephson junction, Chaos indicator, Fast Fourier Transform
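The shunted-junction model that the paper solves by electrical analogy can also be integrated directly. A minimal sketch in dimensionless form, beta_c * phi'' + phi' + sin(phi) = i_bias, with illustrative parameter values (not taken from the paper): for a bias current below the critical value, the junction locks into a zero-voltage state with phase asin(i_bias).

```python
import math

def rcsj(i_bias, beta_c, dt=0.01, steps=20000):
    """Fourth-order Runge-Kutta integration of the dimensionless RCSJ
    equation beta_c * phi'' + phi' + sin(phi) = i_bias.
    Returns the final phase and the voltage-like series phi'(t)."""
    def deriv(phi, v):
        return v, (i_bias - v - math.sin(phi)) / beta_c
    phi, v, vs = 0.0, 0.0, []
    for _ in range(steps):
        k1p, k1v = deriv(phi, v)
        k2p, k2v = deriv(phi + 0.5 * dt * k1p, v + 0.5 * dt * k1v)
        k3p, k3v = deriv(phi + 0.5 * dt * k2p, v + 0.5 * dt * k2v)
        k4p, k4v = deriv(phi + dt * k3p, v + dt * k3v)
        phi += dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        vs.append(v)
    return phi, vs

phi_end, vs = rcsj(i_bias=0.5, beta_c=1.0)   # sub-critical bias: locks
```

For bias currents above the critical value the junction runs; applying an FFT to the voltage series (the paper's chaos indicator) then distinguishes periodic from chaotic running states.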

  11. Evolving RBF neural networks for adaptive soft-sensor design.

    Science.gov (United States)

    Alexandridis, Alex

    2013-12-01

    This work presents an adaptive framework for building soft-sensors based on radial basis function (RBF) neural network models. The adaptive fuzzy means algorithm is utilized in order to evolve an RBF network, which approximates the unknown system based on input-output data from it. The methodology gradually builds the RBF network model, based on two separate levels of adaptation: On the first level, the structure of the hidden layer is modified by adding or deleting RBF centers, while on the second level, the synaptic weights are adjusted with the recursive least squares with exponential forgetting algorithm. The proposed approach is tested on two different systems, namely a simulated nonlinear DC Motor and a real industrial reactor. The results show that the produced soft-sensors can be successfully applied to model the two nonlinear systems. A comparison with two different adaptive modeling techniques, namely a dynamic evolving neural-fuzzy inference system (DENFIS) and neural networks trained with online backpropagation, highlights the advantages of the proposed methodology.
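The two adaptation levels described above can be illustrated compactly. The sketch below keeps the RBF centers fixed and adapts only the output weights, and it substitutes a simple LMS gradient step for the paper's recursive least squares with exponential forgetting; the "plant" being tracked is an invented function, so this is a toy of the idea rather than the authors' algorithm.

```python
import math, random

class RBFSoftSensor:
    """Minimal RBF soft-sensor: fixed Gaussian centers, linear output
    weights adapted online via LMS (used here in place of recursive
    least squares with forgetting, for brevity)."""
    def __init__(self, centers, width=1.0, lr=0.1):
        self.centers = centers
        self.width = width
        self.lr = lr
        self.w = [0.0] * len(centers)

    def _phi(self, x):
        # Gaussian radial basis activations.
        return [math.exp(-((x - c) / self.width) ** 2) for c in self.centers]

    def predict(self, x):
        return sum(w * p for w, p in zip(self.w, self._phi(x)))

    def update(self, x, target):
        # One stochastic-gradient step on the squared prediction error.
        err = target - self.predict(x)
        self.w = [w + self.lr * err * p
                  for w, p in zip(self.w, self._phi(x))]
        return err

random.seed(0)
sensor = RBFSoftSensor(centers=[0.0, 0.5, 1.0], width=0.5, lr=0.2)
for _ in range(2000):
    x = random.random()
    sensor.update(x, math.sin(2 * x))   # "unknown plant": y = sin(2x)
```

The structural level of adaptation in the paper (adding and deleting centers with the adaptive fuzzy means algorithm) would modify `self.centers` online as new operating regions appear.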

  12. Using social networking to understand social networks: analysis of a mobile phone closed user group used by a Ghanaian health team.

    Science.gov (United States)

    Kaonga, Nadi Nina; Labrique, Alain; Mechael, Patricia; Akosah, Eric; Ohemeng-Dapaah, Seth; Sakyi Baah, Joseph; Kodie, Richmond; Kanter, Andrew S; Levine, Orin

    2013-04-03

The network structure of an organization influences how well or poorly the organization communicates and manages its resources. In the Millennium Villages Project site in Bonsaaso, Ghana, a mobile phone closed user group has been introduced for use by the Bonsaaso Millennium Villages Project Health Team and other key individuals. No assessment of the benefits or barriers of the use of the closed user group had been carried out. The purpose of this research was to make the case for applying social network analysis methods in health systems research, specifically in relation to mobile health. This study used mobile phone voice records of, conducted interviews with, and reviewed call journals kept by a mobile phone closed user group consisting of the Bonsaaso Millennium Villages Project Health Team. Social network analysis methodology, complemented by a qualitative component, was used. Monthly voice data of the closed user group from Airtel Bharti Ghana were analyzed using UCINET, and visual depictions of the network were created using NetDraw. Interviews and call journals kept by informants were analyzed using NVivo. The methodology was successful in helping identify effective organizational structure. Members of the Health Management Team, rather than the Community Health Nurses (who might have been expected to be central), were the more central players in the network. Social network analysis methodology can be used to determine the most productive structure for an organization or team, identify gaps in communication, identify the key actors with greatest influence, and more. In conclusion, this methodology can be a useful analytical tool, especially in the context of mobile health, health services, and operational and managerial research.
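The centrality finding above rests on a simple computation that tools like UCINET perform on call data. A sketch of normalized degree centrality from a call log (the log entries and names here are hypothetical, not from the study):

```python
from collections import Counter

def degree_centrality(calls):
    """Normalized degree centrality from a call log of (caller, callee)
    pairs; an undirected edge exists if any call occurred between two
    people, and degree is divided by the maximum possible (n - 1)."""
    edges = {frozenset(c) for c in calls if c[0] != c[1]}
    nodes = {n for e in edges for n in e}
    deg = Counter()
    for e in edges:
        for n in e:
            deg[n] += 1
    denom = len(nodes) - 1
    return {n: deg[n] / denom for n in nodes}

# Hypothetical closed-user-group log: 'coord' bridges three nurses.
log = [("coord", "nurse1"), ("coord", "nurse2"),
       ("coord", "nurse3"), ("nurse1", "coord")]
cent = degree_centrality(log)
```

The repeated call between `coord` and `nurse1` collapses into one edge; a coordinator connected to everyone gets centrality 1.0, which is how "more central players" are identified in the study's terms.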

  13. Methodical approach to training of IT-professionals based on networking

    Directory of Open Access Journals (Sweden)

    Vyacheslav V. Zolotarev

    2017-12-01

Increasing requirements for the content and form of higher education in the digital economy set new tasks for professors: the formation of applied competences, the involvement of students in project activities, and the provision of online support for students' individual and project work. The growing load on university professors makes it difficult to satisfy these requirements. The development of network interaction among professors makes it possible to redistribute the load of methodological provision for disciplines. The article reveals the possibilities of professors' network interaction through innovative teaching methods, including gaming forms and online courses. The scientific novelty of the research lies in implementing professors' network interaction and experimentally applying innovative teaching methods. Network interaction was carried out in the educational process for students in the following areas: information security, applied information technology, and business informatics.

  14. Neural Network-Based State Estimation for a Closed-Loop Control Strategy Applied to a Fed-Batch Bioreactor

    Directory of Open Access Journals (Sweden)

    Santiago Rómoli

    2017-01-01

The lack of online information on some bioprocess variables and the presence of model and parametric uncertainties pose significant challenges to the design of efficient closed-loop control strategies. To address this issue, this work proposes an online state estimator based on a Radial Basis Function (RBF) neural network that operates in closed loop together with a control law derived from a linear algebra-based design strategy. The proposed methodology is applied to a class of nonlinear systems with three types of uncertainties: (i) time-varying parameters, (ii) uncertain nonlinearities, and (iii) unmodeled dynamics. To reduce the effect of uncertainties on the bioreactor, some integrators of the tracking error are introduced, which in turn allow the derivation of the proper control actions. This new control scheme guarantees that all signals are uniformly ultimately bounded and that the tracking error converges to small values. The effectiveness of the proposed approach is illustrated through simulated experiments on a fed-batch bioreactor, and its performance is compared with two controllers available in the literature.

  15. Researching virtual worlds methodologies for studying emergent practices

    CERN Document Server

    Phillips, Louise

    2013-01-01

    This volume presents a wide range of methodological strategies that are designed to take into account the complex, emergent, and continually shifting character of virtual worlds. It interrogates how virtual worlds emerge as objects of study through the development and application of various methodological strategies. Virtual worlds are not considered objects that exist as entities with fixed attributes independent of our continuous engagement with them and interpretation of them. Instead, they are conceived of as complex ensembles of technology, humans, symbols, discourses, and economic structures, ensembles that emerge in ongoing practices and specific situations. A broad spectrum of perspectives and methodologies is presented: Actor-Network-Theory and post-Actor-Network-Theory, performativity theory, ethnography, discourse analysis, Sense-Making Methodology, visual ethnography, multi-sited ethnography, and Social Network Analysis.

  16. Digital processing methodology applied to exploring of radiological images; Metodologia de processamento digital aplicada a exploracao de imagens radiologicas

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Cristiane de Queiroz

    2004-07-01

In this work, digital image processing is applied as an automatic computational method for the exploration of radiological images. An automatic routine was developed, based on segmentation and post-processing techniques, for radiological images acquired from an arrangement consisting of an X-ray tube, a molybdenum target and filter of 0.4 mm and 0.03 mm, respectively, and a CCD detector. The efficiency of the developed methodology is shown through a case study in which internal injuries in mangoes are automatically detected and monitored; the methodology is a possible tool to be introduced in the post-harvest process in packing houses. A dichotomous test was applied to evaluate the efficiency of the method. The results show a success rate of 87.7% for correct diagnoses and 12.3% for failures, with a sensitivity of 93% and a specificity of 80%. (author)

  17. Multi-criteria decision making with linguistic labels: a comparison of two methodologies applied to energy planning

    OpenAIRE

    Afsordegan, Arayeh; Sánchez Soler, Monica; Agell Jané, Núria; Cremades Oliver, Lázaro Vicente; Zahedi, Siamak

    2014-01-01

    This paper compares two multi-criteria decision making (MCDM) approaches based on linguistic label assessment. The first approach consists of a modified fuzzy TOPSIS methodology introduced by Kaya and Kahraman in 2011. The second approach, introduced by Agell et al. in 2012, is based on qualitative reasoning techniques for ranking multi-attribute alternatives in group decision-making with linguistic labels. Both approaches are applied to a case of assessment and selection of the most suita...

  18. Dynamical systems on networks a tutorial

    CERN Document Server

    Porter, Mason A

    2016-01-01

    This volume is a tutorial for the study of dynamical systems on networks. It discusses both methodology and models, including spreading models for social and biological contagions. The authors focus especially on “simple” situations that are analytically tractable, because they are insightful and provide useful springboards for the study of more complicated scenarios. This tutorial, which also includes key pointers to the literature, should be helpful for junior and senior undergraduate students, graduate students, and researchers from mathematics, physics, and engineering who seek to study dynamical systems on networks but who may not have prior experience with graph theory or networks. Mason A. Porter is Professor of Nonlinear and Complex Systems at the Oxford Centre for Industrial and Applied Mathematics, Mathematical Institute, University of Oxford, UK. He is also a member of the CABDyN Complexity Centre and a Tutorial Fellow of Somerville College. James P. Gleeson is Professor of Industrial and Appli...

  19. Mathematical Modelling and Optimization of Cutting Force, Tool Wear and Surface Roughness by Using Artificial Neural Network and Response Surface Methodology in Milling of Ti-6242S

    Directory of Open Access Journals (Sweden)

    Erol Kilickap

    2017-10-01

In this paper, an experimental study was conducted to determine the effect of different cutting parameters, such as cutting speed, feed rate, and depth of cut, on cutting force, surface roughness, and tool wear in the milling of Ti-6242S alloy using cemented carbide (WC) end mills with a 10 mm diameter. The data obtained from the experiments were modeled with both an Artificial Neural Network (ANN) and Response Surface Methodology (RSM). The ANN was trained using the Levenberg-Marquardt (LM) algorithm, while the mathematical models in RSM were created by applying a Box-Behnken design. The values obtained from the ANN and the RSM were found to be very close to the experimental data. The lowest cutting force and surface roughness were obtained at high cutting speeds and low feed rates and depths of cut. The minimum tool wear was obtained at low cutting speed, feed rate, and depth of cut.

  20. Matrix product algorithm for stochastic dynamics on networks applied to nonequilibrium Glauber dynamics

    Science.gov (United States)

    Barthel, Thomas; De Bacco, Caterina; Franz, Silvio

    2018-01-01

    We introduce and apply an efficient method for the precise simulation of stochastic dynamical processes on locally treelike graphs. Networks with cycles are treated in the framework of the cavity method. Such models correspond, for example, to spin-glass systems, Boolean networks, neural networks, or other technological, biological, and social networks. Building upon ideas from quantum many-body theory, our approach is based on a matrix product approximation of the so-called edge messages—conditional probabilities of vertex variable trajectories. Computation costs and accuracy can be tuned by controlling the matrix dimensions of the matrix product edge messages (MPEM) in truncations. In contrast to Monte Carlo simulations, the algorithm has a better error scaling and works for both single instances as well as the thermodynamic limit. We employ it to examine prototypical nonequilibrium Glauber dynamics in the kinetic Ising model. Because of the absence of cancellation effects, observables with small expectation values can be evaluated accurately, allowing for the study of decay processes and temporal correlations.
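The nonequilibrium Glauber dynamics that the matrix product algorithm targets is itself simple to state: spins are updated one at a time with a heat-bath acceptance probability. A minimal Monte Carlo sketch on a ring (the paper's MPEM approach is a deterministic alternative to exactly this kind of sampling, with better error scaling; parameters here are illustrative):

```python
import math, random

def glauber_ising_ring(n, beta, sweeps, seed=1):
    """Monte Carlo Glauber dynamics for the Ising model on a ring.
    Each step, a random spin i flips with the heat-bath probability
    1 / (1 + exp(2 * beta * s_i * h_i)), where h_i is the local field
    from the two ring neighbors (coupling J = 1)."""
    rng = random.Random(seed)
    s = [1] * n                       # fully ordered initial condition
    for _ in range(sweeps * n):
        i = rng.randrange(n)
        h = s[(i - 1) % n] + s[(i + 1) % n]
        if rng.random() < 1.0 / (1.0 + math.exp(2 * beta * s[i] * h)):
            s[i] = -s[i]
    return s

spins = glauber_ising_ring(n=50, beta=0.3, sweeps=200)
m = abs(sum(spins)) / len(spins)      # 1-D Ising: no ordered phase
```

As the abstract notes, observables with small expectation values (such as this magnetization) are exactly where Monte Carlo sampling noise hurts and where the deterministic MPEM evaluation pays off.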

  1. SCIENTIFIC METHODOLOGY FOR THE APPLIED SOCIAL SCIENCES: CRITICAL ANALYSES ABOUT RESEARCH METHODS, TYPOLOGIES AND CONTRIBUTIONS FROM MARX, WEBER AND DURKHEIM

    Directory of Open Access Journals (Sweden)

    Mauricio Corrêa da Silva

    2015-06-01

This study discusses the importance of the scientific method for conducting and publicizing research in the applied social sciences, reviews research typologies, and highlights the contributions of Marx, Weber and Durkheim to scientific methodology. To reach this objective, we conducted a review of the literature on the term research, the scientific method, research techniques and scientific methodologies. The results of the investigation revealed that it is fundamental that academic investigators use a scientific method to conduct and publicize their work in the applied social sciences, as in the biochemical or computer sciences and in the indicated literature. Regarding contributions to scientific methodology: Marx contributed the dialectical, explanatory analysis of social phenomena and the need to understand phenomena as historical and concrete totalities; Weber, the distinction between "facts" and "value judgments" to provide objectivity to the social sciences; and Durkheim, the need to conceptualize the object of study very well, reject sensory data, and be imbued with the spirit of discovery and of being surprised by the results.

  2. Phosphoproteomics-based systems analysis of signal transduction networks

    Directory of Open Access Journals (Sweden)

Hiroko Kozuka-Hata

    2012-01-01

Signal transduction systems coordinate complex cellular information to regulate biological events such as cell proliferation and differentiation. Although accumulating evidence on the widespread association of signaling molecules has revealed the essential contribution of phosphorylation-dependent interaction networks to cellular regulation, their dynamic behavior remains mostly unanalyzed. Recent technological advances in mass spectrometry-based quantitative proteomics have enabled us to describe the comprehensive status of phosphorylated molecules in a time-resolved manner. Computational analyses based on phosphoproteome dynamics accelerate the generation of novel methodologies for the mathematical analysis of cellular signaling. Phosphoproteomics-based numerical modeling can be used to evaluate regulatory network elements from a statistical point of view, and integration with transcriptome dynamics also uncovers regulatory hubs at the transcriptional level. These omics-based computational methodologies, which were first applied to representative signaling systems such as the epidermal growth factor receptor pathway, have now opened up a gate for systems analysis of signaling networks involved in immune response and cancer.

  3. A replication and methodological critique of the study "Evaluating drug trafficking on the Tor Network"

    DEFF Research Database (Denmark)

    Munksgaard, Rasmus; Demant, Jakob Johan; Branwen, Gwern

    2016-01-01

    The development of cryptomarkets has gained increasing attention from academics, including growing scientific literature on the distribution of illegal goods using cryptomarkets. Dolliver's 2015 article “Evaluating drug trafficking on the Tor Network: Silk Road 2, the Sequel” addresses this theme...... by evaluating drug trafficking on one of the most well-known cryptomarkets, Silk Road 2.0. The research on cryptomarkets in general—particularly in Dolliver's article—poses a number of new questions for methodologies. This commentary is structured around a replication of Dolliver's original study...

  4. Risk and reliability assessment for telecommunications networks

    Energy Technology Data Exchange (ETDEWEB)

    Wyss, G.D.; Schriner, H.K.; Gaylor, T.R.

    1996-08-01

    Sandia National Laboratories has assembled an interdisciplinary team to explore the applicability of probabilistic logic modeling (PLM) techniques to model network reliability for a wide variety of communications network architectures. The authors have found that the reliability and failure modes of current generation network technologies can be effectively modeled using fault tree PLM techniques. They have developed a "plug-and-play" fault tree analysis methodology that can be used to model connectivity and the provision of network services in a wide variety of current generation network architectures. They have also developed an efficient search algorithm that can be used to determine the minimal cut sets of an arbitrarily-interconnected (non-hierarchical) network without the construction of a fault tree model. This paper provides an overview of these modeling techniques and describes how they are applied to networks that exhibit hybrid network structures (i.e., a network in which some areas are hierarchical and some areas are not hierarchical).
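
    The minimal cut sets the abstract refers to are the smallest sets of links whose combined failure disconnects a source from a destination. The Sandia search algorithm itself is not described in the abstract, so the following is a generic brute-force sketch (the function and the diamond-shaped example network are illustrative):

    ```python
    from itertools import combinations

    def minimal_cut_sets(edges, source, target):
        """Enumerate minimal sets of links whose removal disconnects source from target.

        Brute force: check link subsets in order of increasing size and keep a
        subset only if no already-found cut set is contained in it (minimality).
        Suitable only for small networks; illustrative, not the Sandia algorithm.
        """
        def connected(removed):
            # breadth-first search over the surviving links
            adjacency = {}
            for u, v in edges:
                if (u, v) in removed:
                    continue
                adjacency.setdefault(u, set()).add(v)
                adjacency.setdefault(v, set()).add(u)
            seen, frontier = {source}, [source]
            while frontier:
                node = frontier.pop()
                for nxt in adjacency.get(node, ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
            return target in seen

        cuts = []
        for size in range(1, len(edges) + 1):
            for subset in combinations(edges, size):
                s = set(subset)
                if any(c <= s for c in cuts):
                    continue  # contains a smaller cut set: not minimal
                if not connected(s):
                    cuts.append(s)
        return cuts

    # A diamond-shaped network: two disjoint paths from A to D.
    edges = [("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")]
    cuts = minimal_cut_sets(edges, "A", "D")
    ```

    For the diamond example, every minimal cut set must sever both disjoint paths, giving four two-link cut sets; a realistic model replaces this exhaustive search with the kind of efficient algorithm the paper describes.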

  5. A review of methodologies applied in Australian practice to evaluate long-term coastal adaptation options

    Directory of Open Access Journals (Sweden)

    Timothy David Ramm

    2017-01-01

    Full Text Available Rising sea levels have the potential to alter coastal flooding regimes around the world and local governments are beginning to consider how to manage uncertain coastal change. In doing so, there is increasing recognition that such change is deeply uncertain and unable to be reliably described with probabilities or a small number of scenarios. Characteristics of methodologies applied in Australian practice to evaluate long-term coastal adaptation options are reviewed and benchmarked against two state-of-the-art international methods suited for conditions of uncertainty (Robust Decision Making and Dynamic Adaptive Policy Pathways. Seven out of the ten Australian case studies assumed the uncertain parameters, such as sea level rise, could be described deterministically or stochastically when identifying risk and evaluating adaptation options across multi-decadal periods. This basis is not considered sophisticated enough for long-term decision-making, implying that Australian practice needs to increase the use of scenarios to explore a much larger uncertainty space when assessing the performance of adaptation options. Two Australian case studies mapped flexible adaptation pathways to manage uncertainty, and there remains an opportunity to incorporate quantitative methodologies to support the identification of risk thresholds. The contextual framing of risk, including the approach taken to identify risk (top-down or bottom-up and treatment of uncertain parameters, were found to be fundamental characteristics that influenced the methodology selected to evaluate adaptation options. The small sample of case studies available suggests that long-term coastal adaptation in Australia is in its infancy and there is a timely opportunity to guide local government towards robust methodologies for developing long-term coastal adaptation plans.

  6. Case Study: LCA Methodology Applied to Materials Management in a Brazilian Residential Construction Site

    Directory of Open Access Journals (Sweden)

    João de Lassio

    2016-01-01

    Full Text Available The construction industry is increasingly concerned with improving the social, economic, and environmental indicators of sustainability. More than ever, the growing demand for construction materials reflects increased consumption of raw materials and energy, particularly during the phases of extraction, processing, and transportation of materials. This work aims to help decision-makers and to promote life cycle thinking in the construction industry. For this purpose, the life cycle assessment (LCA methodology was chosen to analyze the environmental impacts of building materials used in the construction of a residence project in São Gonçalo, Rio de Janeiro, Brazil. The LCA methodology, based on ISO 14040 and ISO 14044 guidelines, is applied with available databases and the SimaPro program. As a result, this work shows that there is a substantial waste of nonrenewable energy, increasing global warming and harm to human health in this type of construction. This study also points out that, for this type of Brazilian construction, ceramic materials account for a high percentage of the mass of a total building and are thus responsible for the majority of environmental impacts.

  7. Applying distance-to-target weighing methodology to evaluate the environmental performance of bio-based energy, fuels, and materials

    International Nuclear Information System (INIS)

    Weiss, Martin; Patel, Martin; Heilmeier, Hermann; Bringezu, Stefan

    2007-01-01

    The enhanced use of biomass for the production of energy, fuels, and materials is one of the key strategies towards sustainable production and consumption. Various life cycle assessment (LCA) studies demonstrate the great potential of bio-based products to reduce both the consumption of non-renewable energy resources and greenhouse gas emissions. However, the production of biomass requires agricultural land and is often associated with adverse environmental effects such as eutrophication of surface and ground water. Decision making in favor of or against bio-based and conventional fossil product alternatives therefore often requires weighing of environmental impacts. In this article, we apply distance-to-target weighing methodology to aggregate LCA results obtained in four different environmental impact categories (i.e., non-renewable energy consumption, global warming potential, eutrophication potential, and acidification potential) to one environmental index. We include 45 bio- and fossil-based product pairs in our analysis, which we conduct for Germany. The resulting environmental indices for all product pairs analyzed range from -19.7 to +0.2 with negative values indicating overall environmental benefits of bio-based products. Except for three options of packaging materials made from wheat and cornstarch, all bio-based products (including energy, fuels, and materials) score better than their fossil counterparts. Comparing the median values for the three options of biomass utilization reveals that bio-energy (-1.2) and bio-materials (-1.0) offer significantly higher environmental benefits than bio-fuels (-0.3). The results of this study reflect, however, subjective value judgments due to the weighing methodology applied. Given the uncertainties and controversies associated not only with distance-to-target methodologies in particular but also with weighing approaches in general, the authors strongly recommend using weighing for decision finding only as a
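
    Distance-to-target weighing can be sketched as follows: each impact category is weighted by how far the current national level is from its policy target, and the weighted, normalized impact differences are summed into a single index (negative meaning the bio-based option is preferable). The category names, levels, and targets below are illustrative placeholders, not the German data used in the study:

    ```python
    def distance_to_target_index(deltas, current, target):
        """One-index aggregation of LCA category results.

        Weight per category = current level / target level (the farther from
        the target, the higher the weight); each impact difference is
        normalized by the current level before weighting. A simplified form
        of the approach, for illustration only.
        """
        return sum((current[c] / target[c]) * (deltas[c] / current[c])
                   for c in deltas)

    # Illustrative bio-minus-fossil differences in three impact categories
    deltas = {"energy": -50.0, "gwp": -10.0, "eutrophication": 5.0}
    current = {"energy": 1000.0, "gwp": 800.0, "eutrophication": 100.0}
    target = {"energy": 500.0, "gwp": 400.0, "eutrophication": 80.0}
    idx = distance_to_target_index(deltas, current, target)
    ```

    With these made-up numbers the energy and climate benefits outweigh the eutrophication penalty, so the index comes out negative, mirroring the sign convention in the abstract.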

  8. The Pediatric Emergency Care Applied Research Network: a history of multicenter collaboration in the United States.

    Science.gov (United States)

    Tzimenatos, Leah; Kim, Emily; Kuppermann, Nathan

    2015-01-01

    In this article, we review the history and progress of a large multicenter research network pertaining to emergency medical services for children. We describe the history, organization, infrastructure, and research agenda of the Pediatric Emergency Care Applied Research Network and highlight some of the important accomplishments since its inception. We also describe the network's strategy to grow its research portfolio, train new investigators, and study how to translate new evidence into practice. This strategy ensures not only the sustainability of the network in the future but the growth of research in emergency medical services for children in general.

  9. METHODOLOGY FOR GENERATION OF CORPORATE NETWORK HOSTNAME

    OpenAIRE

    Garrigós, Allan Mac Quinn; Sassi, Renato José

    2011-01-01

    A corporate network, in general terms, consists of two or more interconnected computers sharing information. For this sharing to function properly, the naming of the computers within the network is extremely important: proper organization of names in Active Directory (AD, the Domain Controller) removes improperly created duplicate names and prevents the breakdown of communication between machines bearing the same name on the network. The aim of this study was to de...
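
    The abstract is truncated before the methodology itself is described, but its core requirement — systematically generated, domain-unique names within the NetBIOS 15-character limit — can be sketched. The prefix/department/counter scheme below is a hypothetical convention, not the one proposed in the study:

    ```python
    def make_hostname(prefix, department, existing, max_len=15):
        """Return a unique hostname and record it in `existing`.

        NetBIOS computer names are limited to 15 characters, and duplicate
        names in one AD domain break name resolution, so a numeric suffix
        is incremented until an unused name is found.
        """
        base = f"{prefix}-{department}".upper()
        n = 1
        while True:
            name = f"{base}-{n:03d}"
            if len(name) > max_len:
                raise ValueError("base name too long for the NetBIOS limit")
            if name not in existing:
                existing.add(name)
                return name
            n += 1
    ```

    Seeding `existing` with the names already present in AD guarantees that newly generated machines never collide with current ones.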

  10. A generic methodology for the design of sustainable carbon dioxide utilization processes using superstructure optimization

    DEFF Research Database (Denmark)

    Frauzem, Rebecca; Gani, Rafiqul

    , including as an extractive agent or raw material. Chemical conversion, an important element of utilization, involves the use of carbon dioxide as a reactant in the production of chemical compounds [2]. However, for feasible implementation, a systematic methodology is needed for the design of the utilization......, especially chemical conversion, processes. To achieve this, a generic methodology has been developed, which adopts a three-stage approach consisting in (i) process synthesis, (ii) process design, and (iii) innovative and sustainable design [3]. This methodology, with the individual steps and associated...... methods and tools, has been developed and applied to carbon dioxide utilization networks. This work will focus on the first stage, process synthesis, of this three-stage methodology; process synthesis is important in determining the appropriate processing route to produce products from a selection...

  11. Network analysis applications in hydrology

    Science.gov (United States)

    Price, Katie

    2017-04-01

    Applied network theory has seen pronounced expansion in recent years, in fields such as epidemiology, computer science, and sociology. Concurrent development of analytical methods and frameworks has increased possibilities and tools available to researchers seeking to apply network theory to a variety of problems. While water and nutrient fluxes through stream systems clearly demonstrate a directional network structure, the hydrological applications of network theory remain underexplored. This presentation covers a review of network applications in hydrology, followed by an overview of promising network analytical tools that potentially offer new insights into conceptual modeling of hydrologic systems, identifying behavioral transition zones in stream networks and thresholds of dynamical system response. Network applications were tested along an urbanization gradient in two watersheds in Atlanta, Georgia, USA: Peachtree Creek and Proctor Creek. Peachtree Creek contains a nest of five long-term USGS streamflow and water quality gages, allowing network application of long-term flow statistics. The watershed spans a range of suburban and heavily urbanized conditions. Summary flow statistics and water quality metrics were analyzed using a suite of network analysis techniques, to test the conceptual modeling and predictive potential of the methodologies. Storm events and low flow dynamics during Summer 2016 were analyzed using multiple network approaches, with an emphasis on tomogravity methods. Results indicate that network theory approaches offer novel perspectives for understanding long-term and event-based hydrological data. Key future directions for network applications include 1) optimizing data collection, 2) identifying "hotspots" of contaminant and overland flow influx to stream systems, 3) defining process domains, and 4) analyzing dynamic connectivity of various system components, including groundwater-surface water interactions.
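
    As a minimal illustration of the directional network structure of streams, the sketch below propagates a locally measured quantity (for instance a contaminant load) downstream through a small gage network using a topological traversal. Node names and loads are hypothetical, not the Peachtree Creek gage data:

    ```python
    def upstream_accumulation(downstream_of, local_load):
        """Accumulate a quantity down a directed stream network.

        `downstream_of[node]` names the reach each node drains into (None at
        the outlet); the total at a node is its own load plus every upstream
        contribution, computed with a Kahn-style topological traversal.
        """
        total = dict(local_load)
        indegree = {n: 0 for n in downstream_of}
        for n, d in downstream_of.items():
            if d is not None:
                indegree[d] += 1
        ready = [n for n, k in indegree.items() if k == 0]  # headwaters first
        while ready:
            n = ready.pop()
            d = downstream_of[n]
            if d is not None:
                total[d] += total[n]
                indegree[d] -= 1
                if indegree[d] == 0:
                    ready.append(d)
        return total

    # Two headwater gages draining into g3, which drains to the outlet g4.
    downstream_of = {"g1": "g3", "g2": "g3", "g3": "g4", "g4": None}
    loads = {"g1": 2.0, "g2": 3.0, "g3": 1.0, "g4": 0.5}
    totals = upstream_accumulation(downstream_of, loads)
    ```

    Per-gage accumulated totals of this kind are the raw material for the "hotspot" identification the abstract lists as a future direction.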

  12. Classification of brain compartments and head injury lesions by neural networks applied to MRI

    International Nuclear Information System (INIS)

    Kischell, E.R.; Kehtarnavaz, N.; Hillman, G.R.; Levin, H.; Lilly, M.; Kent, T.A.

    1995-01-01

    An automatic, neural network-based approach was applied to segment normal brain compartments and lesions on MR images. Two supervised networks, backpropagation (BPN) and counterpropagation, and two unsupervised networks, Kohonen learning vector quantizer and analog adaptive resonance theory, were trained on registered T2-weighted and proton density images. The classes of interest were background, gray matter, white matter, cerebrospinal fluid, macrocystic encephalomalacia, gliosis, and 'unknown'. A comprehensive feature vector was chosen to discriminate these classes. The BPN combined with feature conditioning, multiple discriminant analysis followed by Hotelling transform, produced the most accurate and consistent classification results. Classifications of normal brain compartments were generally in agreement with expert interpretation of the images. Macrocystic encephalomalacia and gliosis were recognized and, except around the periphery, classified in agreement with the clinician's report used to train the neural network. (orig.)

  13. Classification of brain compartments and head injury lesions by neural networks applied to MRI

    Energy Technology Data Exchange (ETDEWEB)

    Kischell, E R [Dept. of Electrical Engineering, Texas A and M Univ., College Station, TX (United States); Kehtarnavaz, N [Dept. of Electrical Engineering, Texas A and M Univ., College Station, TX (United States); Hillman, G R [Dept. of Pharmacology, Univ. of Texas Medical Branch, Galveston, TX (United States); Levin, H [Dept. of Neurosurgery, Univ. of Texas Medical Branch, Galveston, TX (United States); Lilly, M [Dept. of Neurosurgery, Univ. of Texas Medical Branch, Galveston, TX (United States); Kent, T A [Dept. of Neurology and Psychiatry, Univ. of Texas Medical Branch, Galveston, TX (United States)

    1995-10-01

    An automatic, neural network-based approach was applied to segment normal brain compartments and lesions on MR images. Two supervised networks, backpropagation (BPN) and counterpropagation, and two unsupervised networks, Kohonen learning vector quantizer and analog adaptive resonance theory, were trained on registered T2-weighted and proton density images. The classes of interest were background, gray matter, white matter, cerebrospinal fluid, macrocystic encephalomalacia, gliosis, and 'unknown'. A comprehensive feature vector was chosen to discriminate these classes. The BPN combined with feature conditioning, multiple discriminant analysis followed by Hotelling transform, produced the most accurate and consistent classification results. Classifications of normal brain compartments were generally in agreement with expert interpretation of the images. Macrocystic encephalomalacia and gliosis were recognized and, except around the periphery, classified in agreement with the clinician's report used to train the neural network. (orig.)

  14. Development of new methodology for dose calculation in photographic dosimetry

    International Nuclear Information System (INIS)

    Daltro, T.F.L.

    1994-01-01

    A new methodology for equivalent dose calculation has been developed at IPEN-CNEN/SP to be applied at the Photographic Dosimetry Laboratory using artificial intelligence techniques by means of a neural network. The research was oriented towards the optimization of the whole set of parameters involved in the film processing, going from the irradiation used to obtain the calibration curve up to the optical density readings. The learning of the neural network was performed by taking the readings of optical density from the calibration curve as input and the effective energy and equivalent dose as output. The results obtained in the intercomparison show an excellent agreement with the actual values of dose and energy given by the National Metrology Laboratory of Ionizing Radiation. (author)

  15. Development of new methodology for dose calculation in photographic dosimetry

    International Nuclear Information System (INIS)

    Daltro, T.F.L.; Campos, L.L.

    1994-01-01

    A new methodology for equivalent dose calculation has been developed at IPEN-CNEN/SP to be applied at the Photographic Dosimetry Laboratory using artificial intelligence techniques by means of neural network. The research was oriented towards the optimization of the whole set of parameters involved in the film processing going from the irradiation in order to obtain the calibration curve up to the optical density readings. The learning of the neural network was performed by taking readings of optical density from calibration curve as input and the effective energy and equivalent dose as output. The obtained results in the intercomparison show an excellent agreement with the actual values of dose and energy given by the National Metrology Laboratory of Ionizing Radiation

  16. Actor-Network Theory as a sociotechnical lens to explore the relationship of nurses and technology in practice: methodological considerations for nursing research.

    Science.gov (United States)

    Booth, Richard G; Andrusyszyn, Mary-Anne; Iwasiw, Carroll; Donelle, Lorie; Compeau, Deborah

    2016-06-01

    Actor-Network Theory is a research lens that has gained popularity in the nursing and health sciences domains. The perspective allows a researcher to describe the interaction of actors (both human and non-human) within networked sociomaterial contexts, including complex practice environments where nurses and health technology operate. This study will describe Actor-Network Theory and provide methodological considerations for researchers who are interested in using this sociotechnical lens within nursing and informatics-related research. Considerations related to technology conceptualization, levels of analysis, and sampling procedures in Actor-Network Theory based research are addressed. Finally, implications for future nursing research within complex environments are highlighted. © 2015 John Wiley & Sons Ltd.

  17. Separation and Determination of Honokiol and Magnolol in Chinese Traditional Medicines by Capillary Electrophoresis with the Application of Response Surface Methodology and Radial Basis Function Neural Network

    Science.gov (United States)

    Han, Ping; Luan, Feng; Yan, Xizu; Gao, Yuan; Liu, Huitao

    2012-01-01

    A method for the separation and determination of honokiol and magnolol in Magnolia officinalis and its medicinal preparation is developed by capillary zone electrophoresis and response surface methodology. The concentration of borate, content of organic modifier, and applied voltage are selected as variables. The optimized conditions (i.e., 16 mmol/L sodium tetraborate at pH 10.0, 11% methanol, applied voltage of 25 kV and UV detection at 210 nm) are obtained and successfully applied to the analysis of honokiol and magnolol in Magnolia officinalis and Huoxiang Zhengqi Liquid. Good separation is achieved within 6 min. The limits of detection are 1.67 µg/mL for honokiol and 0.83 µg/mL for magnolol, respectively. In addition, an artificial neural network with “3-7-1” structure based on the ratio of peak resolution to the migration time of the later component (Rs/t) given by Box-Behnken design is also reported, and the predicted results are in good agreement with the values given by the mathematic software and the experimental results. PMID:22291059
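
    A Box-Behnken design for the three factors here (borate concentration, methanol content, applied voltage) runs every ±1 pair combination with the remaining factor held at its midpoint, plus replicated centre points — 15 runs in total for three factors. A sketch in coded units (the run order and number of centre points are illustrative):

    ```python
    from itertools import combinations

    def box_behnken(k, center_points=3):
        """Box-Behnken design matrix in coded units (-1, 0, +1).

        For each pair of factors, all four (+/-1, +/-1) combinations are run
        with the other factors at the midpoint, followed by replicated
        centre runs. Not the authors' exact run order.
        """
        runs = []
        for i, j in combinations(range(k), 2):
            for a in (-1, 1):
                for b in (-1, 1):
                    row = [0] * k
                    row[i], row[j] = a, b
                    runs.append(row)
        runs.extend([[0] * k for _ in range(center_points)])
        return runs

    design = box_behnken(3)
    ```

    Each coded row is then decoded to physical settings (e.g. -1/0/+1 on factor one mapping to low/mid/high borate concentration) before the electrophoresis runs that feed the response surface model.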

  18. A conceptual methodology to design a decision support system to leak detection programs in water networks

    International Nuclear Information System (INIS)

    Di Federico, V.; Bottarelli, M.; Di Federico, I.

    2005-01-01

    The paper outlines a conceptual methodology to develop a decision support system to assist technicians managing water networks in selecting the appropriate leak detection method(s). First, the necessary knowledge about the network is recapitulated: location and characteristics of its physical components, but also water demand, breaks in pipes, and water quality data. Second, the water balance in a typical Italian agency is discussed, suggesting methods and procedures to evaluate and/or estimate each term in the mass balance equation. Then the available methods for leak detection are described in detail, from those useful in the pre-localization phase to those commonly adopted to pinpoint pipe failures and allow a rapid repair. Criteria to estimate the costs associated with each of these methods are provided. Finally, the proposed structure of the DSS is described.
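
    The water balance step can be illustrated with a simplified IWA-style decomposition, in which real (physical) losses — the quantity leak detection programs target — are what remains of the system input after authorized consumption and apparent losses are subtracted. The term names and figures below are illustrative:

    ```python
    def water_balance(system_input, billed, unbilled_authorized, apparent_losses):
        """Simplified water balance for a distribution network.

        non-revenue water = system input - billed consumption;
        real losses = system input - all authorized consumption
                      - apparent losses (metering errors, unauthorized use).
        A sketch of the mass balance, not the paper's full term list.
        """
        non_revenue = system_input - billed
        real_losses = (system_input - billed
                       - unbilled_authorized - apparent_losses)
        return {"non_revenue_water": non_revenue, "real_losses": real_losses}

    # Illustrative annual volumes in thousand cubic metres
    balance = water_balance(system_input=1000.0, billed=700.0,
                            unbilled_authorized=50.0, apparent_losses=80.0)
    ```

    A DSS can compare the estimated real losses against an economic threshold to decide whether an active leak detection campaign is worth launching.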

  19. Equity portfolio optimization: A DEA based methodology applied to the Zagreb Stock Exchange

    Directory of Open Access Journals (Sweden)

    Margareta Gardijan

    2015-10-01

    Full Text Available Most portfolio selection strategies focus on utilizing solely market data; they implicitly assume that stock markets communicate all relevant information to all market stakeholders and that these markets cannot be influenced by investor activities. However convenient, this is a limited approach, especially when applied to small and illiquid markets such as the Croatian market, where such assumptions are hardly realistic. Thus, there is a demand for including other sources of data, such as financial reports. This research poses the question of whether financial ratios, as criteria for stock selection, are of any use to Croatian investors. Financial and market data from selected public companies listed on the Croatian capital market are used. A two-stage portfolio selection strategy is applied, where the first stage involves selecting stocks based on their respective Data Envelopment Analysis (DEA) efficiency scores. DEA models are becoming popular in stock portfolio selection given that the methodology includes numerous models that provide great flexibility in selecting inputs and outputs, which in turn are considered as criteria for portfolio selection. Accordingly, there is much room for improvement of currently proposed strategies for selecting portfolios. In the second stage, two portfolio-weighting strategies are applied, using equal proportions and score-weighting. To show whether these strategies create outstanding out-of-sample portfolios over time, time-dependent DEA Window Analysis is applied using a reference time of one year, and portfolio returns are compared with the market portfolio for each period. It is found that financial data are a significant indicator of the future performance of a stock and that a DEA-based portfolio strategy outperforms the market return.
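
    In the special case of one input and one output, the constant-returns-to-scale (CCR) DEA efficiency score reduces to each stock's output/input ratio scaled by the best ratio in the sample; the general multi-input, multi-output model solves a linear program per unit. A minimal sketch with hypothetical figures:

    ```python
    def dea_ccr_single(inputs, outputs):
        """CCR-DEA efficiency scores for the one-input, one-output case.

        With a single input (e.g. total assets) and output (e.g. net income),
        CCR efficiency is each unit's output/input ratio divided by the best
        ratio observed. Illustrative simplification of the DEA models used
        in the study.
        """
        ratios = [o / i for i, o in zip(inputs, outputs)]
        best = max(ratios)
        return [r / best for r in ratios]

    # Three hypothetical stocks: input and output figures are made up.
    scores = dea_ccr_single([10.0, 20.0, 10.0], [5.0, 10.0, 2.5])
    ```

    A first-stage screen would then keep only stocks whose score is 1.0 (or above a chosen cutoff) before applying the equal-proportion or score-weighting schemes of the second stage.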

  20. Controller tuning of district heating networks using experiment design techniques

    International Nuclear Information System (INIS)

    Dobos, Laszlo; Abonyi, Janos

    2011-01-01

    There are various governmental policies aimed at reducing the dependence on fossil fuels for space heating and the associated emission of greenhouse gases. DHNs (district heating networks) can provide an efficient method for house and space heating by utilizing residual industrial waste heat. In such systems, heat is produced and/or thermally upgraded in a central plant and then distributed to the end users through a pipeline network. The control strategies of these networks are rather difficult due to the non-linearity of the system and the strong interconnection between the controlled variables. That is why a NMPC (non-linear model predictive controller) can be applied to fulfill the heat demand of the consumers. The main objective of this paper is to propose a tuning method for the applied NMPC so that it fulfills the control goal as soon as possible. The performance of the controller is characterized by an economic cost function based on pre-defined operation ranges. A methodology from the field of experiment design is applied to tune the model predictive controller to reach the best performance. The efficiency of the proposed methodology is demonstrated through a case study of a simulated NMPC-controlled DHN. -- Highlights: → To improve the energetic and economic efficiency of a DHN an appropriate control system is necessary. → The time consumption of transitions can be shortened with a proper control system. → A NMPC is proposed as the control system. → The NMPC is tuned by utilizing the simplex methodology, with an economically oriented cost function. → The proposed NMPC needs a detailed model of the DHN based on a physical description.

  1. Measuring alterations in oscillatory brain networks in schizophrenia with resting-state MEG: State-of-the-art and methodological challenges.

    Science.gov (United States)

    Alamian, Golnoush; Hincapié, Ana-Sofía; Pascarella, Annalisa; Thiery, Thomas; Combrisson, Etienne; Saive, Anne-Lise; Martel, Véronique; Althukov, Dmitrii; Haesebaert, Frédéric; Jerbi, Karim

    2017-09-01

    Neuroimaging studies provide evidence of disturbed resting-state brain networks in Schizophrenia (SZ). However, untangling the neuronal mechanisms that subserve these baseline alterations requires measurement of their electrophysiological underpinnings. This systematic review specifically investigates the contributions of resting-state Magnetoencephalography (MEG) in elucidating abnormal neural organization in SZ patients. A systematic literature review of resting-state MEG studies in SZ was conducted. This literature is discussed in relation to findings from resting-state fMRI and EEG, as well as to task-based MEG research in SZ population. Importantly, methodological limitations are considered and recommendations to overcome current limitations are proposed. Resting-state MEG literature in SZ points towards altered local and long-range oscillatory network dynamics in various frequency bands. Critical methodological challenges with respect to experiment design, and data collection and analysis need to be taken into consideration. Spontaneous MEG data show that local and global neural organization is altered in SZ patients. MEG is a highly promising tool to fill in knowledge gaps about the neurophysiology of SZ. However, to reach its fullest potential, basic methodological challenges need to be overcome. MEG-based resting-state power and connectivity findings could be great assets to clinical and translational research in psychiatry, and SZ in particular. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  2. Scalable Multi-core Architectures Design Methodologies and Tools

    CERN Document Server

    Jantsch, Axel

    2012-01-01

    As Moore’s law continues to unfold, two important trends have recently emerged. First, the growth of chip capacity is translated into a corresponding increase in the number of cores. Second, the parallelization of computation and 3D integration technologies lead to distributed memory architectures. This book provides a current snapshot of industrial and academic research, conducted as part of the European FP7 MOSART project, addressing urgent challenges in many-core architectures and application mapping. It addresses the architectural design of many-core chips, memory and data management, power management, and design and programming methodologies. It also describes how new techniques have been applied in various industrial case studies. Describes trends towards distributed memory architectures and distributed power management; Integrates Network on Chip with distributed, shared memory architectures; Demonstrates novel design methodologies and frameworks for multi-core design space exploration; Shows how midll...

  3. ECU@Risk, a methodology for risk management applied to MSMEs

    Directory of Open Access Journals (Sweden)

    Esteban Crespo Martínez

    2017-02-01

    Full Text Available Information is the most valuable element for any organization or person in this new century and, for many companies, a competitive advantage asset (Vásquez & Gabalán, 2015). However, whether through lack of knowledge about how to protect it properly or the complexity of the international standards that prescribe procedures for achieving an adequate level of protection, many organizations, especially in the MSME sector, fail to achieve this goal. Therefore, this study proposes a methodology for information security risk management that is applicable to the business and organizational environment of the Ecuadorian MSME sector. For this purpose, we analyze several methodologies, such as Magerit, CRAMM (CCTA Risk Analysis and Management Method), OCTAVE-S, the Microsoft Risk Guide, COBIT 5, and COSO III. These methodologies are used internationally for information risk management; we examine them in light of the industry frameworks ISO 27001, 27002, 27005 and 31000.
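
    Risk management methodologies of this family typically rate each asset's risk as likelihood × impact and map the score to a qualitative band. The 1-5 scales and band boundaries below are illustrative, in the spirit of ISO 27005 risk matrices, not the bands defined by ECU@Risk:

    ```python
    def risk_level(likelihood, impact):
        """Map likelihood x impact (each rated 1-5) to a qualitative band.

        Band boundaries are illustrative; a real methodology defines them
        per organization, together with treatment actions per band.
        """
        score = likelihood * impact
        if score <= 4:
            band = "low"
        elif score <= 9:
            band = "medium"
        else:
            band = "high"
        return score, band
    ```

    An MSME can then sort its asset register by band and spend its limited security budget on the "high" entries first.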

  4. A Methodology for the Optimization of Flow Rate Injection to Looped Water Distribution Networks through Multiple Pumping Stations

    Directory of Open Access Journals (Sweden)

    Christian León-Celi

    2016-12-01

    Full Text Available The optimal function of a water distribution network is reached when the consumer demands are satisfied using the lowest quantity of energy, maintaining the minimal pressure required at the same time. One way to achieve this is through optimization of flow rate injection based on the use of the setpoint curve concept. In order to obtain that, a methodology is proposed. It allows for the assessment of the flow rate and pressure head that each pumping station has to provide for the proper functioning of the network while the minimum power consumption is kept. The methodology can be addressed in two ways: the discrete method and the continuous method. In the first method, a finite set of combinations is evaluated between pumping stations. In the continuous method, the search for the optimal solution is performed using optimization algorithms. In this paper, Hooke–Jeeves and Nelder–Mead algorithms are used. Both the hydraulics and the objective function used by the optimization are solved through EPANET and its Toolkit. Two case studies are evaluated, and the results of the application of the different methods are discussed.
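
    Of the two optimization algorithms mentioned, Hooke-Jeeves is the simpler to sketch: exploratory probes along each coordinate are followed by a pattern move in the successful direction, and the step size shrinks whenever no probe improves. The sketch below minimizes a smooth test function rather than the EPANET-evaluated pumping power of the paper:

    ```python
    def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-6):
        """Hooke-Jeeves pattern search (derivative-free minimization)."""
        def explore(base, s):
            # probe each coordinate in both directions, keeping improvements
            x = list(base)
            for i in range(len(x)):
                for d in (s, -s):
                    trial = list(x)
                    trial[i] += d
                    if f(trial) < f(x):
                        x = trial
                        break
            return x

        x = list(x0)
        while step > tol:
            new = explore(x, step)
            if f(new) < f(x):
                # pattern move: extrapolate in the successful direction
                pattern = [2 * n - o for n, o in zip(new, x)]
                cand = explore(pattern, step)
                x = cand if f(cand) < f(new) else new
            else:
                step *= shrink  # no improvement: refine the mesh
        return x

    # Minimize a quadratic with its minimum at (3, -2).
    best = hooke_jeeves(lambda v: (v[0] - 3) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
    ```

    In the continuous method of the paper, `f` would instead run an EPANET simulation through the Toolkit and return the total pumping power for a candidate flow-rate split among the stations.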

  5. Bioethics networks and reproduction technologies: theoretical and methodological controversies - DOI: 10.3395/reciis.v1i2.87en

    Directory of Open Access Journals (Sweden)

    Rosa Maria Leite Pedro

    2007-12-01

    Full Text Available The object of this paper is to discuss some of the theoretical and methodological controversies surrounding the emerging field of bioethics, especially focusing on reproduction biotechnologies, attempting to give some examples of its implications as a network of controversies. Initially, it presents the new reproduction biotechnologies in terms of the effect which they are producing on our understanding about human nature and life, as well as the context of the emergence of bioethics, traditionally conceived of as a critical and analytical example of the relationship between technology and humanity. As an alternative way of explaining these relationships, it outlines the aspect of bioethics as a network effect, in which the technology-society hybrid is shown both in the building of bioethical norms and in the instabilities which challenge these norms. As a way of understanding this heterogeneous and complex network, Controversy Analysis is proposed as a methodological tool. In order to illustrate the richness of such perspective, a brief empirical study is presented, in which an attempt is made to track controversies articulated around the relations between bioethics and reproduction biotechnologies, with a specific focus on stem cell research, as published by the on-line media from January of 2004 until July of 2006, raising questions about subjects such as: life, humanity, artifice and autonomy.

  6. System-level design methodologies for telecommunication

    CERN Document Server

    Sklavos, Nicolas; Goehringer, Diana; Kitsos, Paris

    2013-01-01

    This book provides a comprehensive overview of modern network design, from specifications and modeling to implementation and test procedures, including the design and implementation of modern networks on chip, in both wireless and mobile applications. Topical coverage includes algorithms and methodologies, telecommunications, hardware (including networks on chip), security and privacy, wireless and mobile networks, and a variety of modern applications such as VoLTE and the Internet of Things.

  7. Methodology for designing and manufacturing complex biologically inspired soft robotic fluidic actuators: prosthetic hand case study.

    Science.gov (United States)

    Thompson-Bean, E; Das, R; McDaid, A

    2016-10-31

    We present a novel methodology for the design and manufacture of complex biologically inspired soft robotic fluidic actuators. The methodology is applied to the design and manufacture of a prosthetic for the hand. Real human hands are scanned to produce a 3D model of a finger, and pneumatic networks are implemented within it to produce a biomimetic bending motion. The finger is then partitioned into material sections, and a genetic algorithm based optimization, using finite element analysis, is employed to discover the optimal material for each section. This is based on two biomimetic performance criteria. Two sets of optimizations using two material sets are performed. Promising optimized material arrangements are fabricated using two techniques to validate the optimization routine, and the fabricated and simulated results are compared. We find that the optimization is successful in producing biomimetic soft robotic fingers and that fabrication of the fingers is possible. Limitations and paths for development are discussed. This methodology can be applied for other fluidic soft robotic devices.

  8. Optimization of extraction of linarin from Flos chrysanthemi indici by response surface methodology and artificial neural network.

    Science.gov (United States)

    Pan, Hongye; Zhang, Qing; Cui, Keke; Chen, Guoquan; Liu, Xuesong; Wang, Longhu

    2017-05-01

    The extraction of linarin from Flos chrysanthemi indici by ethanol was investigated. Two modeling techniques, response surface methodology and artificial neural network, were adopted to optimize process parameters such as ethanol concentration, extraction period, extraction frequency, and solvent-to-material ratio. We showed that both methods provided good predictions, but the artificial neural network provided a better and more accurate result. The optimum process parameters were an ethanol concentration of 74%, an extraction period of 2 h, three extractions, and a solvent-to-material ratio of 12 mL/g. The experimental yield of linarin was 90.5%, deviating less than 1.6% from the predicted value. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Integrated environmental research and networking of economy and information in rural areas of Finland

    Directory of Open Access Journals (Sweden)

    M. LUOSTARINEN

    2008-12-01

    Full Text Available This article uses material from many extensive research projects, starting from the construction of the electric power supply network and its water supply systems in northern Finland in 1973-1986, to the Agropolis agricultural strategy and networking for the Loimijoki project. A list of the material and references of the publications is available in Agronet on the Internet. All these projects applied integrated environmental research covering biology, the natural sciences, social sciences, and planning methodology. To be able to promote sustainable agriculture and rural development there is a pressing need to improve research methodology and applications for integrated environmental research. This article reviews the philosophy and development of the theory behind integrated environmental research and the theory of network economy.

  10. Modeling reliability of power systems substations by using stochastic automata networks

    International Nuclear Information System (INIS)

    Šnipas, Mindaugas; Radziukynas, Virginijus; Valakevičius, Eimutis

    2017-01-01

    In this paper, the stochastic automata network (SAN) formalism is applied to model the reliability of power system substations. The proposed strategy reduces the size of the state space of the Markov chain model and simplifies system specification. Two case studies of standard substation configurations are considered in detail, and SAN models with different assumptions were created. The SAN approach is compared with an exact reliability calculation using a minimal path set method. Modeling results showed that total independence of automata can be assumed for relatively small power system substations with reliable equipment; in this case, implementing the Markov chain model using the SAN method is a relatively easy task. - Highlights: • We present a methodology for applying the stochastic automata network formalism to create Markov chain models of power systems. • The stochastic automata network approach is combined with minimal path sets and structural functions. • Two models of substation configurations with different model assumptions are presented to illustrate the proposed methodology. • Modeling results of systems with independent automata and functional transition rates are similar. • The conditions under which total independence of automata can be assumed are addressed.
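The exact calculation the SAN models are compared against can be sketched directly. The snippet below computes system reliability from minimal path sets by inclusion-exclusion, assuming independent components; the two-feeder substation layout and the reliability figures are invented examples, not the paper's case studies.

```python
from itertools import combinations

def system_reliability(path_sets, rel):
    """P(at least one minimal path has all its components working),
    assuming independent components (inclusion-exclusion over paths)."""
    total = 0.0
    for k in range(1, len(path_sets) + 1):
        for combo in combinations(path_sets, k):
            union = set().union(*combo)   # components in this union of paths
            term = 1.0
            for comp in union:
                term *= rel[comp]
            total += (-1) ** (k + 1) * term
    return total

# Assumed example: two redundant feeder lines sharing one busbar.
rel = {"line1": 0.95, "line2": 0.95, "bus": 0.999}
paths = [{"line1", "bus"}, {"line2", "bus"}]
print(system_reliability(paths, rel))
```

For the two-path example this reduces to bus × (r1 + r2 − r1·r2), the familiar series-parallel formula.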

  11. Single-point reactive power control method on voltage rise mitigation in residential networks with high PV penetration

    DEFF Research Database (Denmark)

    Hasheminamin, Maryam; Agelidis, Vassilios; Ahmadi, Abdollah

    2018-01-01

    Voltage rise (VR) due to reverse power flow is an important obstacle for high integration of Photovoltaic (PV) into residential networks. This paper introduces and elaborates a novel methodology of an index-based single-point-reactive power-control (SPRPC) methodology to mitigate voltage rise by ...... system with high r/x ratio. Efficacy, effectiveness and cost study of SPRPC is compared to droop control to evaluate its advantages.......Voltage rise (VR) due to reverse power flow is an important obstacle for high integration of Photovoltaic (PV) into residential networks. This paper introduces and elaborates a novel methodology of an index-based single-point-reactive power-control (SPRPC) methodology to mitigate voltage rise...... by absorbing adequate reactive power from one selected point. The proposed index utilizes short circuit analysis to select the best point to apply this Volt/Var control method. SPRPC is supported technically and financially by distribution network operator that makes it cost effective, simple and efficient...

  12. Neural Network with Local Memory for Nuclear Reactor Power Level Control

    International Nuclear Information System (INIS)

    Uluyol, Oender; Ragheb, Magdi; Tsoukalas, Lefteri

    2001-01-01

    A methodology is introduced for a neural network with local memory called a multilayered local output gamma feedback (LOGF) neural network within the paradigm of locally-recurrent globally-feedforward neural networks. It appears to be well-suited for the identification, prediction, and control tasks in highly dynamic systems; it allows for the presentation of different timescales through incorporation of a gamma memory. A learning algorithm based on the backpropagation-through-time approach is derived. The spatial and temporal weights of the network are iteratively optimized for a given problem using the derived learning algorithm. As a demonstration of the methodology, it is applied to the task of power level control of a nuclear reactor at different fuel cycle conditions. The results demonstrate that the LOGF neural network controller outperforms the classical as well as the state feedback-assisted classical controllers for reactor power level control by showing a better tracking of the demand power, improving the fuel and exit temperature responses, and by performing robustly in different fuel cycle and power level conditions
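The gamma memory at the heart of the LOGF network is a short recurrence. Below is a minimal sketch with an assumed depth and an assumed memory parameter mu (the paper's values are not given in the abstract): each stage low-pass filters the previous one, so deeper taps hold smoother, older views of the signal.

```python
def gamma_memory(signal, depth=3, mu=0.5):
    """Gamma memory recurrence (assumed depth and mu):
    g_k[t] = (1 - mu) * g_k[t-1] + mu * g_{k-1}[t-1], with g_0 = input."""
    taps = [0.0] * (depth + 1)
    history = []
    for x in signal:
        prev = taps[:]
        taps[0] = x
        for k in range(1, depth + 1):
            taps[k] = (1 - mu) * prev[k] + mu * prev[k - 1]
        history.append(taps[:])
    return history

# A unit impulse spreads through the taps over successive time steps,
# which is how the network is exposed to multiple timescales at once.
out = gamma_memory([1.0, 0.0, 0.0, 0.0])
print(out)
```

With mu = 1 the structure degenerates to an ordinary tapped delay line; smaller mu trades temporal resolution for longer memory.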

  13. Evaluating multiple determinants of the structure of plant-animal mutualistic networks.

    Science.gov (United States)

    Vázquez, Diego P; Chacoff, Natacha P; Cagnolo, Luciano

    2009-08-01

    The structure of mutualistic networks is likely to result from the simultaneous influence of neutrality and the constraints imposed by complementarity in species phenotypes, phenologies, spatial distributions, phylogenetic relationships, and sampling artifacts. We develop a conceptual and methodological framework to evaluate the relative contributions of these potential determinants. Applying this approach to the analysis of a plant-pollinator network, we show that information on relative abundance and phenology suffices to predict several aggregate network properties (connectance, nestedness, interaction evenness, and interaction asymmetry). However, such information falls short of predicting the detailed network structure (the frequency of pairwise interactions), leaving a large amount of variation unexplained. Taken together, our results suggest that both relative species abundance and complementarity in spatiotemporal distribution contribute substantially to generate observed network patterns, but that this information is by no means sufficient to predict the occurrence and frequency of pairwise interactions. Future studies could use our methodological framework to evaluate the generality of our findings in a representative sample of study systems with contrasting ecological conditions.
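The neutrality-plus-constraints idea can be illustrated with a toy null model: expected pairwise interaction probabilities proportional to relative abundances, masked by phenological overlap. All species names and numbers below are invented.

```python
def interaction_probabilities(plant_abund, poll_abund, overlap):
    """P(i, j) proportional to the abundance product, masked by the
    fraction of the season the pair co-occurs; normalized to sum to 1."""
    probs = {}
    for i, a in plant_abund.items():
        for j, b in poll_abund.items():
            probs[(i, j)] = a * b * overlap.get((i, j), 0.0)
    total = sum(probs.values())
    return {pair: v / total for pair, v in probs.items()}

plants = {"P1": 0.7, "P2": 0.3}          # relative abundances (assumed)
polls = {"A1": 0.6, "A2": 0.4}
overlap = {                               # phenological overlap (assumed)
    ("P1", "A1"): 1.0, ("P1", "A2"): 0.5,
    ("P2", "A1"): 0.0, ("P2", "A2"): 1.0,
}
p = interaction_probabilities(plants, polls, overlap)
print(p)
```

Note how the phenological mask forbids the P2-A1 interaction entirely even though both species are abundant, which is exactly the kind of constraint the framework separates from neutrality.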

  14. Strategies and methodologies for applied marine radioactivity studies

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-04-01

    The main objective of this document is to provide basic training in the theoretical background and practical applications of the methodologies for the measurement, monitoring and assessment of radioactivity in the marine environment. This manual is a compilation of lectures and notes that have been presented at previous training courses. The document contains 16 individual papers, each of which was indexed separately.

  15. Strategies and methodologies for applied marine radioactivity studies

    International Nuclear Information System (INIS)

    1997-01-01

    The main objective of this document is to provide basic training in the theoretical background and practical applications of the methodologies for the measurement, monitoring and assessment of radioactivity in the marine environment. This manual is a compilation of lectures and notes that have been presented at previous training courses. The document contains 16 individual papers, each of which was indexed separately.

  16. The Photograph as Network

    DEFF Research Database (Denmark)

    Wiegand, Frauke Katharina

    2017-01-01

    Inspired by actor-network theory (ANT), this article develops a theoretical framework to grasp the dynamic visual work of memory. It introduces three sensitizing concepts of actor-network methodology, namely entanglement, relationality and traceability, and operationalizes them in a methodological...

  17. INFLUENCE OF APPLYING ADDITIONAL FORCING FANS FOR THE AIR DISTRIBUTION IN VENTILATION NETWORK

    Directory of Open Access Journals (Sweden)

    Nikodem SZLĄZAK

    2016-07-01

    Full Text Available Mining progress in underground mines causes the ongoing movement of working areas. Consequently, it becomes necessary to adapt the ventilation network of a mine to direct airflow into newly opened districts. For economic reasons, opening new fields is often achieved via underground workings. The length of primary intake and return routes increases, and so does the total resistance of a complex ventilation network. The development of a subsurface structure can make it necessary to change the air distribution in a ventilation network, and increasing airflow into newly opened districts becomes necessary. In mines where extraction does not entail gas-related hazards, there is the possibility of implementing a push-pull ventilation system to supplement airflows to newly developed mining fields. This is achieved by installing subsurface fan stations with forcing fans at the bottom of the downcast shaft. In push-pull systems with multiple main fans, it is vital to select forcing fans with characteristic curves matching those of the existing exhaust fans to prevent undesirable mutual interaction. In complex ventilation networks it is necessary to calculate the distribution of airflow (especially in networks with a large number of installed fans). In this article, the influence of applying additional forcing fans on the air distribution in the ventilation network of an underground mine is considered. The extent of overpressure caused by the additional forcing fan in branches of the ventilation network (the operating range of the additional forcing fan) is also analysed, and possibilities of increasing the airflow rate in working areas are investigated.

  18. METHODOLOGICAL PROBLEMS OF E-LEARNING DIDACTICS

    Directory of Open Access Journals (Sweden)

    Sergey F. Sergeev

    2015-01-01

    Full Text Available The article is devoted to a discussion of the methodological problems of e-learning and the didactic issues in using advanced networking and Internet technologies to create training systems and simulators, based on the methodological principles of non-classical and post-non-classical psychology and pedagogy.

  19. Optimal sensor placement for leakage detection and isolation in water distribution networks

    OpenAIRE

    Rosich Oliva, Albert; Sarrate Estruch, Ramon; Nejjari Akhi-Elarab, Fatiha

    2012-01-01

    In this paper, the problem of leakage detection and isolation in water distribution networks is addressed by applying an optimal sensor placement methodology. The chosen technique is based on structural models and is thus suitable for handling non-linear and large-scale systems. A drawback of this technique arises when costs are assigned uniformly. A main contribution of this paper is the proposal of an iterative methodology that focuses on identifying essential sensors, which ultimately leads to...

  20. An input feature selection method applied to fuzzy neural networks for signal estimation

    International Nuclear Information System (INIS)

    Na, Man Gyun; Sim, Young Rok

    2001-01-01

    It is well known that the performance of a fuzzy neural network strongly depends on the input features selected for its training. In applications to sensor signal estimation, there are a large number of input variables related to an output, and as the number of input variables increases, the required training time of a fuzzy neural network increases exponentially. Thus, it is essential to reduce the number of inputs to a fuzzy neural network and to select the optimum number of mutually independent inputs that clearly define the input-output mapping. In this work, principal component analysis (PCA), genetic algorithms (GA) and probability theory are combined to select new important input features. The proposed feature selection method is applied to the signal estimation of the steam generator water level, the hot-leg flowrate, the pressurizer water level and the pressurizer pressure sensors in pressurized water reactors, and compared with other input feature selection methods.
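As a greatly simplified stand-in for the paper's PCA/GA combination, the sketch below keeps only the core idea: select inputs that correlate with the output while being mutually near-independent. The synthetic data, the greedy strategy and the 0.9 redundancy threshold are all assumptions for illustration.

```python
def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

def select_inputs(features, target, k=2, max_mutual=0.9):
    """Greedy: rank inputs by |correlation with target|, skipping any
    input too correlated with one already selected (assumed threshold)."""
    ranked = sorted(features, key=lambda n: -abs(corr(features[n], target)))
    chosen = []
    for name in ranked:
        if all(abs(corr(features[name], features[c])) < max_mutual
               for c in chosen):
            chosen.append(name)
        if len(chosen) == k:
            break
    return chosen

target = [1.0, 2.0, 3.0, 4.0, 5.0]
features = {
    "level":  [1.1, 2.0, 2.9, 4.2, 5.0],    # tracks the target
    "level2": [1.0, 2.1, 3.0, 4.0, 5.1],    # redundant copy of "level"
    "noise":  [0.3, -0.2, 0.1, 0.0, -0.1],  # uninformative
}
sel = select_inputs(features, target)
print(sel)
```

The redundant copy is dropped even though it correlates strongly with the target, which is the "mutually independent inputs" requirement in miniature.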

  1. An Appraisal of Social Network Theory and Analysis as Applied to Public Health: Challenges and Opportunities.

    Science.gov (United States)

    Valente, Thomas W; Pitts, Stephanie R

    2017-03-20

    The use of social network theory and analysis methods as applied to public health has expanded greatly in the past decade, yielding a significant academic literature that spans almost every conceivable health issue. This review identifies several important theoretical challenges that confront the field but also provides opportunities for new research. These challenges include (a) measuring network influences, (b) identifying appropriate influence mechanisms, (c) the impact of social media and computerized communications, (d) the role of networks in evaluating public health interventions, and (e) ethics. Next steps for the field are outlined and the need for funding is emphasized. Recently developed network analysis techniques, technological innovations in communication, and changes in theoretical perspectives to include a focus on social and environmental behavioral influences have created opportunities for new theory and ever broader application of social networks to public health topics.

  2. An applied methodology for stakeholder identification in transdisciplinary research

    NARCIS (Netherlands)

    Leventon, Julia; Fleskens, Luuk; Claringbould, Heleen; Schwilch, Gudrun; Hessel, Rudi

    2016-01-01

    In this paper we present a novel methodology for identifying stakeholders for the purpose of engaging with them in transdisciplinary, sustainability research projects. In transdisciplinary research, it is important to identify a range of stakeholders prior to the problem-focussed stages of

  3. Applying living lab methodology to enhance skills in innovation

    CSIR Research Space (South Africa)

    Herselman, M

    2010-07-01

    Full Text Available and which is also inline with the South African medium term strategic framework and the millennium goals of the Department of Science and Technology. Evidence of how the living lab methodology can enhance innovation skills was made clear during various...

  4. Investigation of rotated PCA from the perspective of network communities applied to climate data

    Czech Academy of Sciences Publication Activity Database

    Hartman, David; Hlinka, Jaroslav; Vejmelka, Martin; Paluš, Milan

    2013-01-01

    Roč. 15, - (2013), s. 13124 ISSN 1607-7962. [European Geosciences Union General Assembly 2013. 07.04.2013-12.04.2013, Vienna] R&D Projects: GA ČR GCP103/11/J068 Institutional support: RVO:67985807 Keywords : complex networks * graph theory * climate dynamics Subject RIV: BB - Applied Statistics, Operational Research

  5. The relation between global migration and trade networks

    Science.gov (United States)

    Sgrignoli, Paolo; Metulini, Rodolfo; Schiavo, Stefano; Riccaboni, Massimo

    2015-01-01

    In this paper we develop a methodology to analyze and compare multiple global networks, focusing our analysis on the relation between human migration and trade. First, we identify the subset of products for which the presence of a community of migrants significantly increases trade intensity, where to assure comparability across networks we apply a hypergeometric filter that lets us identify those links which intensity is significantly higher than expected. Next, proposing a new way to define country neighbors based on the most intense links in the trade network, we use spatial econometrics techniques to measure the effect of migration on international trade, while controlling for network interdependences. Overall, we find that migration significantly boosts trade across countries and we are able to identify product categories for which this effect is particularly strong.
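The hypergeometric filter the authors apply can be sketched as a one-sided tail test on each link: keep a link only when its observed intensity is improbably high under random assignment. The counts below are invented, not the paper's trade data.

```python
from math import comb

def hypergeom_pvalue(k, K, n, N):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): N total units,
    K carrying the row label, n carrying the column label, k on the link."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Assumed example: 1000 trade units in total, 100 originating from
# country A, 80 in product P, and 30 observed on the A -> P link
# (expected under randomness: 100 * 80 / 1000 = 8).
p = hypergeom_pvalue(k=30, K=100, n=80, N=1000)
significant = p < 0.01   # keep the link only if improbably intense
print(p, significant)
```

Because Python integers are arbitrary precision, the binomial coefficients are exact and no special-function library is needed for this sketch.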

  6. Determining the number of kanbans for dynamic production systems: An integrated methodology

    Directory of Open Access Journals (Sweden)

    Özlem Uzun Araz

    2016-08-01

    Full Text Available Just-in-time (JIT) is a management philosophy that reduces inventory levels and eliminates manufacturing waste by producing only the right quantity at the right time. A kanban system is one of the key elements of the JIT philosophy: kanbans are used to authorize production and to control the movement of materials in JIT systems. In kanban systems, the efficiency of the manufacturing system depends on several factors, such as the number of kanbans and the container size; hence, determining the number of kanbans is a critical decision. The aim of this study is to develop a methodology for determining the number of kanbans in a dynamic production environment, in which changes in the system state are monitored in real time and the number of kanbans is dynamically re-arranged. The proposed methodology integrates simulation, neural networks and a Mamdani-type fuzzy inference system. The methodology is modelled in a simulation environment and applied to a hypothetical production system. We also performed several comparisons of different control policies to show the effectiveness of the proposed methodology.
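The fuzzy stage of such a methodology can be illustrated with a toy two-rule controller that maps observed demand load to a kanban adjustment. The membership functions, the rules and the weighted-average defuzzification (a common simplification of full Mamdani inference) are assumptions, not the paper's actual system.

```python
def kanban_adjustment(load):
    """Toy fuzzy controller (assumed rules):
      IF load is LOW  THEN remove kanbans (consequent -2)
      IF load is HIGH THEN add kanbans    (consequent +2)
    load is a normalized demand level in [0, 1]; the result is a
    possibly fractional kanban delta to be rounded by the caller."""
    mu_low = max(0.0, min(1.0, 1.0 - load))   # LOW membership (shoulder)
    mu_high = max(0.0, min(1.0, load))        # HIGH membership (shoulder)
    # Weighted average of the rule consequents (singleton outputs).
    return (mu_low * -2.0 + mu_high * 2.0) / (mu_low + mu_high)

# Balanced load suggests no change; heavy load suggests adding kanbans.
print(kanban_adjustment(0.5), kanban_adjustment(0.9))
```

A real Mamdani system would clip or scale full output membership functions and defuzzify by centroid; the singleton shortcut keeps the sketch short while preserving the rule-blending behavior.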

  7. Abnormal quality detection and isolation in water distribution networks using simulation models

    Directory of Open Access Journals (Sweden)

    F. Nejjari

    2012-11-01

    Full Text Available This paper proposes a model-based detection and localisation method for dealing with abnormal quality levels, based on chlorine measurements and chlorine sensitivity analysis in a water distribution network. A fault isolation algorithm is used which correlates, on line, the residuals (generated by comparing the available chlorine measurements with their model-based estimations) with the fault sensitivity matrix. The proposed methodology has been applied to a District Metered Area (DMA) in the Barcelona network.

  8. Auditing SNOMED Relationships Using a Converse Abstraction Network

    Science.gov (United States)

    Wei, Duo; Halper, Michael; Elhanan, Gai; Chen, Yan; Perl, Yehoshua; Geller, James; Spackman, Kent A.

    2009-01-01

    In SNOMED CT, a given kind of attribute relationship is defined between two hierarchies, a source and a target. Certain hierarchies (or subhierarchies) serve only as targets, with no outgoing relationships of their own. However, converse relationships—those pointing in a direction opposite to the defined relationships—while not explicitly represented in SNOMED’s inferred view, can be utilized in forming an alternative view of a source. In particular, they can help shed light on a source hierarchy’s overall relationship structure. Toward this end, an abstraction network, called the converse abstraction network (CAN), derived automatically from a given SNOMED hierarchy is presented. An auditing methodology based on the CAN is formulated. The methodology is applied to SNOMED’s Device subhierarchy and the related device relationships of the Procedure hierarchy. The results indicate that the CAN is useful in finding opportunities for refining and improving SNOMED. PMID:20351941

  9. Application of artificial neural network with extreme learning machine for economic growth estimation

    Science.gov (United States)

    Milačić, Ljubiša; Jović, Srđan; Vujović, Tanja; Miljković, Jovica

    2017-01-01

    The purpose of this research is to develop and apply an artificial neural network (ANN) with an extreme learning machine (ELM) to forecast the gross domestic product (GDP) growth rate. The economic growth forecasting was analyzed based on the value added to GDP by agriculture, manufacturing, industry and services. The results were compared with an ANN using the back propagation (BP) learning approach, since BP can be considered the conventional learning methodology. The reliability of the computational models was assessed based on simulation results and several statistical indicators. The results showed that an ANN with the ELM learning methodology can be applied effectively to GDP forecasting.

  10. The added value of thorough economic evaluation of telemedicine networks.

    Science.gov (United States)

    Le Goff-Pronost, Myriam; Sicotte, Claude

    2010-02-01

    This paper proposes a thorough framework for the economic evaluation of telemedicine networks. A standard cost analysis methodology was used as the initial base, similar to the evaluation method currently being applied to telemedicine, and to which we suggest adding subsequent stages that enhance the scope and sophistication of the analytical methodology. We completed the methodology with a longitudinal and stakeholder analysis, followed by the calculation of a break-even threshold, a calculation of the economic outcome based on net present value (NPV), an estimate of the social gain through external effects, and an assessment of the probability of social benefits. In order to illustrate the advantages, constraints and limitations of the proposed framework, we tested it in a paediatric cardiology tele-expertise network. The results demonstrate that the project threshold was not reached after the 4 years of the study. Also, the calculation of the project's NPV remained negative. However, the additional analytical steps of the proposed framework allowed us to highlight alternatives that can make this service economically viable. These included: use over an extended period of time, extending the network to other telemedicine specialties, or including it in the services offered by other community hospitals. In sum, the results presented here demonstrate the usefulness of an economic evaluation framework as a way of offering decision makers the tools they need to make comprehensive evaluations of telemedicine networks.
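The framework's quantitative core, the break-even threshold and the net present value, is standard and easy to sketch. The telemedicine cash flows and the per-act saving below are invented; the negative NPV merely echoes the qualitative finding reported in the abstract.

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def break_even_acts(fixed_cost_per_year, saving_per_act):
    """Tele-expertise acts per year at which savings cover fixed costs."""
    return fixed_cost_per_year / saving_per_act

# Assumed numbers: 80k setup cost, then 20k net yearly benefit over the
# 4-year study horizon, discounted at 5%.
project = npv(0.05, [-80_000, 20_000, 20_000, 20_000, 20_000])
acts = break_even_acts(fixed_cost_per_year=30_000, saving_per_act=150.0)
print(round(project, 2), acts)
```

Extending the horizon or adding specialties, as the authors suggest, simply appends further positive cash flows until the NPV turns positive.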

  11. “Thanks for sharing”—Identifying users’ roles based on knowledge contribution in Enterprise Social Networks

    DEFF Research Database (Denmark)

    Cetto, Alexandra; Klier, Mathias; Richter, Alexander

    2018-01-01

    While ever more companies use Enterprise Social Networks for knowledge management, there is still a lack of understanding of users’ knowledge exchanging behavior. In this context, it is important to be able to identify and characterize the users who contribute and communicate their knowledge in the network and help others to get their work done. In this paper, we propose a new methodological approach consisting of three steps, namely “message classification”, “identification of users’ roles” and “characterization of users’ roles”. We apply the approach to a dataset from a multinational company and find that knowledge-contributing users are a central element of the network. In conclusion, the development and application of this new methodological approach contributes to a more refined understanding of users’ knowledge exchanging behavior in Enterprise Social Networks, which can ultimately help companies take measures to improve knowledge exchange.

  12. An Intelligent Network Proposed for Assessing Seismic Vulnerability Index of Sewerage Networks within a GIS Framework (A Case Study of Shahr-e-Kord

    Directory of Open Access Journals (Sweden)

    Mohamadali Rahgozar

    2016-01-01

    Full Text Available Due to their vast spread, sewerage networks are exposed to considerable damage during severe earthquakes, which may lead to catastrophic environmental contamination. Multiple repairs to the pipelines, including pipe and joint fractures, can be costly and time-consuming. In seismic risk management, it is of utmost importance to have an intelligent tool for assessing the seismic vulnerability index of such important utilities as sewerage networks at any given point in time. This study uses a weight-factor methodology and proposes an online GIS-based intelligent algorithm to evaluate the seismic vulnerability index (VI) for metropolitan sewerage networks. The proposed tool is capable of updating the VI as the sewerage network conditions change with time and at different locations. The city of Shahr-e-Kord, located on a high-risk seismic belt, is selected as a case study to which the proposed methodology is applied for zoning the vulnerability index in GIS. Results show that the overall seismic vulnerability index for the study area ranges from low to medium but increases in the southern parts of the city, especially in the old town where brittle pipes have been laid.
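A weight-factor vulnerability index of the kind the study mentions can be sketched as a normalized weighted sum of per-pipe factor scores. The factors, weights and pipe scores below are assumptions for illustration; the paper's actual scheme is not reproduced.

```python
# Assumed factor weights; they sum to 1 so the index stays in [0, 1].
WEIGHTS = {"material": 0.4, "age": 0.25, "diameter": 0.15, "soil": 0.2}

def vulnerability_index(scores):
    """scores: factor -> normalized value in [0, 1]; returns VI in [0, 1],
    higher meaning more vulnerable."""
    return sum(WEIGHTS[f] * scores[f] for f in WEIGHTS)

# Two hypothetical pipe segments in the same soil and of the same size.
old_brittle = {"material": 1.0, "age": 0.9, "diameter": 0.4, "soil": 0.6}
new_ductile = {"material": 0.2, "age": 0.1, "diameter": 0.4, "soil": 0.6}
print(vulnerability_index(old_brittle), vulnerability_index(new_ductile))
```

Because each factor is already normalized, re-scoring a single pipe as conditions change (the "online" aspect) only touches its own dictionary entry.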

  13. Uncovering the Transnational Networks, Organisational Techniques and State-Corporate Ties Behind Grand Corruption: Building an Investigative Methodology

    Directory of Open Access Journals (Sweden)

    Kristian Lasslett

    2017-11-01

    Full Text Available While grand corruption is a major global governance challenge, researchers notably lack a systematic methodology for conducting qualitative research into its complex forms. To address this lacuna, the following article sets out and applies the corruption investigative framework (CIF, a methodology designed to generate a systematic, transferable approach for grand corruption research. Its utility will be demonstrated employing a case study that centres on an Australian-led megaproject being built in Papua New Guinea’s capital city, Port Moresby. Unlike conventional analyses of corruption in Papua New Guinea, which emphasise its local characteristics and patrimonial qualities, application of CIF uncovered new empirical layers that centre on transnational state-corporate power, the ambiguity of civil society, and the structural inequalities that marginalise resistance movements. The important theoretical consequences of the findings and underpinning methodology are explored.

  14. Boolean modeling in systems biology: an overview of methodology and applications

    International Nuclear Information System (INIS)

    Wang, Rui-Sheng; Albert, Réka; Saadatpour, Assieh

    2012-01-01

    Mathematical modeling of biological processes provides deep insights into complex cellular systems. While quantitative and continuous models such as differential equations have been widely used, their use is obstructed in systems wherein the knowledge of mechanistic details and kinetic parameters is scarce. On the other hand, a wealth of molecular level qualitative data on individual components and interactions can be obtained from the experimental literature and high-throughput technologies, making qualitative approaches such as Boolean network modeling extremely useful. In this paper, we build on our research to provide a methodology overview of Boolean modeling in systems biology, including Boolean dynamic modeling of cellular networks, attractor analysis of Boolean dynamic models, as well as inferring biological regulatory mechanisms from high-throughput data using Boolean models. We finally demonstrate how Boolean models can be applied to perform the structural analysis of cellular networks. This overview aims to acquaint life science researchers with the basic steps of Boolean modeling and its applications in several areas of systems biology. (paper)
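The first two steps the overview names, Boolean dynamic modeling and attractor analysis, can be sketched on a toy three-node network under synchronous updating. The regulatory rules below are invented for illustration.

```python
from itertools import product

# Invented regulatory logic for a three-node Boolean network.
RULES = {
    "A": lambda s: not s["C"],         # C represses A
    "B": lambda s: s["A"],             # A activates B
    "C": lambda s: s["A"] and s["B"],  # A and B jointly activate C
}

def step(state):
    """Synchronous update: every node fires its rule on the old state."""
    return {node: rule(state) for node, rule in RULES.items()}

def attractor(state):
    """Iterate until a state repeats, then return the cycle, rotated so
    the lexicographically smallest state comes first (a canonical form)."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    cycle = [tuple(sorted(s.items())) for s in seen[seen.index(state):]]
    i = cycle.index(min(cycle))
    return tuple(cycle[i:] + cycle[:i])

# Exhaustive attractor search over all 2^3 initial states.
attractors = {attractor({"A": a, "B": b, "C": c})
              for a, b, c in product([False, True], repeat=3)}
print(len(attractors), len(next(iter(attractors))))
```

Exhaustive enumeration is only feasible for small networks (2^n states); for larger models, sampled trajectories or symbolic methods take its place.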

  15. Building research infrastructure in community health centers: a Community Health Applied Research Network (CHARN) report.

    Science.gov (United States)

    Likumahuwa, Sonja; Song, Hui; Singal, Robbie; Weir, Rosy Chang; Crane, Heidi; Muench, John; Sim, Shao-Chee; DeVoe, Jennifer E

    2013-01-01

    This article introduces the Community Health Applied Research Network (CHARN), a practice-based research network of community health centers (CHCs). Established by the Health Resources and Services Administration in 2010, CHARN is a network of 4 community research nodes, each with multiple affiliated CHCs and an academic center. The four nodes (18 individual CHCs and 4 academic partners in 9 states) are supported by a data coordinating center. Here we provide case studies detailing how CHARN is building research infrastructure and capacity in CHCs, with a particular focus on how community practice-academic partnerships were facilitated by the CHARN structure. The examples provided by the CHARN nodes include many of the building blocks of research capacity: communication capacity and "matchmaking" between providers and researchers; technology transfer; research methods tailored to community practice settings; and community institutional review board infrastructure to enable community oversight. We draw lessons learned from these case studies that we hope will serve as examples for other networks, with special relevance for community-based networks seeking to build research infrastructure in primary care settings.

  16. Applied Electromagnetics

    Energy Technology Data Exchange (ETDEWEB)

    Yamashita, H; Marinova, I; Cingoski, V [eds.]

    2002-07-01

    These proceedings contain papers relating to the 3rd Japanese-Bulgarian-Macedonian Joint Seminar on Applied Electromagnetics. Included are the following groups: Numerical Methods I; Electrical and Mechanical System Analysis and Simulations; Inverse Problems and Optimizations; Software Methodology; Numerical Methods II; Applied Electromagnetics.

  17. Applied Electromagnetics

    International Nuclear Information System (INIS)

    Yamashita, H.; Marinova, I.; Cingoski, V.

    2002-01-01

    These proceedings contain papers relating to the 3rd Japanese-Bulgarian-Macedonian Joint Seminar on Applied Electromagnetics. Included are the following groups: Numerical Methods I; Electrical and Mechanical System Analysis and Simulations; Inverse Problems and Optimizations; Software Methodology; Numerical Methods II; Applied Electromagnetics

  18. Theory of information warfare: basic framework, methodology and conceptual apparatus

    Directory of Open Access Journals (Sweden)

    Олександр Васильович Курбан

    2015-11-01

    Full Text Available A comprehensive theoretical study is conducted to determine the basic provisions of the modern theory of information warfare in on-line social networks. Three basic blocks, which systematize the theoretical and methodological basis of the topic, are established: information and psychological warfare, off-line social networks, and on-line social networks. In accordance with these three blocks, theoretical concepts are defined and a methodological substantiation of the information processes within information warfare in on-line social networks is formed.

  19. Modern methodology and applications in spatial-temporal modeling

    CERN Document Server

    Matsui, Tomoko

    2015-01-01

    This book provides a modern introductory tutorial on specialized methodological and applied aspects of spatial and temporal modeling. The areas covered involve a range of topics which reflect the diversity of this domain of research across a number of quantitative disciplines. For instance, the first chapter deals with non-parametric Bayesian inference via a recently developed framework known as kernel mean embedding which has had a significant influence in machine learning disciplines. The second chapter takes up non-parametric statistical methods for spatial field reconstruction and exceedance probability estimation based on Gaussian process-based models in the context of wireless sensor network data. The third chapter presents signal-processing methods applied to acoustic mood analysis based on music signal analysis. The fourth chapter covers models that are applicable to time series modeling in the domain of speech and language processing. This includes aspects of factor analysis, independent component an...

  20. Convergent dynamics for multistable delayed neural networks

    International Nuclear Information System (INIS)

    Shih, Chih-Wen; Tseng, Jui-Pin

    2008-01-01

    This investigation aims at developing a methodology to establish convergence of dynamics for delayed neural network systems with multiple stable equilibria. The present approach is general and can be applied to several network models. We take the Hopfield-type neural networks with both instantaneous and delayed feedbacks to illustrate the idea. We shall construct the complete dynamical scenario which comprises exactly 2^n stable equilibria and exactly (3^n − 2^n) unstable equilibria for the n-neuron network. In addition, it is shown that every solution of the system converges to one of the equilibria as time tends to infinity. The approach is based on employing the geometrical structure of the network system. Positively invariant sets and componentwise dynamical properties are derived under the geometrical configuration. An iteration scheme is subsequently designed to confirm the convergence of dynamics for the system. Two examples with numerical simulations are arranged to illustrate the present theory.
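    The multistability described above can already be seen in a single self-connected neuron. The sketch below (parameters are illustrative, not the paper's) integrates x'(t) = -x(t) + a*tanh(x(t)) + b*tanh(x(t - tau)) by forward Euler with a delay buffer; with a + b > 1 each neuron is bistable, so n uncoupled copies give the 2^n stable equilibria counted in the abstract:

```python
import math

# Forward-Euler integration of one delayed Hopfield-type neuron
# (illustrative parameters, not the paper's):
#   x'(t) = -x(t) + a*tanh(x(t)) + b*tanh(x(t - tau))
def simulate(x0, a=1.0, b=1.0, tau=1.0, dt=0.01, T=50.0):
    delay = int(tau / dt)
    hist = [x0] * (delay + 1)        # constant initial history on [-tau, 0]
    for _ in range(int(T / dt)):
        x = hist[-1]
        x_lag = hist[-1 - delay]     # x(t - tau) from the buffer
        hist.append(x + dt * (-x + a * math.tanh(x) + b * math.tanh(x_lag)))
    return hist[-1]

# the sign of the initial data selects which stable equilibrium is reached
print(round(simulate(0.5), 4), round(simulate(-0.5), 4))
```

Positive initial histories converge to the positive root of x = 2 tanh(x) and negative ones to its mirror image, illustrating how the basin containing the initial data determines the limit.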

  1. Inferring and Calibrating Triadic Closure in a Dynamic Network

    Science.gov (United States)

    Mantzaris, Alexander V.; Higham, Desmond J.

    In the social sciences, the hypothesis of triadic closure contends that new links in a social contact network arise preferentially between those who currently share neighbours. Here, in a proof-of-principle study, we show how to calibrate a recently proposed evolving network model to time-dependent connectivity data. The probabilistic edge birth rate in the model contains a triadic closure term, so we are also able to assess statistically the evidence for this effect. The approach is shown to work on data generated synthetically from the model. We then apply this methodology to some real, large-scale data that records the build up of connections in a business-related social networking site, and find evidence for triadic closure.
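    The triadic closure term can be made concrete in a few lines. In the sketch below (a simplified stand-in; the calibrated model's exact functional form for the edge birth rate may differ), the probability that a currently absent link (i, j) is born grows with the number of neighbours i and j share:

```python
# Simplified triadic-closure edge-birth term (a stand-in, not the paper's
# exact calibrated model):
def common_neighbours(adj, i, j):
    return len(adj[i] & adj[j])

def birth_prob(adj, i, j, alpha=0.01, beta=0.05):
    # probability that the absent edge (i, j) appears in the next snapshot
    return min(1.0, alpha + beta * common_neighbours(adj, i, j))

adj = {0: {1, 2, 4}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}, 4: {0}}
# nodes 0 and 3 share neighbour 2, so their link is favoured over the
# pair (4, 3), which shares none
print(birth_prob(adj, 0, 3), birth_prob(adj, 4, 3))
```

Calibration then amounts to choosing alpha and beta so that simulated edge births best match the observed time-dependent connectivity, which is also how the statistical evidence for the triadic closure effect (beta > 0) is assessed.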

  2. How to assess solid waste management in armed conflicts? A new methodology applied to the Gaza Strip, Palestine.

    Science.gov (United States)

    Caniato, Marco; Vaccari, Mentore

    2014-09-01

    We have developed a new methodology for assessing solid waste management in a situation of armed conflict. This methodology is composed of six phases with specific activities, and suggested methods and tools. The collection, haulage and disposal of waste in low- and middle-income countries is such a complicated and expensive task for municipalities, owing to the several challenges involved, that some waste is left in illegal dumps. Armed conflicts bring further constraints, such as instability, sudden increases in violence, and difficulty in supplying equipment and spare parts: planning is very difficult, and several projects aimed at improving the situation have failed. The methodology was validated in the Gaza Strip, where the geopolitical situation heavily affects natural resources. We collected information in a holistic way, crosschecked it, and discussed it with local experts, practitioners, and authorities. We estimated that in 2011 only 1300 tonnes day(-1) were transported to the three disposal sites, out of a production exceeding 1700. Recycling was very limited, while the composting capacity was 3.5 tonnes day(-1), but increasing. We carefully assessed the system elements and their interactions. We identified the challenges, and developed possible solutions to increase system effectiveness and robustness. The case study demonstrated that our methodology is flexible and adaptable to the context, and thus it could be applied in other areas to improve the humanitarian response in similar situations. © The Author(s) 2014.

  3. Developing Visualization Techniques for Semantics-based Information Networks

    Science.gov (United States)

    Keller, Richard M.; Hall, David R.

    2003-01-01

    Information systems incorporating complex network structured information spaces with a semantic underpinning - such as hypermedia networks, semantic networks, topic maps, and concept maps - are being deployed to solve some of NASA's critical information management problems. This paper describes some of the human interaction and navigation problems associated with complex semantic information spaces and describes a set of new visual interface approaches to address these problems. A key strategy is to leverage semantic knowledge represented within these information spaces to construct abstractions and views that will be meaningful to the human user. Human-computer interaction methodologies will guide the development and evaluation of these approaches, which will benefit deployed NASA systems and also apply to information systems based on the emerging Semantic Web.

  4. Virtual target tracking (VTT) as applied to mobile satellite communication networks

    Science.gov (United States)

    Amoozegar, Farid

    1999-08-01

    Traditionally, target tracking has been used for aerospace applications, such as tracking highly maneuvering targets in a cluttered environment for missile-to-target intercept scenarios. Although the speed and maneuvering capability of current aerospace targets demand more efficient algorithms, many complex techniques have already been proposed in the literature, which primarily cover the defense applications of tracking methods. On the other hand, the rapid growth of Global Communication Systems, Global Information Systems (GIS), and Global Positioning Systems (GPS) is creating new and more diverse challenges for multi-target tracking applications. Mobile communication and computing stand to benefit from a huge market for Cellular Communication and Tracking Devices (CCTD), which will track networked devices at the cellular level. The objective of this paper is to introduce a new concept, Virtual Target Tracking (VTT), for commercial applications of multi-target tracking algorithms and techniques as applied to mobile satellite communication networks. We discuss how Virtual Target Tracking can bring more diversity to target tracking research.

  5. Applied data communications and networks

    CERN Document Server

    Buchanan, W

    1996-01-01

    The usage of data communications and computer networks is ever increasing. It is one of the few technological areas which brings benefits to most of the countries and peoples of the world. Without it many industries could not exist. It is the objective of this book to discuss data communications in a readable form that students and professionals all over the world can understand. As much as possible the text uses diagrams to illustrate key points. Most currently available data communications books take their viewpoint from either a computer scientist's top-down approach or an electronic engineer's bottom-up approach. This book takes a practical approach and supports it with a theoretical background to create a textbook which can be used by electronic engineers, computer engineers, computer scientists and industry professionals. It discusses most of the current and future key data communications technologies, including: • Data Communications Standards and Models; • Local Area Networks (...

  6. Application of Bayesian network methodology to the probabilistic risk assessment of nuclear waste disposal facility

    International Nuclear Information System (INIS)

    Lee, Chang Ju

    2006-02-01

    The scenario in a risk analysis can be defined as the propagating feature of a specific initiating event, which can lead to a wide range of undesirable consequences. If one takes various scenarios into consideration, the risk analysis becomes more complex than it is without them. Many risk analyses have been performed to estimate a risk profile under both uncertain future states of hazard sources and undesirable scenarios. Unfortunately, in the case of stochastic passive systems such as a radioactive waste disposal facility, the behaviour of future scenarios can hardly be predicted without a special reasoning process, so their risk cannot be estimated with a traditional risk analysis methodology alone. Moreover, it is believed that the sources of uncertainty at future states can be reduced pertinently by setting up dependency relationships interrelating the geological, hydrological, and ecological aspects of the site with all the scenarios. The current methodology of uncertainty analysis for waste disposal facilities should therefore be revisited. In order to consider the effects predicted from an evolution of the environmental conditions of waste disposal facilities, this study proposes a quantitative assessment framework integrating the inference process of Bayesian networks into traditional probabilistic risk analysis. In this study, an approximate probabilistic inference program for the specific Bayesian network was developed and verified using a bounded-variance likelihood weighting algorithm. Ultimately, specific models, including a Monte Carlo model for uncertainty propagation of relevant parameters, were developed, with a comparison of variable-specific effects due to the occurrence of diverse altered evolution scenarios (AESs). After providing supporting information to get a variety of quantitative expectations about the dependency relationship between domain variables and AESs, this study could connect the results of probabilistic
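    The bounded-variance likelihood weighting algorithm mentioned above belongs to the family of importance-sampling schemes for Bayesian network inference. As a hedged illustration of the plain likelihood-weighting idea (the variables and probabilities below are invented, not taken from the study), non-evidence variables are sampled from their priors and each sample is weighted by the likelihood of the observed evidence:

```python
import random

# Likelihood weighting on a toy two-parent Bayesian network (all numbers
# are invented for illustration):
#   Rain ~ P(True) = 0.3, Degraded ~ P(True) = 0.1,
#   P(Release = True | Rain, Degraded) given by a small CPT.
P_RAIN, P_DEG = 0.3, 0.1
P_REL = {(True, True): 0.9, (True, False): 0.2,
         (False, True): 0.5, (False, False): 0.01}

def lw_query(n=100_000, seed=0):
    """Estimate P(Degraded = True | Release = True) by likelihood weighting."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        rain = rng.random() < P_RAIN   # sample non-evidence variables
        deg = rng.random() < P_DEG
        w = P_REL[(rain, deg)]         # weight by likelihood of the evidence
        num += w * deg
        den += w
    return num / den

print(round(lw_query(), 3))
```

For these toy numbers the exact posterior is about 0.507, which the weighted estimate approaches as the sample count grows; bounded-variance variants additionally control the spread of the weights.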

  7. A Proven Methodology for Developing Secure Software and Applying It to Ground Systems

    Science.gov (United States)

    Bailey, Brandon

    2016-01-01

    Part Two expands upon Part One in an attempt to translate the methodology for ground system personnel. The goal is to build upon the methodology presented in Part One by showing examples and details on how to implement the methodology. Section 1: Ground Systems Overview; Section 2: Secure Software Development; Section 3: Defense in Depth for Ground Systems; Section 4: What Now?

  8. Methodology for evaluation of alternative technologies applied to nuclear fuel reprocessing

    International Nuclear Information System (INIS)

    Selvaduray, G.S.; Goldstein, M.K.; Anderson, R.N.

    1977-07-01

    An analytic methodology has been developed to compare the performance of various nuclear fuel reprocessing techniques for advanced fuel cycle applications, including low proliferation risk systems. The need to identify and compare those processes which have the versatility to handle the variety of fuel types expected to be in use in the next century is becoming increasingly imperative. This methodology allows processes in any stage of development to be compared, and the effect of changing external conditions on a process to be assessed.

  9. Applying network theory to animal movements to identify properties of landscape space use.

    Science.gov (United States)

    Bastille-Rousseau, Guillaume; Douglas-Hamilton, Iain; Blake, Stephen; Northrup, Joseph M; Wittemyer, George

    2018-04-01

    Network (graph) theory is a popular analytical framework to characterize the structure and dynamics among discrete objects and is particularly effective at identifying critical hubs and patterns of connectivity. The identification of such attributes is a fundamental objective of animal movement research, yet network theory has rarely been applied directly to animal relocation data. We develop an approach that allows the analysis of movement data using network theory by defining occupied pixels as nodes and connections among these pixels as edges. We first quantify node-level (local) metrics and graph-level (system) metrics on simulated movement trajectories to assess the ability of these metrics to pull out known properties in movement paths. We then apply our framework to empirical data from African elephants (Loxodonta africana), giant Galapagos tortoises (Chelonoidis spp.), and mule deer (Odocoileus hemionus). Our results indicate that certain node-level metrics, namely degree, weight, and betweenness, perform well in capturing local patterns of space use, such as the definition of core areas and paths used for inter-patch movement. These metrics were generally applicable across data sets, indicating their robustness to assumptions structuring analysis or strategies of movement. Other metrics capture local patterns effectively, but were sensitive to specified graph properties, indicating case specific applications. Our analysis indicates that graph-level metrics are unlikely to outperform other approaches for the categorization of general movement strategies (central place foraging, migration, nomadism). By identifying critical nodes, our approach provides a robust quantitative framework to identify local properties of space use that can be used to evaluate the effect of the loss of specific nodes on range wide connectivity. Our network approach is intuitive, and can be implemented across imperfectly sampled or large-scale data sets efficiently, providing a
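    The pixels-as-nodes construction described above is easy to sketch with the standard library alone (the grid size and the toy track below are invented for illustration): occupied pixels become nodes weighted by the number of fixes they contain, and successive pixels are joined by edges from which node degree follows:

```python
from collections import Counter

# Turn a relocation track into a movement graph: bin (x, y) fixes into grid
# pixels (nodes) and link successive distinct pixels (edges).
def movement_graph(track, cell=1.0):
    pixels = [(int(x // cell), int(y // cell)) for x, y in track]
    weight = Counter(pixels)   # node weight = number of fixes in the pixel
    edges = Counter((a, b) for a, b in zip(pixels, pixels[1:]) if a != b)
    degree = Counter()
    for a, b in edges:         # node degree from the distinct edges touching it
        degree[a] += 1
        degree[b] += 1
    return weight, degree, edges

track = [(0.2, 0.1), (1.3, 0.2), (1.6, 0.8), (2.4, 1.1), (1.2, 0.9), (0.4, 0.3)]
weight, degree, edges = movement_graph(track)
print("core pixel:", weight.most_common(1)[0])
```

High-weight, high-degree pixels flag candidate core areas, mirroring the role those node-level metrics play in the empirical analyses; betweenness would require a shortest-path routine (e.g. Brandes' algorithm) on the same graph.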

  10. An objective methodology for the evaluation of the air quality stations positioning

    International Nuclear Information System (INIS)

    Benassi, A.; Marson, G.; Baraldo, E.; Dalan, F.; Lorenzet, K.; Bellasio, R.; Bianconi, R.

    2006-01-01

    This work describes a methodology for evaluating the correct positioning of the monitoring stations of an air quality network. The methodology is based on the Italian legislation, the European Directives and on some technical documents used as guidelines at the European level. The paper describes all the assumptions on which the methodology is based and the results of its application to the air quality network of Region Veneto (Italy).

  11. AEGIS methodology and a perspective from AEGIS methodology demonstrations

    International Nuclear Information System (INIS)

    Dove, F.H.

    1981-03-01

    Objectives of AEGIS (Assessment of Effectiveness of Geologic Isolation Systems) are to develop the capabilities needed to assess the post-closure safety of waste isolation in geologic formations; demonstrate these capabilities on reference sites; apply the assessment methodology to assist the NWTS program in site selection, waste package and repository design; and perform repository site analyses for the licensing needs of NWTS. This paper summarizes the AEGIS methodology and the experience gained from methodology demonstrations, and provides an overview of the following areas: estimation of the response of a repository to perturbing geologic and hydrologic events; estimation of the transport of radionuclides from a repository to man; and assessment of uncertainties.

  12. Backbone of complex networks of corporations: The flow of control

    Science.gov (United States)

    Glattfelder, J. B.; Battiston, S.

    2009-09-01

    We present a methodology to extract the backbone of complex networks based on the weight and direction of links, as well as on nontopological properties of nodes. We show how the methodology can be applied in general to networks in which mass or energy is flowing along the links. In particular, the procedure enables us to address important questions in economics, namely, how control and wealth are structured and concentrated across national markets. We report on the first cross-country investigation of ownership networks, focusing on the stock markets of 48 countries around the world. On the one hand, our analysis confirms results expected on the basis of the literature on corporate control, namely, that in Anglo-Saxon countries control tends to be dispersed among numerous shareholders. On the other hand, it also reveals that in the same countries, control is found to be highly concentrated at the global level, namely, lying in the hands of very few important shareholders. Interestingly, the exact opposite is observed for European countries. These results have previously not been reported as they are not observable without the kind of network analysis developed here.
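    As a much-simplified stand-in for the backbone-extraction procedure described above (the real method also exploits link direction and nontopological node properties; the weights and threshold below are illustrative), one can keep, for each node, the strongest outgoing links until a chosen fraction of its total out-weight is covered:

```python
# Simplified weight-coverage backbone of a weighted digraph (a stand-in for
# the paper's procedure; graph and threshold are invented for illustration).
def backbone(out_links, coverage=0.8):
    kept = set()
    for node, links in out_links.items():
        total = sum(links.values())
        acc = 0.0
        # take out-links from strongest to weakest until coverage is reached
        for dst, w in sorted(links.items(), key=lambda kv: -kv[1]):
            kept.add((node, dst))
            acc += w
            if acc >= coverage * total:
                break
    return kept

# toy ownership-style weighted digraph
g = {"A": {"B": 0.7, "C": 0.2, "D": 0.1},
     "B": {"C": 0.5, "D": 0.5}}
print(sorted(backbone(g)))
```

In an ownership network the retained links would be the shareholdings that carry most of each owner's weight, which is the kind of reduced structure on which concentration of control can then be measured.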

  13. Declarative Networking

    CERN Document Server

    Loo, Boon Thau

    2012-01-01

    Declarative Networking is a programming methodology that enables developers to concisely specify network protocols and services, which are directly compiled to a dataflow framework that executes the specifications. Declarative networking proposes the use of a declarative query language for specifying and implementing network protocols, and employs a dataflow framework at runtime for communication and maintenance of network state. The primary goal of declarative networking is to greatly simplify the process of specifying, implementing, deploying and evolving a network design. In addition, decla

  14. Risk management methodology applied at thermal power plant

    International Nuclear Information System (INIS)

    Coppolino, R.

    2007-01-01

    Nowadays, the environmental risks connected to the productive processes and the products of an enterprise represent one of the main aspects which an adequate management approach has to foresee. In this paper we evaluate the guidelines followed by the Edipower thermoelectric power plant of S. Filippo di Mela (ME). These guidelines were issued in order to manage the chemical risk connected to the usage of the various chemicals with which the workers get in touch, identifying the risks with the methodology introduced by the AS/NZS 4360:2004 Risk Management Standard.

  15. A methodology for extracting knowledge rules from artificial neural networks applied to forecast demand for electric power; Uma metodologia para extracao de regras de conhecimento a partir de redes neurais artificiais aplicadas para previsao de demanda por energia eletrica

    Energy Technology Data Exchange (ETDEWEB)

    Steinmetz, Tarcisio; Souza, Glauber; Ferreira, Sandro; Santos, Jose V. Canto dos; Valiati, Joao [Universidade do Vale do Rio dos Sinos (PIPCA/UNISINOS), Sao Leopoldo, RS (Brazil). Programa de Pos-Graduacao em Computacao Aplicada], Emails: trsteinmetz@unisinos.br, gsouza@unisinos.br, sferreira, jvcanto@unisinos.br, jfvaliati@unisinos.br

    2009-07-01

    We present a methodology for the extraction of rules from Artificial Neural Networks (ANN) trained to forecast the electric load demand. The rules have the ability to express the knowledge regarding the behavior of load demand acquired by the ANN during the training process. The rules are presented to the user in an easy-to-read format, such as IF premise THEN consequence, where the premise relates to the input data submitted to the ANN (mapped as fuzzy sets), and the consequence appears as a linear equation describing the output to be presented by the ANN should the premise hold true. Experimentation demonstrates the method's capacity for acquiring and presenting high quality rules from neural networks trained to forecast electric load demand for several amounts of time in the future. (author)
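    Rules in the IF-premise-THEN-linear-consequence format described above can be evaluated like a small Takagi-Sugeno system (a hedged sketch: the membership bounds and coefficients below are invented, and the paper's exact rule representation may differ). Each rule fires to the degree its fuzzy premise matches the input, and the linear consequences are averaged by firing strength:

```python
# Evaluating extracted IF-THEN rules with fuzzy premises and linear
# consequences (all numbers invented for illustration).
def tri(x, a, b, c):
    """Triangular fuzzy membership on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

rules = [
    # IF previous load is LOW  THEN load = 0.9*prev + 50
    (lambda prev: tri(prev, 0, 500, 1000), lambda prev: 0.9 * prev + 50),
    # IF previous load is HIGH THEN load = 1.1*prev - 20
    (lambda prev: tri(prev, 500, 1000, 1500), lambda prev: 1.1 * prev - 20),
]

def forecast(prev):
    acts = [(m(prev), f(prev)) for m, f in rules]
    total = sum(a for a, _ in acts)
    return sum(a * y for a, y in acts) / total  # firing-strength-weighted mean

print(forecast(500), forecast(750))
```

At an input squarely inside one fuzzy set only that rule's equation applies; between sets the forecast blends the two linear consequences, which is what makes the extracted rule base both readable and smooth.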

  16. Optimal sampling schemes applied in geology

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-05-01

    Full Text Available [Presentation outline] Introduction to hyperspectral remote sensing; Objective of Study 1; Study Area; Data used; Methodology; Results; Background and Research Question for Study 2; Study Area and Data; Methodology; Results; Conclusions. (Debba, CSIR: Optimal Sampling Schemes applied in Geology, UP 2010)

  17. Assessing the integrity of local area network materials accountability systems against insider threats

    International Nuclear Information System (INIS)

    Jones, E.; Sicherman, A.

    1996-07-01

    DOE facilities rely increasingly on computerized systems to manage nuclear materials accountability data and to protect against diversion of nuclear materials or other malevolent acts (e.g., a hoax due to falsified data) by insider threats. Aspects of modern computerized material accountability (MA) systems, including powerful personal computers and applications on networks, mixed security environments, and more users with increased knowledge, skills and abilities, help heighten the concern about insider threats to the integrity of the system. In this paper, we describe a methodology for assessing MA applications to help decision makers identify ways of, and compare options for, preventing or mitigating possible additional risks from the insider threat. We illustrate insights from applying the methodology to local area network materials accountability systems.

  18. Applying Transformative Service Design to Develop Brand Community Service in Women, Children and Infants Retailing

    OpenAIRE

    Shian Wan; Yi-Chang Wang; Yu-Chien Lin

    2016-01-01

    This research discussed the various theories of service design, the importance of service design methodology, and the development of transformative service design framework. In this study, transformative service design is applied while building a new brand community service for women, children and infants retailing business. The goal is to enhance the brand recognition and customer loyalty, effectively increase the brand community engagement by embedding the brand community in social network ...

  19. Modular representation of layered neural networks.

    Science.gov (United States)

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.
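    The community-detection step described above can be approximated in a few lines (a hedged sketch: the paper uses modularity-based network analysis, while this toy greedily groups units by cosine similarity of their weight vectors; all numbers are invented):

```python
import math

# Toy grouping of hidden units by similarity of their incoming-weight
# vectors (a stand-in for the paper's modularity-based detection).
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / math.sqrt(sum(a * a for a in u) * sum(b * b for b in v))

def communities(weights, thr=0.9):
    groups = []
    for name, vec in weights.items():
        for g in groups:  # join the first group containing a similar unit
            if any(cosine(vec, weights[m]) >= thr for m in g):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups

w = {"h1": (1.0, 0.9, 0.0), "h2": (0.9, 1.0, 0.1),
     "h3": (0.0, 0.1, 1.0), "h4": (0.1, 0.0, 0.9)}
print(communities(w))
```

Units h1/h2 and h3/h4 respond to the same inputs and land in the same cluster; in the paper's use cases such clusters are the candidate independent sub-networks and the basis of the modularity index used for training assessment.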

  20. Methodological aspects of market study on residential, commercial and industrial sectors, of the Conversion Project for natural gas of existing network in Sao Paulo city

    International Nuclear Information System (INIS)

    Kishinami, R.I.; Perazza, A.A.

    1991-01-01

    The methodological aspects are presented of the market study, developed for the geographical area served by the existing naphtha gas network, which will be converted to natural gas in a two-year conversion program. (author)

  1. Artificial neural networks applied to quantitative elemental analysis of organic material using PIXE

    International Nuclear Information System (INIS)

    Correa, R.; Chesta, M.A.; Morales, J.R.; Dinator, M.I.; Requena, I.; Vila, I.

    2006-01-01

    An artificial neural network (ANN) has been trained with real-sample PIXE (particle X-ray induced emission) spectra of organic substances. Following the training stage ANN was applied to a subset of similar samples thus obtaining the elemental concentrations in muscle, liver and gills of Cyprinus carpio. Concentrations obtained with the ANN method are in full agreement with results from one standard analytical procedure, showing the high potentiality of ANN in PIXE quantitative analyses

  2. Artificial neural networks applied to quantitative elemental analysis of organic material using PIXE

    Energy Technology Data Exchange (ETDEWEB)

    Correa, R. [Universidad Tecnologica Metropolitana, Departamento de Fisica, Av. Jose Pedro Alessandri 1242, Nunoa, Santiago (Chile)]. E-mail: rcorrea@utem.cl; Chesta, M.A. [Universidad Nacional de Cordoba, Facultad de Matematica, Astronomia y Fisica, Medina Allende s/n Ciudad Universitaria, 5000 Cordoba (Argentina)]. E-mail: chesta@famaf.unc.edu.ar; Morales, J.R. [Universidad de Chile, Facultad de Ciencias, Departamento de Fisica, Las Palmeras 3425, Nunoa, Santiago (Chile)]. E-mail: rmorales@uchile.cl; Dinator, M.I. [Universidad de Chile, Facultad de Ciencias, Departamento de Fisica, Las Palmeras 3425, Nunoa, Santiago (Chile)]. E-mail: mdinator@uchile.cl; Requena, I. [Universidad de Granada, Departamento de Ciencias de la Computacion e Inteligencia Artificial, Daniel Saucedo Aranda s/n, 18071 Granada (Spain)]. E-mail: requena@decsai.ugr.es; Vila, I. [Universidad de Chile, Facultad de Ciencias, Departamento de Ecologia, Las Palmeras 3425, Nunoa, Santiago (Chile)]. E-mail: limnolog@uchile.cl

    2006-08-15

    An artificial neural network (ANN) has been trained with real-sample PIXE (particle X-ray induced emission) spectra of organic substances. Following the training stage ANN was applied to a subset of similar samples thus obtaining the elemental concentrations in muscle, liver and gills of Cyprinus carpio. Concentrations obtained with the ANN method are in full agreement with results from one standard analytical procedure, showing the high potentiality of ANN in PIXE quantitative analyses.

  3. Intelligent neural network diagnostic system

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2010-01-01

    Recently, the artificial neural network (ANN) has made a significant mark in the domain of diagnostic applications. Neural networks are used to implement complex non-linear mappings (functions) using simple elementary units interrelated through connections with adaptive weights. The performance of an ANN depends mainly on its topology and weights. Some systems have been developed using genetic algorithms (GA) to optimize the topology of the ANN, but they suffer from some limitations: (1) the computation time required to train the ANN several times to reach the required average weights; (2) the slowness of the GA in the optimization process; and (3) fitness noise appearing in the optimization of the ANN. This research suggests new approaches to overcome these limitations and to find optimal neural network architectures for learning particular problems. The proposed methodology is used to develop a diagnostic neural network system. It has been applied to a 600 MW turbo-generator as a case of real complex systems. The proposed system has proved its significant performance compared to two common methods used in diagnostic applications.

  4. Entrepreneurs and business networks in contemporary Andalucia

    OpenAIRE

    Garrués Irurzun, Joseán; Rubio Mondéjar, Juan; Hernández Armenteros, Salvador

    2018-01-01

    In recent years there has been renewed interest in the entrepreneur and their role in economic development. In the case of Spain, and especially in less developed regions such as Andalusia, the entrepreneur has been identified as responsible for economic backwardness. This paper is an approach to the long-term study of Andalusian entrepreneurship. We have applied the methodology of social network analysis to the documentation of incorporation contained in official records between the ye...

  5. Irrigation network design and reconstruction and its analysis by simulation model

    Directory of Open Access Journals (Sweden)

    Čistý Milan

    2014-06-01

    Full Text Available There are many problems related to pipe network rehabilitation, the main one being how to provide an increase in the hydraulic capacity of a system. Because of its complexity, conventional optimization techniques are poorly suited to solving this task. In recent years some successful attempts to apply modern heuristic methods to this problem have been published. The main part of the paper deals with applying one such technique, namely the harmony search methodology, to network rehabilitation optimization considering both the technical and economic aspects of the problem. A case study of a sprinkler irrigation system is presented in detail. Two alternatives of the rehabilitation design are compared. A modified linear programming method is used first, with new diameters proposed in the existing network so that it can satisfy the increased demand conditions with unchanged topology. This solution is contrasted with the looped one obtained using a harmony search algorithm.
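    The harmony search idea referred to above can be sketched compactly (a hedged toy: the candidate diameters, costs, and the capacity proxy below are invented, not the case study's hydraulic model). Each new candidate solution is assembled pipe by pipe either from the harmony memory, optionally pitch-adjusted to a neighbouring diameter, or at random, and replaces the worst stored harmony when it scores better:

```python
import random

# Minimal harmony search for pipe-diameter selection (illustrative cost data
# and capacity proxy, not the case study's hydraulic model).
DIAMS = [80, 100, 150, 200, 250, 300]                          # diameters, mm
COST = {80: 23, 100: 32, 150: 50, 200: 60, 250: 90, 300: 130}  # cost per m

def objective(sol, demand=40.0):
    cost = sum(COST[d] for d in sol)
    capacity = sum((d / 100) ** 2.63 for d in sol)  # crude flow-capacity proxy
    penalty = 1000 * max(0.0, demand - capacity)    # punish infeasibility
    return cost + penalty

def harmony_search(n_pipes=8, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    rng = random.Random(seed)
    memory = [[rng.choice(DIAMS) for _ in range(n_pipes)] for _ in range(hms)]
    memory.sort(key=objective)
    for _ in range(iters):
        new = []
        for i in range(n_pipes):
            if rng.random() < hmcr:            # draw from harmony memory
                v = rng.choice(memory)[i]
                if rng.random() < par:         # pitch adjustment: step once
                    j = DIAMS.index(v) + rng.choice([-1, 1])
                    v = DIAMS[max(0, min(len(DIAMS) - 1, j))]
            else:                              # random consideration
                v = rng.choice(DIAMS)
            new.append(v)
        if objective(new) < objective(memory[-1]):  # replace worst harmony
            memory[-1] = new
            memory.sort(key=objective)
    return memory[0]

best = harmony_search()
print(best, round(objective(best), 1))
```

The discrete diameter catalogue is exactly why heuristics fit here: the search space is combinatorial, and the penalty term plays the role of the hydraulic feasibility check a real solver would perform.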

  6. Social Networks, Engagement and Resilience in University Students

    Directory of Open Access Journals (Sweden)

    Elena Fernández-Martínez

    2017-12-01

    Full Text Available Analysis of social networks may be a useful tool for understanding the relationship between resilience and engagement, and this could be applied to educational methodologies, not only to improve academic performance, but also to create emotionally sustainable networks. This descriptive study was carried out on 134 university students. We collected the network structural variables, degree of resilience (CD-RISC 10), and engagement (UWES-S). The computer programs used were Excel, UCINET for network analysis, and SPSS for statistical analysis. The analysis revealed mean values of 28.61 for resilience, 2.98 for absorption, 4.82 for dedication, and 3.13 for vigour. The students had two preferred places for sharing information: the classroom and WhatsApp. The greater the value for engagement, the greater the degree of centrality in the friendship network among students who are beginning their university studies. This relationship becomes reversed as the students move to later academic years. In terms of resilience, the highest values correspond to greater centrality in the friendship networks. The variables of engagement and resilience influenced the university students’ support networks.
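    The degree centrality this study relates to engagement and resilience can be computed directly from an edge list. The five-student friendship network below is hypothetical, not data from the study:

```python
def degree_centrality(edges, nodes):
    """Normalised degree centrality for an undirected friendship network:
    the number of ties a node has, divided by the maximum possible (n - 1)."""
    deg = {v: 0 for v in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    n = len(nodes)
    return {v: d / (n - 1) for v, d in deg.items()}

# Hypothetical five-student friendship network
students = ["A", "B", "C", "D", "E"]
friendships = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E")]
centrality = degree_centrality(friendships, students)   # A is the most central
```

    Tools such as UCINET compute the same quantity (plus closeness, betweenness, and eigenvector variants) from an imported adjacency matrix.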

  7. Social Networks, Engagement and Resilience in University Students.

    Science.gov (United States)

    Fernández-Martínez, Elena; Andina-Díaz, Elena; Fernández-Peña, Rosario; García-López, Rosa; Fulgueiras-Carril, Iván; Liébana-Presa, Cristina

    2017-12-01

    Analysis of social networks may be a useful tool for understanding the relationship between resilience and engagement, and this could be applied to educational methodologies, not only to improve academic performance, but also to create emotionally sustainable networks. This descriptive study was carried out on 134 university students. We collected the network structural variables, degree of resilience (CD-RISC 10), and engagement (UWES-S). The computer programs used were Excel, UCINET for network analysis, and SPSS for statistical analysis. The analysis revealed mean values of 28.61 for resilience, 2.98 for absorption, 4.82 for dedication, and 3.13 for vigour. The students had two preferred places for sharing information: the classroom and WhatsApp. The greater the value for engagement, the greater the degree of centrality in the friendship network among students who are beginning their university studies. This relationship becomes reversed as the students move to later academic years. In terms of resilience, the highest values correspond to greater centrality in the friendship networks. The variables of engagement and resilience influenced the university students' support networks.

  8. Methodology to characterize a residential building stock using a bottom-up approach: a case study applied to Belgium

    Directory of Open Access Journals (Sweden)

    Samuel Gendebien

    2014-06-01

    Full Text Available In the last ten years, the development and implementation of measures to mitigate climate change have become of major importance. In Europe, the residential sector accounts for 27% of the final energy consumption [1], and therefore contributes significantly to CO2 emissions. Roadmaps towards energy-efficient buildings have been proposed [2]. In such a context, the detailed characterization of residential building stocks in terms of age, type of construction, insulation level, energy vector, and evolution prospects appears to be a useful contribution to the assessment of the impact of implementation of energy policies. In this work, a methodology to develop a tree structure characterizing a residential building stock is presented in the frame of a bottom-up approach that aims to model and simulate domestic energy use. The methodology is applied to the Belgian case for the current situation and up to the 2030 horizon. The potential applications of the developed tool are outlined.

  9. Discriminating lysosomal membrane protein types using dynamic neural network.

    Science.gov (United States)

    Tripathi, Vijay; Gupta, Dwijendra Kumar

    2014-01-01

    This work presents a dynamic artificial neural network methodology that classifies proteins into their classes from their sequences alone: the lysosomal membrane protein classes and the various other membrane protein classes. In this paper, a neural network-based lysosomal-associated membrane protein type prediction system is proposed. Different protein sequence representations are fused to extract the features of a protein sequence, using seven feature sets: amino acid (AA) composition, sequence length, hydrophobic group, electronic group, sum of hydrophobicity, R-group, and dipeptide composition. To reduce the dimensionality of the large feature vector, we applied principal component analysis. The probabilistic neural network, generalized regression neural network, and Elman regression neural network (RNN) are used as classifiers and compared with the layer recurrent network (LRN), a dynamic network. Dynamic networks have memory, i.e., their output depends not only on the current input but also on previous outputs. Among all the artificial neural networks tested, the LRN classifier achieves the highest accuracy. The overall accuracy of jackknife cross-validation is 93.2% for the data-set. These results suggest that the method can be effectively applied to discriminate lysosomal-associated membrane proteins from other membrane proteins (Type-I, outer membrane proteins, GPI-anchored) and globular proteins, and they also indicate that this protein sequence representation can better reflect the core features of membrane proteins than the classical AA composition.
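    The principal component analysis step used above for dimensionality reduction can be sketched as a power iteration on the sample covariance matrix. The 2-D points below are invented stand-ins for the much larger fused feature vectors:

```python
def top_principal_component(data, iters=200):
    """First principal component via power iteration on the covariance matrix:
    the direction along which the (mean-centred) data varies the most."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in data]
    # Sample covariance matrix of the centred data
    cov = [[sum(x[k][i] * x[k][j] for k in range(n)) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        # Repeatedly apply cov and renormalise: converges to the dominant eigenvector
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    return v

# Points spread mainly along the x-axis: the component should align with it
pts = [[-3.0, 0.1], [-1.0, -0.1], [1.0, 0.1], [3.0, -0.1]]
v = top_principal_component(pts)
```

    Projecting each feature vector onto the top few such components yields the reduced representation fed to the classifiers.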

  10. Default cascades in complex networks: topology and systemic risk.

    Science.gov (United States)

    Roukny, Tarik; Bersini, Hugues; Pirotte, Hugues; Caldarelli, Guido; Battiston, Stefano

    2013-09-26

    The recent crisis has brought to the fore a crucial question that remains still open: what would be the optimal architecture of financial systems? We investigate the stability of several benchmark topologies in a simple default cascading dynamics in bank networks. We analyze the interplay of several crucial drivers, i.e., network topology, banks' capital ratios, market illiquidity, and random vs targeted shocks. We find that, in general, topology matters only--but substantially--when the market is illiquid. No single topology is always superior to others. In particular, scale-free networks can be both more robust and more fragile than homogeneous architectures. This finding has important policy implications. We also apply our methodology to a comprehensive dataset of an interbank market from 1999 to 2011.
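    A minimal threshold model of the default cascade dynamics studied here, with invented exposures and capital buffers rather than the paper's interbank data:

```python
def default_cascade(exposures, capital, shocked):
    """Threshold default cascade on an interbank exposure network.

    exposures[i][j] = amount bank i has lent to bank j (i loses it if j defaults).
    A bank defaults once its accumulated losses reach its capital buffer.
    """
    n = len(capital)
    defaulted = set(shocked)
    losses = [0.0] * n
    frontier = set(shocked)
    while frontier:
        nxt = set()
        for j in frontier:                      # each newly defaulted bank j
            for i in range(n):                  # imposes losses on its creditors i
                if i in defaulted:
                    continue
                losses[i] += exposures[i][j]
                if losses[i] >= capital[i]:
                    nxt.add(i)
        defaulted |= nxt
        frontier = nxt
    return defaulted

# Toy 4-bank ring: each bank lends 5 to the next; thin capital buffers
exposures = [[0, 5, 0, 0],
             [0, 0, 5, 0],
             [0, 0, 0, 5],
             [5, 0, 0, 0]]
capital = [4, 4, 6, 4]          # bank 2 can absorb a 5-unit loss, the others cannot
hit = sorted(default_cascade(exposures, capital, shocked={1}))  # → [0, 1, 3]
```

    In this toy run the shock to bank 1 topples banks 0 and 3 in turn, while bank 2's larger buffer stops the cascade; the paper varies topology, capital ratios and liquidity around exactly this kind of propagation.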

  11. Characterization of interstitial lung disease in chest radiographs using SOM artificial neural network

    International Nuclear Information System (INIS)

    Azevedo-Marques, P.M. de; Ambrosio, P.E.; Pereira, R.R. Jr.; Valini, R. de A.; Salomao, S.C.

    2007-01-01

    This paper presents an automated approach that applies a self-organizing map (SOM) artificial neural network (ANN) as a tool for feature extraction and dimensionality reduction, to recognize and characterize radiologic patterns of interstitial lung diseases in chest radiography. After feature extraction and dimensionality reduction, a multilayer perceptron (MLP) ANN is applied to classify the radiologic patterns as normal, linear, nodular or mixed. A leave-one-out methodology was applied for training and testing over a database containing 17 samples of linear pattern, 9 samples of nodular pattern, 9 samples of mixed pattern and 18 samples of normal pattern. The MLP network achieved an average of 88.7% correct classification, with 100% correct classification for the linear pattern, 55.5% for the nodular pattern, 77.7% for the mixed pattern and 100% for the normal pattern. (orig.)

  12. Seismic Hazard Analysis on a Complex, Interconnected Fault Network

    Science.gov (United States)

    Page, M. T.; Field, E. H.; Milner, K. R.

    2017-12-01

    In California, seismic hazard models have evolved from simple, segmented prescriptive models to much more complex representations of multi-fault and multi-segment earthquakes on an interconnected fault network. During the development of the 3rd Uniform California Earthquake Rupture Forecast (UCERF3), the prevalence of multi-fault ruptures in the modeling was controversial. Yet recent earthquakes, for example the Kaikōura earthquake, as well as new research on the potential of multi-fault ruptures (e.g., Nissen et al., 2016; Sahakian et al., 2017), have validated this approach. For large crustal earthquakes, multi-fault ruptures may be the norm rather than the exception. As datasets improve and we can view the rupture process at a finer scale, the interconnected, fractal nature of faults is revealed even by individual earthquakes. What is the proper way to model earthquakes on a fractal fault network? We show multiple lines of evidence that connectivity even in modern models such as UCERF3 may be underestimated, although clustering in UCERF3 mitigates some modeling simplifications. We need a methodology that can be applied equally well where the fault network is well-mapped and where it is not - an extendable methodology that allows us to "fill in" gaps in the fault network and in our knowledge.

  13. On Research Methodology in Applied Linguistics in 2002-2008

    Science.gov (United States)

    Martynychev, Andrey

    2010-01-01

    This dissertation examined the status of data-based research in applied linguistics through an analysis of published research studies in nine peer-reviewed applied linguistics journals ("Applied Language Learning, The Canadian Modern Language Review / La Revue canadienne des langues vivantes, Current Issues in Language Planning, Dialog on Language…

  14. An extensive assessment of network alignment algorithms for comparison of brain connectomes.

    Science.gov (United States)

    Milano, Marianna; Guzzi, Pietro Hiram; Tymofieva, Olga; Xu, Duan; Hess, Christofer; Veltri, Pierangelo; Cannataro, Mario

    2017-06-06

    Recently the study of the complex system of connections in neural systems, i.e. the connectome, has gained a central role in the neurosciences. The modeling and analysis of connectomes are therefore a growing area. Here we focus on the representation of connectomes by using graph theory formalisms. Macroscopic human brain connectomes are usually derived from neuroimages; the analyzed brains are co-registered in the image domain and brought to a common anatomical space. An atlas is then applied in order to define anatomically meaningful regions that will serve as the nodes of the network - this process is referred to as parcellation. Atlas-based parcellations present some known limitations in cases of early brain development and abnormal anatomy. Consequently, it has recently been proposed to perform atlas-free random brain parcellation into nodes and align brains in the network space instead of the anatomical image space, as a way to deal with the unknown correspondences of the parcels. Such a process requires modeling the brain using graph theory and subsequently comparing the structure of the graphs. The latter step may be modeled as a network alignment (NA) problem. In this work, we first define the problem formally, then we test six existing state-of-the-art network aligners on diffusion MRI-derived brain networks. We compare the performance of the algorithms by assessing six topological measures, and we also evaluate the robustness of the algorithms to alterations of the dataset. The results confirm that NA algorithms may be applied in cases of atlas-free parcellation for a fully network-driven comparison of connectomes, and the analysis shows that MAGNA++ is the best global alignment algorithm. The paper presents a new analysis methodology that uses network alignment for validating atlas-free parcellation of brain connectomes; the methodology has been tested on several brain datasets.

  15. Review Essay: Does Qualitative Network Analysis Exist?

    Directory of Open Access Journals (Sweden)

    Rainer Diaz-Bone

    2007-01-01

    Full Text Available Social network analysis was formed and established in the 1970s as a way of analyzing systems of social relations. In this review the theoretical-methodological standpoint of social network analysis ("structural analysis") is introduced and the different forms of social network analysis are presented. Structural analysis argues that social actors and social relations are embedded in social networks, meaning that the action and perception of actors, as well as the performance of social relations, are influenced by the network structure. Since the 1990s structural analysis has integrated concepts such as agency, discourse and symbolic orientation, and in this way structural analysis has opened itself up. Since then there has been increasing use of qualitative methods in network analysis. They are used to include the perspective of the analyzed actors, to explore networks, and to understand network dynamics. In the reviewed book, edited by Betina HOLLSTEIN and Florian STRAUS, the twenty predominantly empirically orientated contributions demonstrate the possibilities of combining quantitative and qualitative methods in network analyses in different research fields. In this review we examine how the contributions succeed in applying and developing the structural analysis perspective, and the self-positioning of "qualitative network analysis" is evaluated. URN: urn:nbn:de:0114-fqs0701287

  16. Neural networks applied to inverters control; Les reseaux de neurones appliques a la commande des convertisseurs

    Energy Technology Data Exchange (ETDEWEB)

    Jammes, B; Marpinard, J C

    1996-12-31

    Neural networks are scarcely applied to power electronics. This attempt includes two different topics: optimal control and computerized simulation. The learning has been performed through output error feedback. For implementation, a buck converter has been used as a voltage pulse generator. (D.L.) 7 refs.

  17. Perception of Communication Network Fraud Dynamics by Network ...

    African Journals Online (AJOL)

    Considering the implications of the varied nature of the potential targets, the paper identifies the need to develop effective intelligence-analysis methodologies for network fraud detection and prevention by network administrators and stakeholders. The paper further notes that organizations are fighting an increasingly ...

  18. A methodological framework applied to the choice of the best method in replacement of nuclear systems

    International Nuclear Information System (INIS)

    Vianna Filho, Alfredo Marques

    2009-01-01

    The economic equipment replacement problem is a central question in nuclear engineering. On the one hand, new equipment is more attractive given its better performance, higher reliability, lower maintenance cost, etc., but it requires a higher initial investment. On the other hand, old equipment has lower performance, lower reliability and especially higher maintenance costs, but in contrast has lower financial and insurance costs. The weighting of all these costs can be carried out with deterministic and probabilistic methods applied to the study of equipment replacement. Two distinct types of problem are examined: replacement imposed by wear and replacement imposed by failures. Deterministic methods are discussed for solving the problem of nuclear system replacement imposed by wear, and probabilistic methods for the problem of replacement imposed by failures. The aim of this paper is to present a methodological framework for choosing the most suitable method for the problem of nuclear system replacement. (author)

  19. Systemic design methodologies for electrical energy systems analysis, synthesis and management

    CERN Document Server

    Roboam, Xavier

    2012-01-01

    This book proposes systemic design methodologies applied to electrical energy systems, in particular analysis and system management, modeling and sizing tools. It includes 8 chapters: after an introduction to the systemic approach (history, basics & fundamental issues, index terms) for designing energy systems, this book presents two different graphical formalisms especially dedicated to multidisciplinary devices modeling, synthesis and analysis: Bond Graph and COG/EMR. Other systemic analysis approaches for quality and stability of systems, as well as for safety and robustness analysis tools are also proposed. One chapter is dedicated to energy management and another is focused on Monte Carlo algorithms for electrical systems and networks sizing. The aim of this book is to summarize design methodologies based in particular on a systemic viewpoint, by considering the system as a whole. These methods and tools are proposed by the most important French research laboratories, which have many scientific partn...

  20. Exploring Peer Relationships, Friendships and Group Work Dynamics in Higher Education: Applying Social Network Analysis

    Science.gov (United States)

    Mamas, Christoforos

    2018-01-01

    This study primarily applied social network analysis (SNA) to explore the relationship between friendships, peer social interactions and group work dynamics within a higher education undergraduate programme in England. A critical case study design was adopted so as to allow for an in-depth exploration of the students' voice. In doing so, the views…

  1. Least squares methodology applied to LWR-PV damage dosimetry, experience and expectations

    International Nuclear Information System (INIS)

    Wagschal, J.J.; Broadhead, B.L.; Maerker, R.E.

    1979-01-01

    The development of an advanced methodology for Light Water Reactor (LWR) Pressure Vessel (PV) damage dosimetry applications is the subject of an ongoing EPRI-sponsored research project at ORNL. This methodology includes a generalized least squares approach to the combination of data. The data include measured foil activations, evaluated cross sections and calculated fluxes. The uncertainties associated with the data, as well as with the calculational methods, are an essential component of this methodology. Activation measurements in two NBS benchmark neutron fields (252Cf and ISNF) and in a prototypic reactor field (Oak Ridge Pool Critical Assembly - PCA) are being analyzed using a generalized least squares method. The sensitivity of the results to the representation of the uncertainties (covariances) was carefully checked. Cross-element covariances were found to be of utmost importance

  2. Applying Gradient Descent in Convolutional Neural Networks

    Science.gov (United States)

    Cui, Nan

    2018-04-01

    With the development of integrated circuits and computer science, people have become more interested in solving practical issues via information technologies. Along with that, a new subject called Artificial Intelligence (AI) has come up. One popular AI research interest is recognition algorithms. In this paper, one of the most common algorithms, the Convolutional Neural Network (CNN), will be introduced for image recognition. Understanding its theory and structure is of great significance for every scholar interested in this field. A Convolutional Neural Network is an artificial neural network that combines the mathematical method of convolution with a neural network. The hierarchical structure of a CNN provides it with reliable computing speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Meanwhile, by combining the Back Propagation (BP) mechanism with the Gradient Descent (GD) method, CNNs gain the ability to self-train and learn in depth. Basically, BP provides backward feedback for enhancing reliability, and GD is used for the self-training process. This paper mainly discusses CNNs and the related BP and GD algorithms, including the basic structure and function of CNNs, details of each layer, the principles and features of BP and GD, and some examples in practice, with a summary at the end.
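    The gradient descent update this record refers to, w ← w − lr · ∇f(w), is the same rule whether it trains a full CNN via backpropagation or, as in the toy sketch below, fits a two-parameter line by least squares (the data points are invented):

```python
def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Plain gradient descent: repeatedly step against the gradient."""
    w = list(w0)
    for _ in range(steps):
        g = grad(w)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

# Fit y = w0*x + w1 to two points by minimising squared error; CNN training
# applies the same update to every weight, with BP supplying the gradients.
data = [(1.0, 3.0), (2.0, 5.0)]   # generated from y = 2x + 1

def grad(w):
    g0 = g1 = 0.0
    for x, y in data:
        err = w[0] * x + w[1] - y  # prediction error on this point
        g0 += 2 * err * x          # d(err^2)/dw0
        g1 += 2 * err              # d(err^2)/dw1
    return [g0, g1]

w = gradient_descent(grad, [0.0, 0.0], lr=0.05, steps=2000)  # w → ≈ [2, 1]
```

    The learning rate trades off speed against stability: too large and the iterates diverge, too small and convergence stalls, which is why CNN training schedules it carefully.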

  3. Artificial neural networks applied to forecasting time series.

    Science.gov (United States)

    Montaño Moreno, Juan J; Palmer Pol, Alfonso; Muñoz Gracia, Pilar

    2011-04-01

    This study offers a description and comparison of the main models of Artificial Neural Networks (ANN) which have proved useful in time series forecasting, as well as a standard procedure for the practical application of ANN in this type of task. The Multilayer Perceptron (MLP), Radial Basis Function (RBF), Generalized Regression Neural Network (GRNN), and Recurrent Neural Network (RNN) models are analyzed. With this aim in mind, we use a time series made up of 244 time points. A comparative study establishes that the error made by the four neural network models analyzed is less than 10%. In accordance with the interpretation criteria of this performance, it can be concluded that the neural network models show a close fit regarding their forecasting capacity. The model with the best performance is the RBF, followed by the RNN and MLP. The GRNN model is the one with the worst performance. Finally, we analyze the advantages and limitations of ANN, the possible solutions to these limitations, and provide an orientation towards future research.

  4. Assessing harmonic current source modelling and power definitions in balanced and unbalanced networks

    Energy Technology Data Exchange (ETDEWEB)

    Atkinson-Hope, Gary; Stemmet, W.C. [Cape Peninsula University of Technology, Cape Town Campus, Cape Town (South Africa)

    2006-07-01

    The purpose of this paper is to assess the DIgSILENT PowerFactory software power definitions (indices) in terms of phase and sequence components for balanced and unbalanced networks when harmonic distortion is present, and to compare its results to hand calculations performed following recommendations made by the IEEE Working Group on this topic. This paper also includes the development of a flowchart for calculating power indices in balanced and unbalanced three-phase networks when non-sinusoidal voltages and currents are present. A further purpose is to determine how two industrial-grade harmonic analysis software packages (DIgSILENT and ERACS) model three-phase harmonic sources used for current penetration studies, and to compare their results when applied to a network. From the investigations, another objective was to develop a methodology for modelling harmonic current sources based on a spectrum obtained from measurements. Three case studies were conducted and the assessment and developed methodologies were shown to be effective. (Author)

  5. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    International Nuclear Information System (INIS)

    Tarifeño-Saldivia, Ariel; Pavez, Cristian; Soto, Leopoldo; Mayer, Roberto E

    2015-01-01

    This work introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. The methodology is based on the calibration of the counter in pulse mode and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from detection of the burst of neutrons. An improvement of more than one order of magnitude in the accuracy of a paraffin-wax-moderated 3He-filled tube is obtained by using this methodology with respect to previous calibration methods. (paper)
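    The charge-to-count step described above can be sketched as follows. The Gaussian single-event charge distribution, all numbers, and the simple error propagation below are invented for illustration; the paper's actual statistical model may differ:

```python
import random
import statistics

def estimate_burst_yield(total_charge, calib_charges):
    """Estimate the number of detected neutrons in a burst from the integrated
    charge, using single-event charges recorded during pulse-mode calibration.

    Simplified model: n_hat = Q / mean(q), with the spread of the single-event
    charge distribution propagated into a rough counting uncertainty.
    """
    q_mean = statistics.mean(calib_charges)
    q_std = statistics.stdev(calib_charges)
    n_hat = total_charge / q_mean
    # Var(n_hat) ≈ n * (sigma_q / mean_q)^2 for a sum of n independent charges
    sigma_n = (n_hat ** 0.5) * (q_std / q_mean) if n_hat > 0 else 0.0
    return n_hat, sigma_n

rng = random.Random(0)
calib = [rng.gauss(1.0, 0.2) for _ in range(1000)]      # pulse-mode calibration
burst = sum(rng.gauss(1.0, 0.2) for _ in range(500))    # simulated 500-event burst
n_hat, sigma_n = estimate_burst_yield(burst, calib)     # n_hat ≈ 500
```

    In pulse mode each event is resolved individually, so the mean single-event charge can be measured; during a burst only the integrated charge is available, which is why the calibration distribution carries the statistics.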

  6. An approach for optimally extending mathematical models of signaling networks using omics data.

    Science.gov (United States)

    Bianconi, Fortunato; Patiti, Federico; Baldelli, Elisa; Crino, Lucio; Valigi, Paolo

    2015-01-01

    Mathematical modeling is a key process in Systems Biology, and the use of computational tools such as Cytoscape for omics data processing needs to be integrated into the modeling activity. In this paper we propose a new methodology for modeling signaling networks by combining ordinary differential equation models and a gene recommender system, GeneMANIA. We started from existing models, stored in the BioModels database, and generated a query to use as input for the GeneMANIA algorithm. The output of the recommender system was then mapped back to kinetic reactions that were finally added to the starting model. We applied the proposed methodology to the EGFR-IGF1R signal transduction network, which plays an important role in translational oncology and cancer therapy of non-small cell lung cancer.

  7. Artificial neural network applications in ionospheric studies

    Directory of Open Access Journals (Sweden)

    L. R. Cander

    1998-06-01

    Full Text Available The ionosphere of Earth exhibits considerable spatial changes and large temporal variability on various timescales, related to the mechanisms of creation, decay and transport of space ionospheric plasma. Many techniques for modelling electron density profiles through the entire ionosphere have been developed in order to solve the "age-old problem" of ionospheric physics, which has not yet been fully solved. A new way to address this problem is by applying artificial intelligence methodologies to the current large amounts of solar-terrestrial and ionospheric data. It is the aim of this paper to show, through the most recent examples, that modern development of numerical models for ionospheric monthly median long-term prediction and daily hourly short-term forecasting may proceed successfully by applying artificial neural networks. The performance of these techniques is illustrated with different artificial neural networks developed to model and predict the temporal and spatial variations of the ionospheric critical frequency, f0F2, and Total Electron Content (TEC). Comparisons between results obtained by the proposed approaches and measured f0F2 and TEC data provide prospects for future applications of artificial neural networks in ionospheric studies.

  8. A systematic review of methodology applied during preclinical anesthetic neurotoxicity studies: important issues and lessons relevant to the design of future clinical research.

    Science.gov (United States)

    Disma, Nicola; Mondardini, Maria C; Terrando, Niccolò; Absalom, Anthony R; Bilotta, Federico

    2016-01-01

    Preclinical evidence suggests that anesthetic agents harm the developing brain, thereby causing long-term neurocognitive impairments. It is not clear if these findings apply to humans, and retrospective epidemiological studies thus far have failed to show definitive evidence that anesthetic agents are harmful to the developing human brain. The aim of this systematic review was to summarize the preclinical studies published over the past decade, with a focus on methodological issues, to facilitate comparison between different preclinical studies and inform better design of future trials. The literature search identified 941 articles related to the topic of neurotoxicity. As the primary aim of this systematic review was to compare methodologies applied in animal studies to inform future trials, we excluded a priori all articles focused on putative mechanisms of neurotoxicity and on neuroprotective agents. Forty-seven preclinical studies were finally included in this review. The methods used in these studies were highly heterogeneous: animals were exposed to anesthetic agents at different developmental stages, in various doses and in various combinations with other drugs, and overall showed diverse toxicity profiles. Physiological monitoring and maintenance of physiological homeostasis were variable, and the use of cognitive tests was generally limited to assessment of specific brain areas, with restricted translational relevance to humans. Comparison between studies is thus complicated by this heterogeneous methodology, and the relevance of the combined body of literature to humans remains uncertain. Future preclinical studies should use better standardized methodologies to facilitate transferability of findings from preclinical into clinical science. © 2015 John Wiley & Sons Ltd.

  9. An Applied Study of Implementation of the Advanced Decommissioning Costing Methodology for Intermediate Storage Facility for Spent Fuel in Studsvik, Sweden with special emphasis to the application of the Omega code

    Energy Technology Data Exchange (ETDEWEB)

    Kristofova, Kristina; Vasko, Marek; Daniska, Vladimir; Ondra, Frantisek; Bezak, Peter [DECOM Slovakia, spol. s.r.o., J. Bottu 2, SK-917 01 Trnava (Slovakia); Lindskog, Staffan [Swedish Nuclear Power Inspectorate, Stockholm (Sweden)

    2007-01-15

    The presented study is focused on an analysis of decommissioning costs for the Intermediate Storage Facility for Spent Fuel (FA) facility in Studsvik prepared by SVAFO and a proposal of the advanced decommissioning costing methodology application. Therefore, this applied study concentrates particularly in the following areas: 1. Analysis of FA facility cost estimates prepared by SVAFO including description of FA facility in Studsvik, summarised input data, applied cost estimates methodology and summarised results from SVAFO study. 2. Discussion of results of the SVAFO analysis, proposals for enhanced cost estimating methodology and upgraded structure of inputs/outputs for decommissioning study for FA facility. 3. Review of costing methodologies with the special emphasis on the advanced costing methodology and cost calculation code OMEGA. 4. Discussion on implementation of the advanced costing methodology for FA facility in Studsvik together with: - identification of areas of implementation; - analyses of local decommissioning infrastructure; - adaptation of the data for the calculation database; - inventory database; and - implementation of the style of work with the computer code OMEGA.

  10. An Applied Study of Implementation of the Advanced Decommissioning Costing Methodology for Intermediate Storage Facility for Spent Fuel in Studsvik, Sweden with special emphasis to the application of the Omega code

    International Nuclear Information System (INIS)

    Kristofova, Kristina; Vasko, Marek; Daniska, Vladimir; Ondra, Frantisek; Bezak, Peter; Lindskog, Staffan

    2007-01-01

    The presented study is focused on an analysis of decommissioning costs for the Intermediate Storage Facility for Spent Fuel (FA) facility in Studsvik prepared by SVAFO and a proposal of the advanced decommissioning costing methodology application. Therefore, this applied study concentrates particularly in the following areas: 1. Analysis of FA facility cost estimates prepared by SVAFO including description of FA facility in Studsvik, summarised input data, applied cost estimates methodology and summarised results from SVAFO study. 2. Discussion of results of the SVAFO analysis, proposals for enhanced cost estimating methodology and upgraded structure of inputs/outputs for decommissioning study for FA facility. 3. Review of costing methodologies with the special emphasis on the advanced costing methodology and cost calculation code OMEGA. 4. Discussion on implementation of the advanced costing methodology for FA facility in Studsvik together with: - identification of areas of implementation; - analyses of local decommissioning infrastructure; - adaptation of the data for the calculation database; - inventory database; and - implementation of the style of work with the computer code OMEGA

  11. A methodology for Electric Power Load Forecasting

    Directory of Open Access Journals (Sweden)

    Eisa Almeshaiei

    2011-06-01

    Full Text Available Electricity demand forecasting is a central and integral process for planning periodical operations and facility expansion in the electricity sector. The demand pattern is highly complex due to the deregulation of energy markets. Therefore, finding an appropriate forecasting model for a specific electricity network is not an easy task. Although many forecasting methods have been developed, none can be generalised to all demand patterns. This paper therefore presents a pragmatic methodology that can be used as a guide to construct Electric Power Load Forecasting models. The methodology is mainly based on decomposition and segmentation of the load time series. Several statistical analyses are involved to study the load features and forecasting precision, such as moving averages and probability plots of load noise. Real daily load data from the Kuwaiti electric network are used as a case study. Some results are reported to guide forecasting of the future needs of this network.
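
    The decomposition step this abstract describes can be sketched in a few lines. The data below are synthetic placeholders, not the Kuwaiti case-study series: a centred moving average separates a smooth trend from a noise component whose spread bounds the achievable forecasting precision.

```python
import math, random

random.seed(1)
# Synthetic daily load (MW): a weekly cycle plus Gaussian noise.
load = [900 + 80 * math.sin(2 * math.pi * d / 7) + random.gauss(0, 10)
        for d in range(60)]

def moving_average(series, window):
    """Centred moving average; shorter windows are used at the edges."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

# A short window smooths noise while keeping most of the weekly cycle
# in the trend component.
trend = moving_average(load, 3)
noise = [x - t for x, t in zip(load, trend)]
sigma = (sum(n * n for n in noise) / len(noise)) ** 0.5
```

The standard deviation of the noise component is the kind of quantity the paper's probability plots of load noise would then examine.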

  12. A methodology for a quantitative assessment of safety culture in NPPs based on Bayesian networks

    International Nuclear Information System (INIS)

    Kim, Young Gab; Lee, Seung Min; Seong, Poong Hyun

    2017-01-01

    Highlights: • A safety culture framework and a quantitative methodology to assess safety culture were proposed. • The relations among the norm system, the safety management system and workers' awareness were established. • The safety culture probability at NPPs was updated by collecting actual organizational data. • Vulnerable areas and the relationship between safety culture and human error were confirmed. - Abstract: For a long time, safety has been recognized as a top priority in high-reliability industries such as aviation and nuclear power plants (NPPs). Establishing a safety culture requires a number of actions to enhance safety, one of which is changing the safety culture awareness of workers. The concept of safety culture in the nuclear power domain was established in the International Atomic Energy Agency (IAEA) safety series, wherein the importance of employee attitudes for maintaining organizational safety was emphasized. Safety culture assessment is a critical step in the process of enhancing safety culture. In this respect, assessment focuses on measuring the level of safety culture in an organization and improving any weaknesses in the organization. However, many continue to think that the concept of safety culture is abstract and unclear, and the results of safety culture assessments are mostly subjective and qualitative. Given this situation, this paper suggests a quantitative methodology for safety culture assessment based on a Bayesian network. The proposed safety culture framework for NPPs comprises: (1) a norm system, (2) a safety management system, (3) safety culture awareness of workers, and (4) worker behavior. The level of safety culture awareness of workers at NPPs was inferred through the proposed methodology. Then, areas of the organization that were vulnerable in terms of safety culture were derived by analyzing observational evidence. We also confirmed that the frequency of events involving human error…
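
    As a rough illustration of the Bayesian-network idea behind such a methodology (the structure, node names and probability tables below are invented for the sketch and are not the authors' model), evidence about one organisational layer updates the probability of high worker awareness:

```python
# Toy chain: NormSystem -> SafetyMgmt -> Awareness, with illustrative CPTs.
p_norm_good = 0.7
p_mgmt_good = {True: 0.8, False: 0.3}    # P(mgmt good | norm system good/bad)
p_aware_high = {True: 0.9, False: 0.4}   # P(awareness high | mgmt good/bad)

def p_awareness_high(norm_evidence=None):
    """Enumerate over hidden nodes; optionally condition on the norm system."""
    total = 0.0
    for norm in (True, False):
        if norm_evidence is not None and norm != norm_evidence:
            continue
        # Evidence fixes the node; otherwise use its prior.
        pn = 1.0 if norm_evidence is not None else (
            p_norm_good if norm else 1 - p_norm_good)
        for mgmt in (True, False):
            pm = p_mgmt_good[norm] if mgmt else 1 - p_mgmt_good[norm]
            total += pn * pm * p_aware_high[mgmt]
    return total

prior = p_awareness_high()                       # no observations
posterior = p_awareness_high(norm_evidence=True) # norm system observed good
```

Observing a strong norm system raises the inferred probability of high awareness, which is the update mechanism the assessment relies on.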

  13. Validation and quantification of uncertainty in coupled climate models using network analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bracco, Annalisa [Georgia Inst. of Technology, Atlanta, GA (United States)

    2015-08-10

    We developed a fast, robust and scalable methodology to examine, quantify, and visualize climate patterns and their relationships. It is based on a set of notions, algorithms and metrics used in the study of graphs, referred to as complex network analysis. This approach can be applied to explain known climate phenomena in terms of an underlying network structure and to uncover regional and global linkages in the climate system, while comparing general circulation model outputs with observations. The proposed method is based on a two-layer network representation and is substantially new among the available network methodologies developed for climate studies. At the first layer, gridded climate data are used to identify "areas", i.e., geographical regions that are highly homogeneous in terms of the given climate variable. At the second layer, the identified areas are interconnected with links of varying strength, forming a global climate network. The robustness of the method (i.e. the ability to separate topologically distinct fields while correctly identifying similarities) has been extensively tested, and it has been shown to provide a reliable, fast framework for comparing and ranking the ability of climate models to reproduce observed climate patterns and their connectivity. We further developed the methodology to account for lags in the connectivity between climate patterns and refined our area identification algorithm to account for autocorrelation in the data. The new methodology based on complex network analysis has been applied to state-of-the-art climate model simulations that participated in the last IPCC (Intergovernmental Panel on Climate Change) assessment to verify their performance, quantify uncertainties, and uncover changes in global linkages between past and future projections. Network properties of modeled sea surface temperature and rainfall over 1956–2005 have been constrained towards observations or reanalysis data sets.
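
    The second network layer described here can be sketched as a correlation network: nodes are area-averaged series and links carry correlation strength. The series below are synthetic stand-ins for area-averaged anomalies, and the 0.5 threshold is an arbitrary choice for the sketch.

```python
import math, random
random.seed(0)

base = [random.gauss(0, 1) for _ in range(200)]
areas = {
    "A": [b + random.gauss(0, 0.3) for b in base],   # coupled to the base mode
    "B": [b + random.gauss(0, 0.3) for b in base],
    "C": [random.gauss(0, 1) for _ in range(200)],   # independent region
}

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

names = sorted(areas)
links = {(u, v): pearson(areas[u], areas[v])
         for i, u in enumerate(names) for v in names[i + 1:]}
strong = {edge for edge, r in links.items() if abs(r) > 0.5}
```

Areas sharing an underlying mode of variability end up linked, while independent regions do not, which is how the resulting graph encodes teleconnection-like structure.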

  14. Implications of applying methodological shortcuts to expedite systematic reviews: three case studies using systematic reviews from agri-food public health.

    Science.gov (United States)

    Pham, Mai T; Waddell, Lisa; Rajić, Andrijana; Sargeant, Jan M; Papadopoulos, Andrew; McEwen, Scott A

    2016-12-01

    The rapid review is an approach to synthesizing research evidence when a shorter timeframe is required. The implications of what is lost in terms of rigour, increased bias and accuracy when conducting a rapid review have not yet been elucidated. We assessed the potential implications of methodological shortcuts on the outcomes of three completed systematic reviews addressing agri-food public health topics. For each review, shortcuts were applied individually to assess the impact on the number of relevant studies included and whether omitted studies affected the direction, magnitude or precision of summary estimates from meta-analyses. In most instances, the shortcuts resulted in at least one relevant study being omitted from the review. The omission of studies affected 39 of 143 possible meta-analyses, of which 14 were no longer possible because of insufficient studies. The omitted studies generally resulted in less precise pooled estimates (i.e. wider confidence intervals) that did not differ in direction from the original estimate. The three case studies demonstrated the risk of missing relevant literature and its impact on summary estimates when methodological shortcuts are applied in rapid reviews. © 2016 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd.

  15. Complex Network Theory Applied to the Growth of Kuala Lumpur's Public Urban Rail Transit Network.

    Science.gov (United States)

    Ding, Rui; Ujang, Norsidah; Hamid, Hussain Bin; Wu, Jianjun

    2015-01-01

    Recently, the number of studies involving complex network applications in transportation has increased steadily as scholars from various fields analyze traffic networks. Nonetheless, research on rail network growth is relatively rare. This research examines the evolution of the Public Urban Rail Transit Networks of Kuala Lumpur (PURTNoKL) based on complex network theory and covers both the topological structure of the rail system and future trends in network growth. In addition, network performance when facing different attack strategies is also assessed. Three topological network characteristics are considered: connections, clustering and centrality. In PURTNoKL, we found that the total number of nodes and edges exhibit a linear relationship and that the average degree stays within the interval [2.0488, 2.6774] with heavy-tailed distributions. The evolutionary process shows that the cumulative probability distribution (CPD) of degree and the average shortest path length show good fit with exponential distribution and normal distribution, respectively. Moreover, PURTNoKL exhibits clear cluster characteristics; most of the nodes have a 2-core value, and the CPDs of the centrality's closeness and betweenness follow a normal distribution function and an exponential distribution, respectively. Finally, we discuss four different types of network growth styles and the line extension process, which reveal that the rail network's growth is likely based on the nodes with the biggest lengths of the shortest path and that network protection should emphasize those nodes with the largest degrees and the highest betweenness values. This research may enhance the networkability of the rail system and better shape the future growth of public rail networks.
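
    Two of the topological measures mentioned in this abstract, average degree and average shortest path length, reduce to short graph computations. The toy station graph below is illustrative, not the actual Kuala Lumpur topology.

```python
from collections import deque

# A small rail-like network: a trunk line a-b-c-d-e with a branch c-f-g.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("c", "f"), ("f", "g")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

degree = {n: len(nb) for n, nb in adj.items()}
avg_degree = sum(degree.values()) / len(degree)   # equals 2*E/N

def shortest_paths(src):
    """Hop distances from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

pairs = [(u, v) for u in adj for v in adj if u < v]
avg_spl = sum(shortest_paths(u)[v] for u, v in pairs) / len(pairs)
```

The interchange station "c" has the highest degree, which is exactly the kind of node the abstract flags as a protection priority.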

  16. Selection methodology for LWR safety programs and proposals. Volume 2. Methodology application

    International Nuclear Information System (INIS)

    Ritzman, R.L.; Husseiny, A.A.

    1980-08-01

    The results of work done to update and apply a methodology for selecting (prioritizing) LWR safety technology R and D programs are described. The methodology is based on multiattribute utility (MAU) theory. Application of the methodology to rank-order a group of specific R and D programs included development of a complete set of attribute utility functions, specification of individual attribute scaling constants, and refinement and use of an interactive computer program (MAUP) to process decision-maker inputs and generate overall (multiattribute) program utility values. The output results from several decision-makers are examined for consistency, and conclusions and recommendations regarding general use of the methodology are presented. 3 figures, 18 tables.

  17. Creating, generating and comparing random network models with NetworkRandomizer.

    Science.gov (United States)

    Tosadori, Gabriele; Bestvina, Ivan; Spoto, Fausto; Laudanna, Carlo; Scardoni, Giovanni

    2016-01-01

    Biological networks are becoming a fundamental tool for the investigation of high-throughput data in several fields of biology and biotechnology. With the increasing amount of information, network-based models are gaining more and more interest and new techniques are required in order to mine the information and to validate the results. To fill the validation gap we present an app, for the Cytoscape platform, which aims at creating randomised networks and randomising existing, real networks. Since there is a lack of tools that allow performing such operations, our app aims at enabling researchers to exploit different, well known random network models that could be used as a benchmark for validating real, biological datasets. We also propose a novel methodology for creating random weighted networks, i.e. the multiplication algorithm, starting from real, quantitative data. Finally, the app provides a statistical tool that compares real versus randomly computed attributes, in order to validate the numerical findings. In summary, our app aims at creating a standardised methodology for the validation of the results in the context of the Cytoscape platform.
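
    A common randomisation scheme such an app can apply (sketched here from first principles; this is not necessarily NetworkRandomizer's exact algorithm) is the degree-preserving edge swap: it shuffles the wiring of a real network while keeping every node's degree, yielding a null model for comparison.

```python
import random
random.seed(42)

def degree_preserving_swap(edges, n_swaps=200):
    """Rewire an undirected edge list while keeping every node's degree."""
    edges = [tuple(e) for e in edges]

    def present(e, es):
        return e in es or (e[1], e[0]) in es

    for _ in range(n_swaps):
        i, j = random.sample(range(len(edges)), 2)
        a, b = edges[i]
        c, d = edges[j]
        if len({a, b, c, d}) < 4:
            continue                          # swap would create a self-loop
        new1, new2 = (a, d), (c, b)
        if present(new1, edges) or present(new2, edges):
            continue                          # swap would create a multi-edge
        edges[i], edges[j] = new1, new2
    return edges

def degrees(es):
    d = {}
    for u, v in es:
        d[u] = d.get(u, 0) + 1
        d[v] = d.get(v, 0) + 1
    return d

real = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 2), (1, 3)]
randomised = degree_preserving_swap(real)
```

Metrics computed on the real network can then be compared against the same metrics on many such randomised copies.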

  18. Network analysis as a tool for assessing environmental sustainability: applying the ecosystem perspective to a Danish water management system

    DEFF Research Database (Denmark)

    Pizzol, Massimo; Scotti, Marco; Thomsen, Marianne

    2013-01-01

    New insights into the sustainable use of natural resources in human systems can be gained through comparison with ecosystems via common indices. In both kinds of system, resources are processed by a number of users within a network, but we consider ecosystems as the only ones displaying sustainable patterns of growth and development. We applied Network Analysis (NA) for assessing the sustainability of a Danish municipal Water Management System (WMS). We identified water users within the WMS and represented their interactions as a network of water flows. We computed intensive and extensive indices…

  19. Interpreting social network metrics in healthcare organisations: a review and guide to validating small networks.

    Science.gov (United States)

    Dunn, Adam G; Westbrook, Johanna I

    2011-04-01

    Social network analysis is an increasingly popular sociological method used to describe and understand the social aspects of communication patterns in the health care sector. The networks studied in this area are special because they are small, and for these sizes, the metrics calculated during analysis are sensitive to the number of people in the network and the density of observed communication. Validation is of particular value in controlling for these factors and in assisting in the accurate interpretation of network findings, yet such approaches are rarely applied. Our aim in this paper was to bring together published case studies to demonstrate how a proposed validation technique provides a basis for standardised comparison of networks within and across studies. A validation is performed for three network studies comprising ten networks, where the results are compared within and across the studies in relation to a standard baseline. The results confirm that hierarchy, centralisation and clustering metrics are highly sensitive to changes in size or density. Amongst the three case studies, we found support for some conclusions and contrary evidence for others. This validation approach is a tool for identifying additional features and verifying the conclusions reached in observational studies of small networks. We provide a methodological basis from which to perform intra-study and inter-study comparisons, for the purpose of introducing greater rigour to the use of social network analysis in health care applications. Copyright © 2011 Elsevier Ltd. All rights reserved.
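
    The validation idea can be sketched as follows: instead of reading a raw metric in isolation, compare it against a baseline of random networks of the same size and density. The observed network and sample count below are invented for illustration; the metric is Freeman degree centralisation.

```python
import random
random.seed(7)

def centralisation(adj_matrix):
    """Freeman degree centralisation of an undirected adjacency matrix."""
    n = len(adj_matrix)
    deg = [sum(row) for row in adj_matrix]
    dmax = max(deg)
    return sum(dmax - d for d in deg) / ((n - 1) * (n - 2))

def random_graph(n, p):
    """Erdos-Renyi graph with the same size and edge probability."""
    m = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                m[i][j] = m[j][i] = 1
    return m

# Observed: a 6-person star-like communication network (illustrative).
obs = [[0, 1, 1, 1, 1, 1],
       [1, 0, 0, 0, 0, 0],
       [1, 0, 0, 0, 0, 0],
       [1, 0, 0, 0, 0, 0],
       [1, 0, 0, 0, 0, 0],
       [1, 0, 0, 0, 0, 0]]
n = 6
density = sum(sum(r) for r in obs) / (n * (n - 1))
baseline = [centralisation(random_graph(n, density)) for _ in range(500)]
mean_baseline = sum(baseline) / len(baseline)
observed = centralisation(obs)
```

Reporting the observed value against the baseline distribution, rather than alone, is precisely the standardised comparison the review argues for in small networks.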

  20. Implementing the flipped classroom methodology to the subject "Applied computing" of the chemical engineering degree at the University of Barcelona

    Directory of Open Access Journals (Sweden)

    Montserrat Iborra

    2017-06-01

    Full Text Available This work focuses on the implementation, development, documentation, analysis and assessment of the flipped classroom methodology, by means of a just-in-time teaching strategy, in a pilot group (1 of 6) of the subject “Applied Computing” of the Chemical Engineering Undergraduate Degree at the University of Barcelona. The results show that this technique promotes self-learning, autonomy and time management, as well as an increase in the effectiveness of classroom hours.

  1. Applying a learning design methodology in the flipped classroom approach – empowering teachers to reflect

    DEFF Research Database (Denmark)

    Triantafyllou, Evangelia; Kofoed, Lise; Purwins, Hendrik

    2016-01-01

    One of the recent developments in teaching that heavily relies on current technology is the “flipped classroom” approach. In a flipped classroom the traditional lecture and homework sessions are inverted. Students are provided with online material in order to gain necessary knowledge before class, while class time is devoted to clarifications and application of this knowledge. The hypothesis is that there could be deep and creative discussions when teacher and students physically meet. This paper discusses how the learning design methodology can be applied to represent, share and guide educators … and values of different stakeholders (i.e. institutions, educators, learners, and external agents), which influence the design and success of flipped classrooms. Moreover, it looks at the teaching cycle from a flipped instruction model perspective and adjusts it to cater for the reflection loops educators…

  2. Fair Optimization and Networks: A Survey

    Directory of Open Access Journals (Sweden)

    Wlodzimierz Ogryczak

    2014-01-01

    Full Text Available Optimization models related to designing and operating complex systems are mainly focused on efficiency metrics such as response time, queue length, throughput, and cost. However, in systems which serve many entities there is also a need to respect fairness: each system entity ought to be provided with an adequate share of the system's services. Still, due to system operations-dependent constraints, fair treatment of the entities does not directly imply that each of them is assigned an equal amount of the services. That leads to concepts of fair optimization expressed by equitable models that represent inequality-averse optimization rather than strict inequality minimization; a particular, widely applied example of that concept is the so-called lexicographic maximin optimization (max-min fairness). The fair optimization methodology delivers a variety of techniques to generate fair and efficient solutions. This paper reviews fair optimization models and methods applied to systems that are based on some kind of network of connections and dependencies, especially fair optimization methods for location problems and for resource allocation problems in communication networks.
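
    The lexicographic maximin (max-min fairness) concept mentioned above has a classic constructive form, progressive filling: all allocations grow together, and once a small demand is satisfied its slack is shared among the rest. A minimal sketch for a single shared capacity:

```python
def max_min_fair(capacity, demands):
    """Progressive filling: satisfy small demands, split the rest equally."""
    alloc = [0.0] * len(demands)
    remaining = sorted(range(len(demands)), key=lambda i: demands[i])
    left = capacity
    while remaining:
        share = left / len(remaining)
        i = remaining[0]                 # smallest unsatisfied demand
        if demands[i] <= share:
            alloc[i] = demands[i]        # fully satisfied; slack is shared
            left -= demands[i]
            remaining.pop(0)
        else:
            for j in remaining:          # capacity reached: equal split
                alloc[j] = share
            remaining = []
    return alloc

allocation = max_min_fair(10.0, [2.0, 2.6, 4.0, 5.0])
```

With capacity 10, the two small demands are met exactly and the two large ones split the remainder equally, giving [2.0, 2.6, 2.7, 2.7]: no allocation can be raised without lowering an equal or smaller one.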

  3. Complex Network Theory Applied to the Growth of Kuala Lumpur's Public Urban Rail Transit Network.

    Directory of Open Access Journals (Sweden)

    Rui Ding

    Full Text Available Recently, the number of studies involving complex network applications in transportation has increased steadily as scholars from various fields analyze traffic networks. Nonetheless, research on rail network growth is relatively rare. This research examines the evolution of the Public Urban Rail Transit Networks of Kuala Lumpur (PURTNoKL) based on complex network theory and covers both the topological structure of the rail system and future trends in network growth. In addition, network performance when facing different attack strategies is also assessed. Three topological network characteristics are considered: connections, clustering and centrality. In PURTNoKL, we found that the total number of nodes and edges exhibit a linear relationship and that the average degree stays within the interval [2.0488, 2.6774] with heavy-tailed distributions. The evolutionary process shows that the cumulative probability distribution (CPD) of degree and the average shortest path length show good fit with exponential distribution and normal distribution, respectively. Moreover, PURTNoKL exhibits clear cluster characteristics; most of the nodes have a 2-core value, and the CPDs of the centrality's closeness and betweenness follow a normal distribution function and an exponential distribution, respectively. Finally, we discuss four different types of network growth styles and the line extension process, which reveal that the rail network's growth is likely based on the nodes with the biggest lengths of the shortest path and that network protection should emphasize those nodes with the largest degrees and the highest betweenness values. This research may enhance the networkability of the rail system and better shape the future growth of public rail networks.

  4. Networking - concepts and dimensions of the cross-cultural comparative study

    DEFF Research Database (Denmark)

    Rasmussen, Lauge Baungaard

    2000-01-01

    The literature about networking is weak regarding precise definitions of networking and methodologies for analysing how networking is carried out in practice. Moreover, the cross-cultural dimension is often underestimated.

  5. Study of co-authorship in the nets and the interdisciplinarity in the scientific production on the basis of social network analysis methods: evaluation of the posgraduation program in information science - PPGCI / UFMG

    Directory of Open Access Journals (Sweden)

    Antonio Braz de Oliveira e Silva

    2006-07-01

    Full Text Available This paper discusses Social Network Analysis (SNA) as a method to be broadly applied in research in the Information Science (IS) field. This science is normally presented as an interdisciplinary field, but the research lines conducted in Brazil have different relationships with other disciplines, and thus dissimilar interdisciplinary characteristics. The analysis of the co-authorship network of the professors of the PPGCI/UFMG emphasizes both the strength of the methodology and the characteristics of collaboration in IS. The article gives an overview of the theoretical basis of SNA and presents studies on subjects related to the Information Science field that apply SNA, mainly co-authorship network analysis. Finally, the methodological approach of this research and the main results are presented.

  6. A new methodology for establishing a system for cross-border transmission tariffication in the internal electricity market

    International Nuclear Information System (INIS)

    Glavitsch, H.; Andersson, G.

    2001-01-01

    Several organisations are working on a scheme for cross-border tariffication, as the so-called Florence forum indicates. So far, a provisional concept created by ETSO (European Transmission System Operators) has evolved which is oriented towards covering costs but is not fully cost-reflective and does not produce economic signals for the market players. In the present project, a flow-oriented model and a corresponding methodology have been developed which derive compensations within super nodes standing for the aggregated networks of the countries along transit and domestic paths. Specific fees are derived from overall network costs but may be applied in a flexible way to represent the realistic usage of the horizontal network for transits and domestic supply. Charging of costs can be oriented towards consumers or generators; a combination of cost shares originally determined for consumers and generators is also possible. In this way the model is flexible enough to fulfil the requirements of regulators, operators and the European Commission. Measured flow data of the UCTE network have been used to check the concept in various directions, i.e. based on different parameters such as uniform and individual postage stamps, compensations for transits only, and more elaborate networks of super nodes. The concept is also able to cope with circular flows within the real UCTE network. The methodology is suited for application in a decentralised fashion, as each transmission system operator needs to communicate only with its neighbouring operators, i.e. there is no need for a centralised clearing office. (author)

  7. Data-Driven Design of Intelligent Wireless Networks: An Overview and Tutorial

    Directory of Open Access Journals (Sweden)

    Merima Kulin

    2016-06-01

    Full Text Available Data science or “data-driven research” is a research approach that uses real-life data to gain insight about the behavior of systems. It enables the analysis of small and simple as well as large and more complex systems, in order to assess whether they function according to the intended design and as seen in simulation. Data science approaches have been successfully applied to analyze networked interactions in several research areas such as large-scale social networks, advanced business and healthcare processes. Wireless networks can exhibit unpredictable interactions between algorithms from multiple protocol layers, interactions between multiple devices, and hardware-specific influences. These interactions can lead to a difference between real-world functioning and design-time functioning. Data science methods can help to detect the actual behavior and possibly help to correct it. Data science is increasingly used in wireless research. To support data-driven research in wireless networks, this paper illustrates the step-by-step methodology that has to be applied to extract knowledge from raw data traces. To this end, the paper (i) clarifies when, why and how to use data science in wireless network research; (ii) provides a generic framework for applying data science in wireless networks; (iii) gives an overview of existing research papers that utilized data science approaches in wireless networks; (iv) illustrates the overall knowledge discovery process through an extensive example in which device types are identified based on their traffic patterns; and (v) provides the reader with the necessary datasets and scripts to go through the tutorial steps themselves.
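
    The tutorial's running example, identifying device types from traffic patterns, can be miniaturised as a nearest-centroid classifier over simple per-device features. The feature values below (mean packet size in bytes, mean inter-arrival time in seconds) are invented placeholders, not the paper's datasets.

```python
def centroid(rows):
    """Per-feature mean of a list of feature tuples."""
    n = len(rows)
    return tuple(sum(r[k] for r in rows) / n for k in range(len(rows[0])))

training = {
    "sensor": [(64, 5.0), (70, 4.5), (60, 5.5)],       # small, sparse packets
    "camera": [(1200, 0.02), (1400, 0.03), (1300, 0.02)],  # large, dense
}
centroids = {label: centroid(rows) for label, rows in training.items()}

def classify(sample):
    """Assign the label whose centroid is nearest in feature space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(sample, centroids[lbl]))

prediction = classify((1250, 0.025))
```

A real pipeline would add the earlier steps the paper walks through (trace parsing, feature scaling, model validation), but the knowledge-discovery core is this mapping from traffic features to a device label.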

  8. Development of design methodology for communication network in nuclear power plants

    International Nuclear Information System (INIS)

    Kim, Dong Hoon; Seong, Seung Hwan; Jang Gwi Sook; Koo, In Soo; Lee Soon Sung.

    1996-06-01

    This report describes the design methodology for the communication network (CN) in nuclear power plants (NPPs). The construction procedure for the NPP CN consists of four phases. In the study and review phase, design concepts and goals are established through technical review, collection of background information and a feasibility study. In the design phase, all design activities such as extraction of requirements, communication modelling, and overall and detailed architecture design are performed. The implementation and test phase includes the manufacturing, installation and testing of hardware and software. In the operation phase, CN construction is finalized through evaluation and correction during operation. The requirements of the CN consist of general requirements concerning function, structure, reliability and standardization, and detailed requirements related to protocol, media, error handling, performance, etc. The CN design should also follow, and be verified against, safety-related requirements such as isolation, redundancy and reliability. For the selection of each technical element from commercial CNs, evaluation and selection criteria are extracted from reliability, performance and operating factors, and a required level, classified as essential, primary, preference or recommendation, is assigned to each element. This report will be used as a technical reference for CN implementation in NPPs. (author). 3 tabs., 5 figs., 25 refs.

  9. Heroin assisted treatment and research networks

    DEFF Research Database (Denmark)

    Houborg, Esben; Munksgaard, Rasmus

    2015-01-01

    Purpose – The purpose of this paper is to map research communities related to heroin-assisted treatment (HAT) and the scientific network they are part of, to determine their structure and content. Design/methodology/approach – Co-authorship was used as the basis for conducting social network analysis. In total, 11 research communities were constructed with different scientific content. HAT research communities are closely connected to medical, psychiatric, and epidemiological research and very loosely connected to social research. Originality/value – The first mapping of the collaborative network of HAT researchers using social network methodology.

  10. Application of Electre III and DEA methods in the BPR of a bank branch network

    Directory of Open Access Journals (Sweden)

    Damaskos Xenofon

    2005-01-01

    Full Text Available Operational research methodologies are powerful tools assisting managers in their efforts to critically review business data and decide on future business actions. This paper presents the application of the Electre multi-criteria methodology and Data Envelopment Analysis as part of a small commercial bank's ongoing effort to reengineer its branch network. We focus on two particular problems: first, the categorization of branches so as to apply appropriate organizational schemas; and second, the assessment of the relative efficiency of human resources.

  11. A literature review of applied adaptive design methodology within the field of oncology in randomised controlled trials and a proposed extension to the CONSORT guidelines.

    Science.gov (United States)

    Mistry, Pankaj; Dunn, Janet A; Marshall, Andrea

    2017-07-18

    The application of adaptive design methodology within a clinical trial setting is becoming increasingly popular. However, the application of these methods within trials is often not reported as an adaptive design, making it more difficult to capture the emerging use of these designs. Within this review, we aim to understand how adaptive design methodology is being reported, whether these methods are explicitly stated as an 'adaptive design' or whether this has to be inferred, and to identify whether these methods are applied prospectively or concurrently. Three databases (Embase, Ovid and PubMed) were chosen to conduct the literature search. The inclusion criteria for the review were phase II, phase III and phase II/III randomised controlled trials within the field of oncology that published trial results in 2015. A variety of search terms related to adaptive designs were used. A total of 734 results were identified, of which 54 were eligible after screening. Adaptive designs were more commonly applied in phase III confirmatory trials. The majority of the papers performed an interim analysis, which included some form of stopping criteria. Only two papers explicitly stated the term 'adaptive design'; for the remaining papers, the use of adaptive methods had to be inferred. Sixty-five applications of adaptive design methods were identified, of which the most common was adaptation using group sequential methods. This review indicates that the reporting of adaptive design methodology within clinical trials needs improving. The proposed extension to the current CONSORT 2010 guidelines could help capture adaptive design methods and, furthermore, provide an essential aid to those involved with clinical trials.
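
    The most common adaptation found by the review, group sequential methods, can be sketched as an interim look against a stopping boundary. The toy outcome data below are deterministic and the known-unit-variance z test is a simplifying assumption; 2.178 is the standard two-look Pocock critical value for a two-sided alpha of 0.05.

```python
import math

def z_statistic(treat, control):
    """Two-sample z statistic assuming known unit variance per arm."""
    n = len(treat)
    diff = sum(treat) / n - sum(control) / n
    return diff / math.sqrt(2 / n)

boundary = 2.178                        # two-look Pocock boundary, alpha = 0.05
treat = [0.6, 1.4, 0.8, 1.2] * 25       # deterministic toy outcomes, mean 1.0
control = [-0.4, 0.4, -0.2, 0.2] * 25   # mean 0.0

# Interim look after half the patients: stop early if the boundary is crossed.
z_interim = z_statistic(treat[:50], control[:50])
stopped_early = abs(z_interim) > boundary
z_final = z_statistic(treat, control)
```

The design's appeal, and the reporting problem the review highlights, is exactly this interim decision: a trial may stop at the first look, and that possibility must be declared and accounted for in the analysis.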

  12. Networks, complexity and internet regulation scale-free law

    OpenAIRE

    Guadamuz, Andres

    2013-01-01

    This book, then, starts with a general statement: that regulators should try, wherever possible, to use the physical methodological tools presently available in order to draft better legislation. While such an assertion may be applied to the law in general, this work concentrates on the much narrower area of Internet regulation and the science of complex networks. The Internet is the subject of this book not only because it is my main area of research, but also because – without…

  13. FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting.

    Science.gov (United States)

    Alomar, Miquel L; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L

    2016-01-01

    Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
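
    The reservoir idea behind an echo state network can be shown in miniature: a fixed random recurrent layer whose state is a nonlinear echo of the input history (only a linear readout would be trained, and the hardware paper maps this structure to stochastic logic). The sizes and the 0.9 contraction factor below are illustrative choices; scaling the weight rows so the matrix norm stays below one guarantees the fading-memory (echo state) property.

```python
import math, random
random.seed(3)

N = 20                                   # reservoir size
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
W = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
# Scale so the maximum absolute row sum (infinity norm) is 0.9 < 1.
scale = 0.9 / max(sum(abs(w) for w in row) for row in W)
W = [[w * scale for w in row] for row in W]

def step(state, u):
    """One reservoir update: tanh of input drive plus recurrent drive."""
    return [math.tanh(W_in[i] * u + sum(W[i][j] * state[j] for j in range(N)))
            for i in range(N)]

inputs = [math.sin(0.2 * t) for t in range(100)]
s1, s2 = [0.0] * N, [1.0] * N            # two different initial states
for u in inputs:
    s1, s2 = step(s1, u), step(s2, u)
# Echo state property: the states converge regardless of initialisation.
gap = max(abs(a - b) for a, b in zip(s1, s2))
```

Because tanh is 1-Lipschitz and the recurrent matrix is a contraction, the gap shrinks by at least a factor 0.9 per step, so after 100 inputs the two trajectories are practically identical.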

  14. APPLYING ARTIFICIAL NEURAL NETWORK OPTIMIZED BY FIREWORKS ALGORITHM FOR STOCK PRICE ESTIMATION

    Directory of Open Access Journals (Sweden)

    Khuat Thanh Tung

    2016-04-01

    Full Text Available Stock prediction aims to determine the future value of a company's stock traded on an exchange. It plays a crucial role in raising the profit gained by firms and investors. Over the past few years, many methods have been developed, with plenty of effort focused on machine learning frameworks achieving promising results. In this paper, an approach based on an Artificial Neural Network (ANN) optimized by the Fireworks algorithm, with data preprocessing by the Haar wavelet, is applied to estimate stock prices. The system was trained and tested with real data of various companies collected from Yahoo Finance. The obtained results are encouraging.
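The Haar wavelet preprocessing step referred to above splits a price series into a coarse trend and fine fluctuations. A minimal sketch of a one-level Haar transform with simple threshold denoising, assuming a plain Python implementation rather than any particular wavelet library:

```python
def haar_dwt(signal):
    """One level of the Haar transform: pairwise sums and differences,
    scaled by 1/sqrt(2), give approximation and detail coefficients."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def haar_denoise(signal, threshold):
    """Zero out small detail coefficients, then invert the transform."""
    s = 2 ** 0.5
    approx, detail = haar_dwt(signal)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

prices = [4.0, 2.0, 5.0, 5.0, 3.0, 1.0]
approx, detail = haar_dwt(prices)
print([round(v, 3) for v in approx])  # coarse trend: [4.243, 7.071, 2.828]
```

With threshold 0 the transform inverts exactly; a positive threshold suppresses small, noise-like fluctuations before the series is fed to the network.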

  15. A neural network approach to local downscaling of GCM output for assessing wind power implications of climate change

    International Nuclear Information System (INIS)

    Sailor, D.J.; Hu, T.; Li, X.; Rosen, J.N.

    2000-01-01

    A methodology is presented for downscaling General Circulation Model (GCM) output to predict surface wind speeds at scales of interest in the wind power industry under expected future climatic conditions. The approach involves a combination of Neural Network tools and traditional weather forecasting techniques. A Neural Network transfer function is developed to relate local wind speed observations to large scale GCM predictions of atmospheric properties under current climatic conditions. By assuming the invariability of this transfer function under conditions of doubled atmospheric carbon dioxide, the resulting transfer function is then applied to GCM output for a transient run of the National Center for Atmospheric Research coupled ocean-atmosphere GCM. This methodology is applied to three test sites in regions relevant to the wind power industry - one in Texas and two in California. Changes in daily mean wind speeds at each location are presented and discussed with respect to potential implications for wind power generation. (author)
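A transfer function of this kind can be sketched as a small feed-forward network regressing local wind speed on large-scale predictors. The sketch below uses synthetic data and a hand-rolled one-hidden-layer network in NumPy; the predictors, network size and training scheme are illustrative assumptions, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for large-scale predictors and a local wind response;
# the real study regresses observed wind speeds on GCM fields.
X = rng.uniform(-1, 1, size=(500, 3))
y = 5.0 + 2.0 * np.tanh(X[:, 0] + 0.5 * X[:, 1]) + 0.1 * rng.normal(size=500)

n_hidden = 8
W1 = rng.normal(scale=0.5, size=(3, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=n_hidden)
b2 = 0.0

losses = []
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)       # hidden activations, shape (500, 8)
    pred = h @ W2 + b2             # predicted local wind speed, (500,)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate mean-squared-error gradients.
    gW2 = h.T @ err / len(y)
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1.0 - h ** 2)
    gW1 = X.T @ dh / len(y)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print(f"training MSE: {losses[0]:.2f} -> {losses[-1]:.4f}")
```

Once fitted on present-day observations, the same mapping would be applied unchanged to the doubled-CO2 GCM output, which is exactly the invariance assumption the abstract describes.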

  16. ILUC mitigation case studies Tanzania. Applying the Low Indirect Impact Biofuel (LIIB) Methodology to Tanzanian projects

    Energy Technology Data Exchange (ETDEWEB)

    Van de Staaij, J.; Spoettle, M.; Weddige, U.; Toop, G. [Ecofys, Utrecht (Netherlands)

    2012-10-15

    NL Agency is supporting WWF and the Secretariat of the Roundtable on Sustainable Biofuels (RSB) with the development of a certification module for biofuels with a low risk of indirect land use change (ILUC), the Low Indirect Impact Biofuel (LIIB) methodology (www.LIIBmethodology.org). The LIIB methodology was developed to certify that biomass feedstock for biofuels has been produced with a low risk of indirect impacts. It is designed as an independent module that can be added to biofuel policies and existing certification systems for sustainable biofuel and/or feedstock production, such as the RSB Standard, RSPO or NTA8080. It presents detailed ILUC mitigation approaches for four different solution types, field-tested and audited in international pilots. Within the Global Sustainable Biomass programme and the Sustainable Biomass Import programme, coordinated by NL Agency, three projects are working on sustainable jatropha in Tanzania. Ecofys has been commissioned by NL Agency to contribute to the further development of the LIIB methodology by applying it to these three jatropha projects in Tanzania. All three projects, located in the north of Tanzania, address sustainability in one way or another, but focus on the direct effects of jatropha cultivation and use. Interestingly, they nevertheless seem to apply different methods that could also minimise negative indirect impacts, including ILUC. Bioenergy feedstock production can have unintended consequences well outside the boundary of production operations. These are indirect impacts, which cannot be directly attributed to a particular operation. The most cited indirect impacts are ILUC and food/feed commodity price increases (an indirect impact on food security). ILUC can occur when existing cropland is used to cover the feedstock demand of additional biofuel production. When this displaces the previous use of the land (e.g. food production), this can lead to expansion of land use to new areas (e.g. deforestation) when

  17. Application of network methods for understanding evolutionary dynamics in discrete habitats.

    Science.gov (United States)

    Greenbaum, Gili; Fefferman, Nina H

    2017-06-01

    In populations occupying discrete habitat patches, gene flow between habitat patches may form an intricate population structure. In such structures, the evolutionary dynamics resulting from interaction of gene-flow patterns with other evolutionary forces may be exceedingly complex. Several models describing gene flow between discrete habitat patches have been presented in the population-genetics literature; however, these models have usually addressed relatively simple settings of habitable patches and have stopped short of providing general methodologies for addressing nontrivial gene-flow patterns. In the last decades, network theory - a branch of discrete mathematics concerned with complex interactions between discrete elements - has been applied to address several problems in population genetics by modelling gene flow between habitat patches using networks. Here, we present the idea and concepts of modelling complex gene flows in discrete habitats using networks. Our goal is to raise awareness to existing network theory applications in molecular ecology studies, as well as to outline the current and potential contribution of network methods to the understanding of evolutionary dynamics in discrete habitats. We review the main branches of network theory that have been, or that we believe potentially could be, applied to population genetics and molecular ecology research. We address applications to theoretical modelling and to empirical population-genetic studies, and we highlight future directions for extending the integration of network science with molecular ecology. © 2017 John Wiley & Sons Ltd.
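As an illustration of the network view of discrete habitats, the sketch below builds a hypothetical patch graph and flags "bottleneck" patches (cut vertices) whose loss would fragment gene flow; the patches and links are invented for the example:

```python
from collections import deque

# Hypothetical habitat patches with gene-flow links (undirected).
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("C", "E"), ("E", "F")]

def neighbors(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def components(adj):
    """Connected components by breadth-first search."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            u = queue.popleft()
            if u in comp:
                continue
            comp.add(u)
            queue.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

adj = neighbors(edges)
# A patch is a bottleneck if removing it fragments the network --
# a candidate barrier to gene flow between groups of patches.
bottlenecks = []
for node in adj:
    rest = {u: vs - {node} for u, vs in adj.items() if u != node}
    if len(components(rest)) > 1:
        bottlenecks.append(node)
print(sorted(bottlenecks))
```

In practice such patch graphs are weighted by migration rates estimated from genetic data, and richer network statistics (centrality, modularity) take the place of this brute-force cut-vertex test.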

  18. Dynamic Event Trees applied to Full Spectrum LOCA sequences: calculating the damage exceedance frequency with the Integrated Safety Analysis methodology

    International Nuclear Information System (INIS)

    Gomez-Magan, J. J.; Fernandez, I.; Gil, J.; Marrao, H.; Queral, C.; Gonzalez-Cadelo, J.; Montero-Mayorga, J.; Rivas, J.; Ibane-Llano, C.; Izquierdo, J. M.; Sanchez-Perea, M.; Melendez, E.; Hortal, J.

    2013-01-01

    The Integrated Safety Analysis (ISA) methodology, developed by the Spanish Nuclear Safety Council (CSN), has been applied to obtain the Dynamic Event Trees (DETs) for full spectrum Loss of Coolant Accidents (LOCAs) of a Westinghouse 3-loop PWR plant. The purpose of this ISA application is to obtain the Damage Exceedance Frequency (DEF) for the LOCA Event Tree by taking into account the uncertainties in the break area and in the operator actuation time needed to cool down and depressurize the reactor coolant system by means of the steam generators. Simulations are performed with SCAIS, a software tool which includes a dynamic coupling with the MAAP thermal hydraulic code. The results show the capability of the ISA methodology to obtain the DEF taking into account the time uncertainty in human actions. (Author)

  19. Correlations in the degeneracy of structurally controllable topologies for networks

    Science.gov (United States)

    Campbell, Colin; Aucott, Steven; Ruths, Justin; Ruths, Derek; Shea, Katriona; Albert, Réka

    2017-04-01

    Many dynamic systems display complex emergent phenomena. By directly controlling a subset of system components (nodes) via external intervention it is possible to indirectly control every other component in the system. When the system is linear or can be approximated sufficiently well by a linear model, methods exist to identify the number and connectivity of a minimum set of external inputs (constituting a so-called minimal control topology, or MCT). In general, many MCTs exist for a given network; here we characterize a broad ensemble of empirical networks in terms of the fraction of nodes and edges that are always, sometimes, or never a part of an MCT. We study the relationships between the measures, and apply the methodology to the T-LGL leukemia signaling network as a case study. We show that the properties introduced in this report can be used to predict key components of biological networks, with potentially broad applications to network medicine.
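For linear dynamics on a directed network, a classical construction (due to Liu, Slotine and Barabási, and closely related to the MCTs discussed here) obtains the minimum number of external inputs from a maximum matching of the network: unmatched nodes must be driven directly. A small pure-Python sketch with illustrative example graphs:

```python
def max_bipartite_matching(edges, nodes):
    """Maximum matching on the bipartite graph whose left/right copies of
    each node represent the outgoing/incoming ends of directed edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    match_right = {}  # right-side node -> matched left-side node

    def augment(u, visited):
        for v in adj.get(u, []):
            if v in visited:
                continue
            visited.add(v)
            if v not in match_right or augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    for u in nodes:
        augment(u, set())
    return match_right

def n_driver_nodes(edges, nodes):
    """Minimum number of external inputs for structural controllability
    of a linear system on this digraph (maximum-matching construction)."""
    matching = max_bipartite_matching(edges, nodes)
    return max(len(nodes) - len(matching), 1)

# Directed path: one input at the head suffices.
print(n_driver_nodes([(1, 2), (2, 3), (3, 4)], [1, 2, 3, 4]))
# Star from a hub: all but one leaf need their own input.
print(n_driver_nodes([(0, i) for i in range(1, 5)], [0, 1, 2, 3, 4]))
```

The degeneracy studied in the abstract arises because many distinct maximum matchings, and hence many MCTs, generally exist for the same graph; this sketch returns only one of them.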

  20. Application of validity theory and methodology to patient-reported outcome measures (PROMs): building an argument for validity.

    Science.gov (United States)

    Hawkins, Melanie; Elsworth, Gerald R; Osborne, Richard H

    2018-07-01

    Data from subjective patient-reported outcome measures (PROMs) are now being used in the health sector to make or support decisions about individuals, groups and populations. Contemporary validity theorists define validity not as a statistical property of the test but as the extent to which empirical evidence supports the interpretation of test scores for an intended use. However, validity testing theory and methodology are rarely evident in the PROM validation literature. Application of this theory and methodology would provide structure for comprehensive validation planning to support improved PROM development and sound arguments for the validity of PROM score interpretation and use in each new context. This paper proposes the application of contemporary validity theory and methodology to PROM validity testing. The validity testing principles will be applied to a hypothetical case study with a focus on the interpretation and use of scores from a translated PROM that measures health literacy (the Health Literacy Questionnaire or HLQ). Although robust psychometric properties of a PROM are a pre-condition to its use, a PROM's validity lies in the sound argument that a network of empirical evidence supports the intended interpretation and use of PROM scores for decision making in a particular context. The health sector is yet to apply contemporary theory and methodology to PROM development and validation. The theoretical and methodological processes in this paper are offered as an advancement of the theory and practice of PROM validity testing in the health sector.

  1. A spectral approach for the quantitative description of cardiac collagen network from nonlinear optical imaging.

    Science.gov (United States)

    Masè, Michela; Cristoforetti, Alessandro; Avogaro, Laura; Tessarolo, Francesco; Piccoli, Federico; Caola, Iole; Pederzolli, Carlo; Graffigna, Angelo; Ravelli, Flavia

    2015-01-01

    The assessment of collagen structure in cardiac pathology, such as atrial fibrillation (AF), is essential for a complete understanding of the disease. This paper introduces a novel methodology for the quantitative description of collagen network properties, based on the combination of nonlinear optical microscopy with a spectral approach of image processing and analysis. Second-harmonic generation (SHG) microscopy was applied to atrial tissue samples from cardiac surgery patients, providing label-free, selective visualization of the collagen structure. The spectral analysis framework, based on 2D-FFT, was applied to the SHG images, yielding a multiparametric description of collagen fiber orientation (angle and anisotropy indexes) and texture scale (dominant wavelength and peak dispersion indexes). The proof-of-concept application of the methodology showed the capability of our approach to detect and quantify differences in the structural properties of the collagen network in AF versus sinus rhythm patients. These results suggest the potential of our approach in the assessment of collagen properties in cardiac pathologies related to a fibrotic structural component.
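The 2D-FFT orientation analysis can be sketched as follows: compute the image power spectrum, then derive a dominant angle and an anisotropy index from the second-moment tensor of spectral power. This is a generic implementation of the idea, not the authors' exact index definitions, and the test images are synthetic:

```python
import numpy as np

def orientation_anisotropy(img):
    """Dominant angle (radians) and anisotropy index in [0, 1], from the
    second-moment tensor of the 2D power spectrum (DC term removed)."""
    F = np.fft.fft2(img - img.mean())
    P = np.abs(F) ** 2
    h, w = img.shape
    ky, kx = np.meshgrid(np.fft.fftfreq(h), np.fft.fftfreq(w), indexing="ij")
    t_xx = np.sum(P * kx * kx)
    t_yy = np.sum(P * ky * ky)
    t_xy = np.sum(P * kx * ky)
    evals, evecs = np.linalg.eigh(np.array([[t_xx, t_xy], [t_xy, t_yy]]))
    anisotropy = (evals[1] - evals[0]) / (evals[1] + evals[0])
    major = evecs[:, 1]  # axis carrying the dominant spectral power
    return float(np.arctan2(major[1], major[0])), float(anisotropy)

y, x = np.mgrid[0:64, 0:64]
stripes = np.sin(2 * np.pi * 8 * x / 64)          # strongly oriented texture
cross = stripes + np.sin(2 * np.pi * 8 * y / 64)  # two orthogonal textures
_, an_stripes = orientation_anisotropy(stripes)
_, an_cross = orientation_anisotropy(cross)
print(round(an_stripes, 3), round(an_cross, 3))
```

An aligned fiber texture drives the index toward 1, while an isotropic one drives it toward 0, which is the kind of contrast used to separate AF from sinus-rhythm tissue.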

  2. Proposition of a modeling and an analysis methodology of integrated reverse logistics chain in the direct chain

    Directory of Open Access Journals (Sweden)

    Faycal Mimouni

    2016-04-01

    Full Text Available Purpose: Propose a modeling and analysis methodology, based on the combination of Bayesian networks and Petri networks, for the reverse logistics chain integrated in the direct supply chain. Design/methodology/approach: Network modeling by combining Petri and Bayesian networks. Findings: Modeling with a Bayesian network complemented by a Petri network to break the cycle problem in the Bayesian network. Research limitations/implications: Demands are independent from returns. Practical implications: The model can only be used for nonperishable products. Social implications: Legislative aspects: recycling laws; protection of the environment; client satisfaction via after-sale service. Originality/value: A Bayesian network with a cycle combined with a Petri network.

  3. Nutrients interaction investigation to improve Monascus purpureus FTC5391 growth rate using Response Surface Methodology and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Mohamad, R.

    2013-01-01

    Full Text Available Aims: Two vital factors, certain environmental conditions and nutrients as a source of energy, are entailed for successful growth and reproduction of microorganisms. Manipulation of nutritional requirements is the simplest and most effectual strategy to stimulate and enhance the activity of microorganisms. Methodology and Results: In this study, response surface methodology (RSM) and an artificial neural network (ANN) were employed to optimize the carbon and nitrogen sources in order to improve the growth rate of Monascus purpureus FTC5391, a new local isolate. The best models for optimization of growth rate were a multilayer full feed-forward incremental back propagation network and a modified response surface model using backward elimination. The optimum condition for cell mass production was: sucrose 2.5%, yeast extract 0.045%, casamino acid 0.275%, sodium nitrate 0.48%, potato starch 0.045%, dextrose 1%, potassium nitrate 0.57%. The experimental cell mass production under this optimal condition was 21 mg/plate/12 days, which was 2.2-fold higher than under the standard condition (sucrose 5%, yeast extract 0.15%, casamino acid 0.25%, sodium nitrate 0.3%, potato starch 0.2%, dextrose 1%, potassium nitrate 0.3%). Conclusion, significance and impact of study: The results of RSM and ANN showed that all carbon and nitrogen sources tested had a significant effect on growth rate (P-value < 0.05). In addition, the use of RSM and ANN alongside each other provided a proper growth prediction model.
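The RSM side of such an optimization typically fits a second-order polynomial to the measured responses and solves for its stationary point. A minimal sketch with an invented two-factor design (not the paper's media-composition data):

```python
import numpy as np

def fit_quadratic_rs(X, y):
    """Fit a second-order response surface
    y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    by least squares and return the coefficients and stationary point."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    b = np.linalg.lstsq(A, y, rcond=None)[0]
    _, b1, b2, b11, b22, b12 = b
    # Stationary point: solve grad = 0 for the fitted quadratic.
    H = np.array([[2 * b11, b12], [b12, 2 * b22]])
    x_opt = np.linalg.solve(H, -np.array([b1, b2]))
    return b, x_opt

# Hypothetical two-factor design with a known optimum at (1, 2).
g = np.linspace(-1, 4, 6)
X = np.array([(a, c) for a in g for c in g])
y = 5 - (X[:, 0] - 1) ** 2 - (X[:, 1] - 2) ** 2
coef, x_opt = fit_quadratic_rs(X, y)
print(np.round(x_opt, 3))  # optimum recovered near (1, 2)
```

A real RSM workflow additionally applies backward elimination to drop insignificant terms, as the abstract's modified response surface model does.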

  4. Artificial neural network methodology: Application to predict magnetic properties of nanocrystalline alloys

    International Nuclear Information System (INIS)

    Hamzaoui, R.; Cherigui, M.; Guessasma, S.; ElKedim, O.; Fenineche, N.

    2009-01-01

    This paper is dedicated to the optimization of the magnetic properties of iron-based magnetic materials with regard to milling and coating process conditions, using artificial neural network methodology. Fe-20 wt.% Ni and Fe-6.5 wt.% Si alloys were obtained using two high-energy ball milling technologies, namely a P4 vario planetary ball mill from Fritsch and a planetary ball mill from Retsch. Further processing of the Fe-Si powder allowed the spraying of the feedstock material using the high-velocity oxy-fuel (HVOF) process to obtain a relatively dense coating. Input parameters were the disc (Ω) and vial (ω) rotation speeds for the milling technique, and the spray distance and oxygen flow rate in the case of the coating process. Two main magnetic parameters are optimized, namely the saturation magnetization and the coercivity. Predicted results clearly depict the coupled effects of the input parameters on the magnetic parameters. In particular, an increase of saturation magnetization is correlated with an increase of the product Ωω (shock power) and of the product of the spray parameters. The largest coercivity values are correlated with an increase of the ratio Ω/ω (shock mode process) and an increase of the product of the spray parameters.

  5. A collaborative computing framework of cloud network and WBSN applied to fall detection and 3-D motion reconstruction.

    Science.gov (United States)

    Lai, Chin-Feng; Chen, Min; Pan, Jeng-Shyang; Youn, Chan-Hyun; Chao, Han-Chieh

    2014-03-01

    As cloud computing and wireless body sensor network technologies gradually mature, ubiquitous healthcare services can prevent accidents instantly and effectively, as well as provide relevant information to reduce related processing time and cost. This study proposes a co-processing intermediary framework integrating cloud and wireless body sensor networks, mainly applied to fall detection and 3-D motion reconstruction. The main focuses of this study include distributed computing and resource allocation of sensing-data processing over the computing architecture, network conditions and performance evaluation. Through this framework, the transmission and computing time of sensing data are reduced to enhance overall performance for the services of fall event detection and 3-D motion reconstruction.

  6. Dosimetric methodology of the ICRP

    International Nuclear Information System (INIS)

    Eckerman, K.F.

    1994-01-01

    Establishment of guidance for the protection of workers and members of the public from radiation exposures necessitates estimation of the radiation dose to tissues of the body at risk. The dosimetric methodology formulated by the International Commission on Radiological Protection (ICRP) is intended to be responsive to this need. While developed for radiation protection, elements of the methodology are often applied in addressing other radiation issues; e.g., risk assessment. This chapter provides an overview of the methodology, discusses its recent extension to age-dependent considerations, and illustrates specific aspects of the methodology through a number of numerical examples

  7. SNE's methodological basis - web-based software in entrepreneurial surveys

    DEFF Research Database (Denmark)

    Madsen, Henning

    This overhead based paper gives an introduction to the research methodology applied in the surveys carried out in the SNE-project.

  8. Methodology for Applying Cyber Security Risk Evaluation from BN Model to PSA Model

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Jin Soo; Heo, Gyun Young [Kyung Hee University, Youngin (Korea, Republic of); Kang, Hyun Gook [KAIST, Dajeon (Korea, Republic of); Son, Han Seong [Joongbu University, Chubu (Korea, Republic of)

    2014-08-15

    Digital equipment offers several advantages, such as cost, convenience, and availability, and the replacement of analog I and C equipment with digital systems is inevitable. Nuclear facilities have already started applying digital technology to I and C systems. However, nuclear facilities must make this change even though digital equipment raises difficulties related to the required level of safety, irradiation embrittlement, and cyber security. Cyber security, one of the important concerns with digital equipment, can affect the whole integrity of nuclear facilities: cyber-attacks such as the SQL Slammer worm, Stuxnet, Duqu, and Flame have targeted such facilities. The regulatory authorities have published many regulatory requirement documents, such as U.S. NRC Regulatory Guides 5.71 and 1.152, IAEA guide NSS-17, IEEE Standard, and KINS Regulatory Guide. One of the important problems of cyber security research for nuclear facilities is the difficulty of obtaining data through penetration experiments. Therefore, we make a cyber security risk evaluation model with a Bayesian network (BN) for the nuclear reactor protection system (RPS), one of the safety-critical systems that trip the reactor when an accident happens at the facility. BN can be used for overcoming these problems. We propose a method to apply the BN cyber security model to the probabilistic safety assessment (PSA) model, which has been used for safety assessment of the systems, structures and components of a facility. The proposed method will be able to provide insight into safety as well as cyber risk to the facility.
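The core of a BN risk model is probabilistic inference over conditional dependencies. A toy sketch by enumeration, with invented probabilities for an attack/vulnerability/compromise network (far simpler than an RPS model):

```python
from itertools import product

# Hypothetical three-node BN: attack attempt (A) and exploitable
# vulnerability (V) are parents of system compromise (C).
p_a = 0.10
p_v = 0.30
p_c_given = {(True, True): 0.9, (True, False): 0.3,
             (False, True): 0.05, (False, False): 0.0}

def joint(a, v, c):
    """Joint probability P(A=a, V=v, C=c) from the factored BN."""
    pa = p_a if a else 1 - p_a
    pv = p_v if v else 1 - p_v
    pc = p_c_given[(a, v)] if c else 1 - p_c_given[(a, v)]
    return pa * pv * pc

# Marginal risk of compromise, by enumeration over parent states.
p_c = sum(joint(a, v, True) for a, v in product([True, False], repeat=2))
# Diagnostic query: probability an attack occurred given a compromise.
p_a_given_c = sum(joint(True, v, True) for v in (True, False)) / p_c
print(round(p_c, 4), round(p_a_given_c, 4))
```

The proposed method would feed marginals of this kind into PSA basic events, so the cyber contribution appears alongside conventional failure probabilities.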

  9. Methodology for Applying Cyber Security Risk Evaluation from BN Model to PSA Model

    International Nuclear Information System (INIS)

    Shin, Jin Soo; Heo, Gyun Young; Kang, Hyun Gook; Son, Han Seong

    2014-01-01

    Digital equipment offers several advantages, such as cost, convenience, and availability, and the replacement of analog I and C equipment with digital systems is inevitable. Nuclear facilities have already started applying digital technology to I and C systems. However, nuclear facilities must make this change even though digital equipment raises difficulties related to the required level of safety, irradiation embrittlement, and cyber security. Cyber security, one of the important concerns with digital equipment, can affect the whole integrity of nuclear facilities: cyber-attacks such as the SQL Slammer worm, Stuxnet, Duqu, and Flame have targeted such facilities. The regulatory authorities have published many regulatory requirement documents, such as U.S. NRC Regulatory Guides 5.71 and 1.152, IAEA guide NSS-17, IEEE Standard, and KINS Regulatory Guide. One of the important problems of cyber security research for nuclear facilities is the difficulty of obtaining data through penetration experiments. Therefore, we make a cyber security risk evaluation model with a Bayesian network (BN) for the nuclear reactor protection system (RPS), one of the safety-critical systems that trip the reactor when an accident happens at the facility. BN can be used for overcoming these problems. We propose a method to apply the BN cyber security model to the probabilistic safety assessment (PSA) model, which has been used for safety assessment of the systems, structures and components of a facility. The proposed method will be able to provide insight into safety as well as cyber risk to the facility.

  10. Parametric optimization for floating drum anaerobic bio-digester using Response Surface Methodology and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    S. Sathish

    2016-12-01

    Full Text Available The main purpose of this study is to determine the optimal conditions for biogas yield from anaerobic digestion of agricultural waste (rice straw) using Response Surface Methodology (RSM) and an Artificial Neural Network (ANN). In the development of the predictive models, temperature, pH, substrate concentration and agitation time are taken as model variables. The experimental results show that the linear model terms of temperature, substrate concentration, pH and agitation time have significant interactive effects (p < 0.05). The results indicate that the optimum process parameters obtained from the ANN model increase the biogas yield compared with those from the RSM model. The ANN model is also much more accurate in predicting the maximum biogas yield than the RSM model.

  11. Application of the HGPT methodology of reactor operation problems with a nodal mixed method

    International Nuclear Information System (INIS)

    Baudron, A.M.; Bruna, G.B.; Gandini, A.; Lautard, J.J.; Monti, S.; Pizzigati, G.

    1998-01-01

    The heuristically based generalized perturbation theory (HGPT), to first and higher order, applied to the neutron field of a reactor system, is discussed in relation to quasistatic problems. This methodology is of particular interest in reactor operation. In this application it may allow an on-line appraisal of the main physical responses of the reactor system when subject to alterations relevant to normal system exploitation, e.g. control rod movement, and/or soluble boron concentration changes to be introduced, for instance, for compensating power level variations following electrical network demands. In this paper, after describing the main features of the theory, its implementation into the diffusion, 3D mixed dual nodal code MINOS of the SAPHYR system is presented. The results from a small scale investigation performed on a simplified PWR system corroborate the validity of the methodology proposed

  12. PERSON IN SOCIAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Андрей Борисович Шалимов

    2013-11-01

    Full Text Available Purpose: Our scientific purpose is the creation of a practical model of a person's representation in social networks (Facebook, Twitter, Classmates). As a user of social networks, a person is conditioned not only by his own identity, but also by the information about himself that he is ready to share with the friends in his contact list. Goal-setting, and practical activity aimed at achieving those goals, requires effort; it can counteract the systemic factors, the system of power relations, that overwhelm the human being in social networks. Methodology: Reconstruction of the model of the person within popular social networks. Results: A practical model of a person's representation in social networks is described; it includes the management of one's own identity and of one's audience (the list of contacts). In managing his own identity, a person answers the question «Whom can I dare to be?». A person perceives himself in the being of social networks, understands himself and his place in the world, and identifies himself. Managing one's image in social media means answering the question «What do I want to tell?». A person in social media looks at events in the fields of culture, economy, politics and social relations through the prism of his own attitudes; he forms and formulates his own agenda and tells about himself through it. Practical implications: People's everyday life and practical activities, including marketing in social networks. DOI: http://dx.doi.org/10.12731/2218-7405-2013-9-51

  13. Classification of data patterns using an autoassociative neural network topology

    Science.gov (United States)

    Dietz, W. E.; Kiech, E. L.; Ali, M.

    1989-01-01

    A diagnostic expert system based on neural networks is developed and applied to the real-time diagnosis of jet and rocket engines. The expert system methodologies are based on the analysis of patterns of behavior of physical mechanisms. In this approach, fault diagnosis is conceptualized as the mapping or association of patterns of sensor data to patterns representing fault conditions. The approach addresses deficiencies inherent in many feedforward neural network models and greatly reduces the number of networks necessary to identify the existence of a fault condition and estimate the duration and severity of the identified fault. The network topology used in the present implementation of the diagnostic system is described, as well as the training regimen used and the response of the system to inputs representing both previously observed and unknown fault scenarios. Noise effects on the integrity of the diagnosis are also evaluated.
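The autoassociative idea is that a network trained to reproduce its inputs reconstructs normal sensor patterns well but fails on faulty ones, so reconstruction error flags faults. The sketch below substitutes a linear (PCA-based) autoassociative map for the neural network and uses synthetic sensor data; it illustrates the principle, not the paper's topology:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "sensor" data: 5 correlated channels driven by 2 latent factors.
latent = rng.normal(size=(300, 2))
mixing = rng.normal(size=(2, 5))
normal_data = latent @ mixing + 0.05 * rng.normal(size=(300, 5))

# Autoassociative map: project onto the top-2 principal components and
# back again (a linear autoencoder standing in for the neural network).
mean = normal_data.mean(axis=0)
_, _, Vt = np.linalg.svd(normal_data - mean, full_matrices=False)
basis = Vt[:2]

def reconstruction_error(sample):
    centred = sample - mean
    return float(np.linalg.norm(centred - (centred @ basis.T) @ basis))

# Threshold set from fault-free data; a biased sensor should exceed it.
errors = [reconstruction_error(s) for s in normal_data]
threshold = np.percentile(errors, 99)
faulty = normal_data[0].copy()
faulty[3] += 2.0                   # simulated bias on one sensor channel
is_fault = reconstruction_error(faulty) > threshold
print(is_fault)
```

Which channel contributes most to the residual also hints at which sensor failed, echoing the pattern-to-fault mapping described in the abstract.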

  14. Archetype modeling methodology.

    Science.gov (United States)

    Moner, David; Maldonado, José Alberto; Robles, Montserrat

    2018-03-01

    Clinical Information Models (CIMs) expressed as archetypes play an essential role in the design and development of current Electronic Health Record (EHR) information structures. Although there exist many experiences about using archetypes in the literature, a comprehensive and formal methodology for archetype modeling does not exist. Having a modeling methodology is essential to develop quality archetypes, in order to guide the development of EHR systems and to allow the semantic interoperability of health data. In this work, an archetype modeling methodology is proposed. This paper describes its phases, the inputs and outputs of each phase, and the involved participants and tools. It also includes the description of the possible strategies to organize the modeling process. The proposed methodology is inspired by existing best practices of CIMs, software and ontology development. The methodology has been applied and evaluated in regional and national EHR projects. The application of the methodology provided useful feedback and improvements, and confirmed its advantages. The conclusion of this work is that having a formal methodology for archetype development facilitates the definition and adoption of interoperable archetypes, improves their quality, and facilitates their reuse among different information systems and EHR projects. Moreover, the proposed methodology can be also a reference for CIMs development using any other formalism. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Investigating High-tech and Knowledge-Intensive Ventures: Methodological Issues

    DEFF Research Database (Denmark)

    Madsen, Henning; Neergaard, Helle; Christensen, Patrizia V.

    2003-01-01

    The paper presents and discusses the advantages and disadvantages of the methodology applied in a longitudinal project on entrepreneurship, the SNE project.

  16. A SYSTEM APPROACH TO ORGANISING PROTECTION FROM TARGETED INFORMATION IN SOCIAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Marina V. Tumbinskaya

    2017-01-01

    Full Text Available Abstract. Objectives: The aim of the study is to formalise a generalised algorithm for the distribution of targeted information in social networks, serving as the basis for a methodology for increasing personal information security. Method: The research is based on a methodology of protection from unwanted information distributed across social network systems. Results: The article presents the formalisation of an algorithm for the distribution of targeted information across social networks: input and output parameters are defined, and the algorithm's internal conditions are described, consisting of parameters for implementing attack scenarios whose variation allows the scenarios to be detailed. A technique for protection from targeted information distributed across social networks is proposed, allowing the level of protection of the personal data and information of social network users to be enhanced and the reliability of information to be increased. Conclusion: The results of the research will help to prevent threats to information security and counteract attacks by intruders, who often use methods of competitive intelligence and social engineering, through the use of countermeasures. A model for protection against targeted information is developed, together with special software for its integration into online social network information systems. The systems approach will allow external monitoring of events in social networks to be carried out and vulnerabilities to be identified in instant messaging mechanisms, which provide opportunities for attacks by intruders. The results of the research make it possible to apply a network approach, at a new level, to the study of the informal communities that are actively developing today.

  17. Group method of data handling and neural networks applied in monitoring and fault detection in sensors in nuclear power plants; Group Method of Data Handling (GMDH) e Redes Neurais na Monitoracao e Deteccao de Falhas em sensores de centrais nucleares

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Elaine Inacio

    2011-07-01

    The increasing demands on the complexity, efficiency and reliability of modern industrial systems have stimulated studies on control theory applied to the development of monitoring and fault detection systems. In this work a new monitoring and fault detection methodology was developed using the GMDH (Group Method of Data Handling) algorithm and Artificial Neural Networks (ANNs), and applied to the IEA-R1 research reactor at IPEN. The monitoring and fault detection system was developed in two parts: the first dedicated to preprocessing the information using the GMDH algorithm, and the second to processing the information using ANNs. The GMDH algorithm was used in two different ways: first, to generate a better estimate of the database, called matrix_z, which was used to train the ANNs; and second, to study the best set of variables for training the ANNs, resulting in the best estimate of the monitored variables. The methodology was developed and tested using five different models: one theoretical model and four models using different sets of reactor variables. After an exhaustive study dedicated to sensor monitoring, fault detection in sensors was developed by simulating faults of 5%, 10%, 15% and 20% in the sensor database. The results obtained using the GMDH algorithm in the choice of the best input variables for the ANNs were better than those obtained using only ANNs, thus making possible the use of these methods in the implementation of a new monitoring and fault detection methodology applied to sensors. (author)
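A GMDH building block fits a quadratic polynomial unit to every pair of candidate inputs and keeps the units that generalize best to a validation split, which is how the algorithm selects informative variables. A minimal sketch of one such selection step on synthetic data (the reactor variable sets of the study are not reproduced):

```python
import numpy as np

def gmdh_layer(X_tr, y_tr, X_va, y_va):
    """One GMDH selection step: fit a quadratic polynomial unit for every
    pair of inputs on the training split, rank units by validation error."""
    def features(a, b):
        return np.column_stack([np.ones_like(a), a, b, a * a, b * b, a * b])

    results = []
    n = X_tr.shape[1]
    for i in range(n):
        for j in range(i + 1, n):
            A = features(X_tr[:, i], X_tr[:, j])
            coef = np.linalg.lstsq(A, y_tr, rcond=None)[0]
            pred = features(X_va[:, i], X_va[:, j]) @ coef
            mse = float(np.mean((pred - y_va) ** 2))
            results.append((mse, (i, j), coef))
    return sorted(results, key=lambda r: r[0])

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(200, 4))
y = 1.0 + X[:, 0] * X[:, 2]          # depends on inputs 0 and 2 only
ranked = gmdh_layer(X[:100], y[:100], X[100:], y[100:])
best_err, best_pair, _ = ranked[0]
print(best_pair, round(best_err, 6))  # the relevant input pair wins
```

In a full GMDH network the outputs of the best units become inputs to the next layer, and stacking stops when validation error no longer improves.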

  18. Method of Creation of “Core-Gisseismic Attributes” Dependences With Use of Trainable Neural Networks

    Directory of Open Access Journals (Sweden)

    Gafurov Denis

    2016-01-01

    Full Text Available The study describes methodological techniques and results of geophysical well logging and seismic data interpretation by means of trainable neural networks. The objects of research are wells and seismic materials of the Talakan field. The article also presents a forecast of the structure and reservoir properties of the Osa horizon. The paper gives an example of the creation of a geological (lithological-facies) model of the field based on the developed techniques for complex interpretation of geologic-geophysical data by a trainable neural network. The constructed lithological-facies model allows the geological structure of the field to be specified. The developed techniques and the trained neural networks may be applied to adjacent sites for the study of carbonate horizons.

  19. Histomorphometric study and three-dimensional reconstruction of the osteocyte lacuno-canalicular network one hour after applying tensile and compressive forces.

    Science.gov (United States)

    Bozal, Carola B; Sánchez, Luciana M; Mandalunis, Patricia M; Ubios, Ángela M

    2013-01-01

    The occurrence of very early morphological changes in the osteocyte lacuno-canalicular network following application of tensile and/or compressive forces remains unknown to date. Thus, the aim of this study was to perform a morphological and morphometric evaluation of the changes in the three-dimensional structure of the lacuno-canalicular network and the osteocyte network of alveolar bone that take place very early after applying tensile and compressive forces in vivo, conducting static histomorphometry on bright-field microscopy and confocal laser scanning microscopy images. Our results showed that both the tensile and compressive forces induced early changes in osteocytes and their lacunae, which manifested as an increase in lacunar volume and changes in lacunar shape and orientation. An increase in canalicular width, a decrease in the width of cytoplasmic processes, and an increase in their length were also observed. The morphological changes in the lacuno-canalicular and osteocyte networks that occur in vivo very early after application of tensile and compressive forces would be an indication of an increase in permeability within the system. Thus, both compressive and tensile forces would cause fluid displacement very soon after being applied; the latter would in turn rapidly activate alveolar bone osteocytes, enhancing transmission of the signals to the entire osteocyte network and the effector cells located at the bone surface. Copyright © 2013 S. Karger AG, Basel.

  20. Design Methodologies: Industrial and Educational Applications

    NARCIS (Netherlands)

    Tomiyama, T.; Gul, P.; Jin, Y.; Lutters, Diederick; Kind, Ch.; Kimura, F.

    2009-01-01

    The field of Design Theory and Methodology has a rich collection of research results that have been taught at educational institutions as well as applied to design practices. First, this keynote paper describes some methods to classify them. It then illustrates individual theories and methodologies.

  1. Metastability, spectrum, and eigencurrents of the Lennard-Jones-38 network

    International Nuclear Information System (INIS)

    Cameron, Maria K.

    2014-01-01

    We develop computational tools for spectral analysis of stochastic networks representing energy landscapes of atomic and molecular clusters. Physical meaning and some properties of eigenvalues, left and right eigenvectors, and eigencurrents are discussed. We propose an approach to compute a collection of eigenpairs and corresponding eigencurrents describing the most important relaxation processes taking place in the system on its way to the equilibrium. It is suitable for large and complex stochastic networks where pairwise transition rates, given by the Arrhenius law, vary by orders of magnitude. The proposed methodology is applied to the network representing the Lennard-Jones-38 cluster created by Wales's group. Its energy landscape has a double funnel structure with a deep and narrow face-centered cubic funnel and a shallower and wider icosahedral funnel. However, the complete spectrum of the generator matrix of the Lennard-Jones-38 network has no appreciable spectral gap separating the eigenvalue corresponding to the escape from the icosahedral funnel. We provide a detailed description of the escape process from the icosahedral funnel using the eigencurrent and demonstrate a superexponential growth of the corresponding eigenvalue. The proposed spectral approach is compared to the methodology of the Transition Path Theory. Finally, we discuss whether the Lennard-Jones-38 cluster is metastable from the points of view of a mathematician and a chemical physicist, and make a connection with experimental works.

  2. Methodology supporting production control in a foundry applying modern DISAMATIC molding line

    Directory of Open Access Journals (Sweden)

    Sika Robert

    2017-01-01

    Full Text Available The paper presents a methodology for production control using statistical methods in foundry conditions, on an automatic DISAMATIC molding line. The authors were inspired by many years of experience in implementing IT tools for foundries. They noticed a lack of basic IT tools dedicated to specific casting processes that would greatly facilitate their oversight and thus improve the quality of manufactured products. More and more systems are installed in the ERP or CAx area, but they integrate processes only partially, mainly in the areas of technology design and business management, from finance to controlling. Monitoring of foundry processes can generate a large amount of process-related data. This is particularly noticeable in automated processes. An example is the modern DISAMATIC molding line, which integrates several casting processes, such as mold preparation, assembly, pouring and shake-out. The authors propose a methodology that supports the control of the above-mentioned foundry processes using statistical methods. Such an approach can be successfully used, for example, during periodic external audits. The methodology was implemented in the innovative DISAM-ProdC computer tool.

  3. Services for People Innovation Park – Planning Methodologies

    Directory of Open Access Journals (Sweden)

    Maria Angela Campelo de Melo

    2013-04-01

    Full Text Available This article aims to identify appropriate methodologies for the planning of a Services for People Innovation Park (SPIP), designed according to the model proposed by the Ibero-American Network launched by La Salle University of Madrid. Projected to form a network, these parks were conceived to provoke social change in their regions, improving quality of life and social welfare through knowledge, technology and innovation transfer and the creation of companies focused on developing products and services to reduce social inequalities. Building a conceptual framework for the identification of planning methodologies compatible with the SPIP problématique, this article analyses the theories of complex systems and adaptive planning, considering the particularities presented by innovation parks. The study deepens the understanding of the problems inherent in park planning, identifies the key issues to be considered during this process, and characterizes the SPIP as an active adaptive complex system, suggesting the methodologies most appropriate to its planning.

  4. Applying Fuzzy Artificial Neural Network OSPF to develop Smart ...

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... Fuzzy Artificial Neural Network to create Smart Routing. Protocol Algorithm. ... manufactured mental aptitude strategy. The capacity to study .... Based Energy Efficiency in Wireless Sensor Networks: A Survey",. International ...

  5. Response surface methodology applied to the study of the microwave-assisted synthesis of quaternized chitosan.

    Science.gov (United States)

    dos Santos, Danilo Martins; Bukzem, Andrea de Lacerda; Campana-Filho, Sérgio Paulo

    2016-03-15

    A quaternized derivative of chitosan, namely N-(2-hydroxy)-propyl-3-trimethylammonium chitosan chloride (QCh), was synthesized by reacting glycidyltrimethylammonium chloride (GTMAC) and chitosan (Ch) in acid medium under microwave irradiation. A full-factorial 2³ central composite design and response surface methodology (RSM) were applied to evaluate the effects of the GTMAC/Ch molar ratio, reaction time and temperature on the reaction yield, average degree of quaternization (DQ) and intrinsic viscosity ([η]) of QCh. The GTMAC/Ch molar ratio was the most important factor affecting the response variables, and the RSM results showed that highly substituted QCh (DQ = 71.1%) was produced at high yield (164%) when the reaction was carried out for 30 min at 85°C using a GTMAC/Ch molar ratio of 6/1. The results showed that microwave-assisted synthesis is much faster (≤30 min) than conventional reaction procedures (>4 h) carried out under similar conditions except for the use of microwave irradiation. Copyright © 2015 Elsevier Ltd. All rights reserved.
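The 2³ factorial core of such a design is straightforward to enumerate programmatically. The sketch below is illustrative only (the coded factor names are our own, not the paper's); a full central composite design would add axial and center points to these eight runs.

```python
from itertools import product

def full_factorial_2k(factors):
    """All coded (-1/+1) runs of a two-level full-factorial design."""
    return [dict(zip(factors, levels))
            for levels in product((-1, 1), repeat=len(factors))]

# Three factors as in the study: GTMAC/Ch molar ratio, reaction time, temperature
runs = full_factorial_2k(["molar_ratio", "time", "temperature"])
```

The eight runs cover every high/low combination of the three factors; RSM then fits a quadratic response surface over the measured yields.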

  6. A Collaborative Learning Network Approach to Improvement: The CUSP Learning Network.

    Science.gov (United States)

    Weaver, Sallie J; Lofthus, Jennifer; Sawyer, Melinda; Greer, Lee; Opett, Kristin; Reynolds, Catherine; Wyskiel, Rhonda; Peditto, Stephanie; Pronovost, Peter J

    2015-04-01

    Collaborative improvement networks draw on the science of collaborative organizational learning and communities of practice to facilitate peer-to-peer learning, coaching, and local adaption. Although significant improvements in patient safety and quality have been achieved through collaborative methods, insight regarding how collaborative networks are used by members is needed. Improvement Strategy: The Comprehensive Unit-based Safety Program (CUSP) Learning Network is a multi-institutional collaborative network that is designed to facilitate peer-to-peer learning and coaching specifically related to CUSP. Member organizations implement all or part of the CUSP methodology to improve organizational safety culture, patient safety, and care quality. Qualitative case studies developed by participating members examine the impact of network participation across three levels of analysis (unit, hospital, health system). In addition, results of a satisfaction survey designed to evaluate member experiences were collected to inform network development. Common themes across case studies suggest that members found value in collaborative learning and sharing strategies across organizational boundaries related to a specific improvement strategy. The CUSP Learning Network is an example of network-based collaborative learning in action. Although this learning network focuses on a particular improvement methodology, CUSP, there is clear potential for member-driven learning networks to grow around other methods or topic areas. Such collaborative learning networks may offer a way to develop an infrastructure for longer-term support of improvement efforts and to more quickly diffuse creative sustainment strategies.

  7. Hybrid SDN Architecture for Resource Consolidation in MPLS Networks

    OpenAIRE

    Katov, Anton Nikolaev; Mihovska, Albena D.; Prasad, Neeli R.

    2015-01-01

    This paper proposes a methodology for resource consolidation towards minimizing the power consumption in a large network with substantial resource overprovisioning. The focus is on the operation of core MPLS networks. The proposed approach is based on a software defined networking (SDN) scheme with a reconfigurable centralized controller, which turns off certain network elements. The methodology comprises the process of identifying time periods with lower traffic demand; the ranking of the net...

  8. Prediction of two-phase mixture density using artificial neural networks

    International Nuclear Information System (INIS)

    Lombardi, C.; Mazzola, A.

    1997-01-01

    In nuclear power plants, the density of boiling mixtures has a significant relevance due to its influence on the neutronic balance, the power distribution and the reactor dynamics. Since the determination of the two-phase mixture density on a purely analytical basis is in fact impractical in many situations of interest, heuristic relationships have been developed based on the parameters describing the two-phase system. However, the best or even a good structure for the correlation cannot be determined in advance, also considering that it is usually desired to represent the experimental data with the most compact equation. A possible alternative to empirical correlations is the use of artificial neural networks, which allow one to model complex systems without requiring the explicit formulation of the relationships existing among the variables. In this work, the neural network methodology was applied to predict the density data of two-phase mixtures up-flowing in adiabatic channels under different experimental conditions. The trained network predicts the density data with a root-mean-square error of 5.33%, with ∼93% of the data points predicted to within 10%. When compared with those of two conventional well-proven correlations, i.e. the Zuber-Findlay and the CISE correlations, the neural network performances are significantly better. In spite of the good accuracy of the neural network predictions, the 'black-box' characteristic of the neural model does not allow an easy physical interpretation of the knowledge integrated in the network weights. Therefore, the neural network methodology has the advantage of not requiring a formal correlation structure and of giving very accurate results, but at the expense of a loss of model transparency. (author)
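Accuracy figures of the kind quoted above (root-mean-square relative error, fraction of points within 10%) can be computed as follows; the density values below are invented for illustration, not taken from the paper.

```python
import math

def rms_percent_error(predicted, measured):
    """Root mean square of the relative errors, in percent."""
    rel = [(p - m) / m for p, m in zip(predicted, measured)]
    return 100.0 * math.sqrt(sum(r * r for r in rel) / len(rel))

def fraction_within(predicted, measured, tol_percent):
    """Fraction of points whose absolute relative error is within tol_percent."""
    hits = sum(1 for p, m in zip(predicted, measured)
               if abs(p - m) / m * 100.0 <= tol_percent)
    return hits / len(predicted)

measured = [800.0, 600.0, 450.0, 300.0]   # hypothetical mixture densities, kg/m^3
predicted = [820.0, 590.0, 460.0, 310.0]  # hypothetical network outputs
rms = rms_percent_error(predicted, measured)
frac = fraction_within(predicted, measured, 10.0)
```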

  9. Evaluation methodology for tariff design under escalating penetrations of distributed energy resources

    OpenAIRE

    Abdelmotteleb, I.I.A.; Gómez, Tomás; Reneses, Javier

    2017-01-01

    As the penetration of distributed energy resources (DERs) escalates in distribution networks, new network tariffs are needed to cope with this new situation. These tariffs should allocate network costs to users, promoting an efficient use of the distribution network. This paper proposes a methodology to evaluate and compare network tariff designs. Four design attributes are proposed for this aim: (i) network cost recovery; (ii) deferral of network reinforcements; (iii) efficient consumer resp...

  10. Auditing organizational communication: evaluating the methodological strengths and weaknesses of the critical incident technique, network analysis, and the communication satisfaction questionnaire

    NARCIS (Netherlands)

    Koning, K.H.

    2016-01-01

    This dissertation focuses on the methodology of communication audits. In the context of three Dutch high schools, we evaluated several audit instruments. The first study in this dissertation focuses on the question whether the rationale of the critical incident technique (CIT) still applies when it

  11. Using the OASES-A to illustrate how network analysis can be applied to understand the experience of stuttering.

    Science.gov (United States)

    Siew, Cynthia S Q; Pelczarski, Kristin M; Yaruss, J Scott; Vitevitch, Michael S

    Network science uses mathematical and computational techniques to examine how individual entities in a system, represented by nodes, interact, as represented by connections between nodes. This approach has been used by Cramer et al. (2010) to make "symptom networks" to examine various psychological disorders. In the present analysis we examined a network created from the items in the Overall Assessment of the Speaker's Experience of Stuttering-Adult (OASES-A), a commonly used measure for evaluating adverse impact in the lives of people who stutter. The items of the OASES-A were represented as nodes in the network. Connections between nodes were placed if responses to those two items in the OASES-A had a correlation coefficient greater than ±0.5. Several network analyses revealed which nodes were "important" in the network. Several centrally located nodes and "key players" in the network were identified. A community detection analysis found groupings of nodes that differed slightly from the subheadings of the OASES-A. Centrally located nodes and "key players" in the network may help clinicians prioritize treatment. The different community structure found for people who stutter suggests that the way people who stutter view stuttering may differ from the way that scientists and clinicians view stuttering. Finally, the present analyses illustrate how the network approach might be applied to other speech, language, and hearing disorders to better understand how those disorders are experienced and to provide insights for their treatment. Copyright © 2016 Elsevier Inc. All rights reserved.
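The thresholding step described above can be sketched in a few lines, assuming the |r| > 0.5 cutoff and toy questionnaire data (the items and responses below are invented, not OASES-A data).

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def build_network(item_responses, threshold=0.5):
    """Connect two items when the magnitude of their correlation exceeds the threshold."""
    items = sorted(item_responses)
    edges = set()
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if abs(pearson(item_responses[a], item_responses[b])) > threshold:
                edges.add((a, b))
    return edges

# Hypothetical 1-5 scale responses from six respondents to three items
responses = {
    "item1": [1, 2, 3, 4, 5, 5],
    "item2": [1, 2, 3, 4, 4, 5],   # tracks item1 closely -> connected
    "item3": [4, 1, 5, 2, 4, 2],   # weakly related -> isolated
}
network = build_network(responses)
degree = {item: sum(item in e for e in network) for item in responses}
```

In the actual analysis each node is an OASES-A item, and centrality measures on the resulting graph identify the "key player" items.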

  12. FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting

    Directory of Open Access Journals (Sweden)

    Miquel L. Alomar

    2016-01-01

    Full Text Available Hardware implementation of artificial neural networks (ANNs allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC has arisen as a strategic technique to design recurrent neural networks (RNNs with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
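The software baseline of such an RC system is compact: a fixed random reservoir plus a trained linear readout. A hedged NumPy sketch follows (all parameters are illustrative; the paper's contribution is the stochastic hardware implementation, not this reference model).

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir: fixed random recurrent weights, rescaled to spectral radius 0.9
n_res = 50
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Collect tanh reservoir states driven by the scalar input sequence u."""
    x = np.zeros(n_res)
    states = []
    for step in u:
        x = np.tanh(W_in * step + W @ x)
        states.append(x.copy())
    return np.array(states)

# One-step-ahead forecasting of a sine wave
t = np.arange(400)
signal = np.sin(0.1 * t)
X = run_reservoir(signal[:-1])
y = signal[1:]                      # targets: next sample

# Ridge-regression readout -- the only trained part of an echo state network
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
rmse = float(np.sqrt(np.mean((pred[100:] - y[100:]) ** 2)))  # skip washout
```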

  13. Ecological network analysis: network construction

    NARCIS (Netherlands)

    Fath, B.D.; Scharler, U.M.; Ulanowicz, R.E.; Hannon, B.

    2007-01-01

    Ecological network analysis (ENA) is a systems-oriented methodology to analyze within system interactions used to identify holistic properties that are otherwise not evident from the direct observations. Like any analysis technique, the accuracy of the results is as good as the data available, but

  14. Attractiveness-Based Airline Network Models with Embedded Spill and Recapture

    Directory of Open Access Journals (Sweden)

    Desmond Di Wang

    2014-01-01

    Full Text Available Purpose: In airline revenue management, the modeling of spill and recapture effects is essential for an accurate estimation of the passenger flow and the revenue in a flight network. However, as most current approaches to spill and recapture involve either non-linearity or a tremendous number of additional variables, it is computationally intractable to apply those techniques to the classical network design and capacity planning models. Design/methodology: We present a new framework that incorporates the spill and recapture effects, where the spill from an itinerary is recaptured by other itineraries based on their attractiveness. The presented framework distributes the accepted demand of an itinerary according to the currently available itineraries, without adding extra variables for the recaptured spill. Due to its compactness, we integrate the framework with the classical capacity planning and network design models. Findings: Our preliminary computational study shows an increase of 1.07% in profitability and a better utilization of the network capacity, on a medium-size North American airline provided by Sabre Airline Solutions. Originality/value: Our investigation leads to a holistic model that tackles network design and capacity planning simultaneously with an accurate modeling of the spill and recapture effects. Furthermore, the presented framework for spill and recapture is versatile and can be easily applied to other disciplines such as the hospitality industry and product line design (PLD) problems.

  15. Neural networks prediction and fault diagnosis applied to stationary and non stationary ARMA (Autoregressive moving average) modeled time series

    International Nuclear Information System (INIS)

    Marseguerra, M.; Minoggio, S.; Rossi, A.; Zio, E.

    1992-01-01

    The correlated noise affecting many industrial plants under stationary or cyclo-stationary conditions, nuclear reactors included, has been successfully modeled by autoregressive moving average (ARMA) models due to the versatility of this technique. The relatively recent neural network methods have similar features, and much effort is being devoted to exploring their usefulness in forecasting and control. Identifying a signal by means of an ARMA model gives rise to the problem of selecting its correct order. Similar difficulties must be faced when applying neural network methods; specifically, particular care must be given to setting up the appropriate network topology, the data normalization procedure and the learning code. In the present paper the capability of some neural networks to learn ARMA and seasonal ARMA processes is investigated. The results of the tested cases look promising, since they indicate that the neural networks learn the underlying process with relative ease, so that their forecasting capability may represent a convenient fault diagnosis tool. (Author)
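The identification problem is easiest to see on synthetic data: simulate an AR process of known order and recover its coefficients by least squares. A hedged sketch (coefficients and sample size are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a stationary AR(2) process: x_t = a1*x_{t-1} + a2*x_{t-2} + e_t
a1, a2 = 0.6, -0.3
n = 5000
x = np.zeros(n)
noise = rng.normal(0.0, 1.0, n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + noise[t]

# Identify the coefficients by least squares on the lagged values
A = np.column_stack([x[1:-1], x[:-2]])
b = x[2:]
est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With the model order chosen correctly, the estimates converge to the true coefficients; a neural network faces the analogous choice in its topology.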

  16. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    Science.gov (United States)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
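The core idea — replace the expensive forward with a cheap approximation and fold the quantified modeling error into the likelihood — can be sketched in a few lines. Everything below is a toy stand-in (a quadratic "forward" instead of full-waveform modeling, a closed-form surrogate instead of a trained network):

```python
import math
import random

random.seed(0)

def forward_surrogate(m):
    """Cheap approximation playing the role of the trained network."""
    return m ** 2

d_obs = 1.0
sigma_data = 0.1
sigma_model = 0.1   # quantified surrogate/modeling error, folded into the likelihood

def log_like(m):
    resid = d_obs - forward_surrogate(m)
    var = sigma_data ** 2 + sigma_model ** 2
    return -0.5 * resid ** 2 / var

# Plain Metropolis sampler over the scalar model parameter m
m, samples = 0.5, []
for _ in range(20000):
    prop = m + random.gauss(0.0, 0.2)
    delta = log_like(prop) - log_like(m)
    if delta >= 0 or random.random() < math.exp(delta):
        m = prop
    samples.append(m)

# The data d_obs = 1 constrains |m| to be near 1
mean_abs = sum(abs(s) for s in samples[5000:]) / len(samples[5000:])
```

The inflated likelihood variance is what keeps the sampler honest about the surrogate's imperfection; with a real network the modeling-error term would be estimated from held-out forward runs.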

  17. Disposal criticality analysis methodology's principal isotope burnup credit

    International Nuclear Information System (INIS)

    Doering, T.W.; Thomas, D.A.

    2001-01-01

    This paper presents the burnup credit aspects of the United States Department of Energy Yucca Mountain Project's methodology for performing criticality analyses for commercial light-water-reactor fuel. The disposal burnup credit methodology uses a 'principal isotope' model, which takes credit for the reduced reactivity associated with the build-up of the primary principal actinides and fission products in irradiated fuel. Burnup credit is important to the disposal criticality analysis methodology and to the design of commercial fuel waste packages. The burnup credit methodology developed for disposal of irradiated commercial nuclear fuel can also be applied to storage and transportation of irradiated commercial nuclear fuel. For all applications a series of loading curves are developed using a best estimate methodology and depending on the application, an additional administrative safety margin may be applied. The burnup credit methodology better represents the 'true' reactivity of the irradiated fuel configuration, and hence the real safety margin, than do evaluations using the 'fresh fuel' assumption. (author)

  18. Neural Network Blind Equalization Algorithm Applied in Medical CT Image Restoration

    Directory of Open Access Journals (Sweden)

    Yunshan Sun

    2013-01-01

    Full Text Available A new algorithm for iterative blind image restoration is presented in this paper. The method extends blind equalization from the one-dimensional signal case to images. A neural network blind equalization algorithm is derived and used in conjunction with zigzag coding to restore the original image. As a result, the effect of the PSF can be removed by using the proposed algorithm, which helps eliminate intersymbol interference (ISI). To obtain an estimate of the original image, the method optimizes a constant modulus blind equalization cost function applied to the grayscale CT image using the conjugate gradient method. Analysis of the convergence performance of the algorithm verifies the feasibility of the method theoretically; meanwhile, simulation results and evaluations with recent image quality metrics are provided to assess the effectiveness of the proposed method.

  19. ECO INVESTMENT PROJECT MANAGEMENT THROUGH TIME APPLYING ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Tamara Gvozdenović

    2007-06-01

    Full Text Available The concept of project management expresses an indispensable approach to investment projects. Time is often the most important factor in these projects. The artificial neural network is a data-processing paradigm inspired by the biological brain, and it is used in numerous different fields, among them project management. This research is oriented to the application of artificial neural networks in managing the time of an investment project. The artificial neural networks are used to define the optimistic, the most probable and the pessimistic time in the PERT method. The program package Matlab: Neural Network Toolbox is used in data simulation. The feed-forward back propagation network is chosen.
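The three PERT estimates combine into the expected activity duration in the usual way. A minimal sketch (the time estimates below are invented; in the paper they would come from the trained network rather than expert judgment):

```python
def pert_expected(optimistic, most_likely, pessimistic):
    """Classical PERT (beta-distribution) estimate of the expected duration."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_stddev(optimistic, pessimistic):
    """PERT approximation of the duration standard deviation."""
    return (pessimistic - optimistic) / 6

# Hypothetical activity estimates, in days
te = pert_expected(4, 6, 14)
sd = pert_stddev(4, 14)
```

Summing `te` along the critical path gives the expected project duration, and the per-activity variances combine to give its uncertainty.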

  20. Dynamical Networks Characterization of Space Weather Events

    Science.gov (United States)

    Orr, L.; Chapman, S. C.; Dods, J.; Gjerloev, J. W.

    2017-12-01

    Space weather can cause disturbances to satellite systems, impacting navigation technology and telecommunications; it can cause power loss and aviation disruption. A central aspect of the earth's magnetospheric response to space weather events is large-scale and rapid change in ionospheric current patterns. Space weather is highly dynamic and there are still many controversies about how the current system evolves in time. The recent SuperMAG initiative collates ground-based vector magnetic field time series from over 200 magnetometers with 1-minute temporal resolution. In principle this combined dataset is an ideal candidate for quantification using dynamical networks. Network properties and parameters allow us to characterize the time dynamics of the full spatiotemporal pattern of the ionospheric current system. However, applying network methodologies to physical data presents new challenges. We establish whether a given pair of magnetometers are connected in the network by calculating their canonical cross correlation. The magnetometers are connected if their cross correlation exceeds a threshold. In our physical time series this threshold needs to be both station specific, as it varies with (non-linear) individual station sensitivity and location, and able to vary with season, which affects ground conductivity. Additionally, the earth rotates and therefore the ground stations move significantly on the timescales of geomagnetic disturbances. The magnetometers are non-uniformly spatially distributed. We will present new methodology which addresses these problems and in particular achieves dynamic normalization of the physical time series in order to form the network. Correlated disturbances across the magnetometers capture transient currents. Once the dynamical network has been obtained [1][2] from the full magnetometer data set it can be used to directly identify detailed inferred transient ionospheric current patterns and track their dynamics. We will show

  1. Socio-technical modelling of a nuclear organization: case study applied to the Ionizing Radiation Metrology National Laboratory; Modelagem sociotecnica de uma organizacao nuclear: estudo de caso aplicado ao Laboratorio Nacional de Metrologia das Radiacoes Ionizantes

    Energy Technology Data Exchange (ETDEWEB)

    Acar, Maria Elizabeth Dias

    2015-07-01

    A methodology combining process mapping and analysis; knowledge elicitation mapping and critical analysis; and socio-technical analysis based on social network analysis was conceived. The methodology was applied to a small knowledge-intensive organization, LNMRI, and has allowed the appraisal of the main intellectual assets and their ability to evolve. In this sense, based on real issues such as attrition, the impacts of probable future scenarios were assessed. For this task, a multimodal network of processes, knowledge objects and people was analyzed using a set of appropriate metrics and means, including the spheres of influence of key nodes. To differentiate people's ability to play roles in the processes, node attributes were used to provide partition criteria for the network and thus the ability to differentiate the impact of the potential loss of supervisors and operators. The proposed methodology has allowed for: 1) the identification of knowledge objects and their sources; 2) the mapping and ranking of these objects according to their relevance; 3) the assessment of vulnerabilities in LNMRI's network structure; and 4) the revealing of informal mechanisms of knowledge sharing. The conceived methodological framework has proved to be a robust tool for a broad diagnosis to support succession planning and also organizational strategic planning. (author)

  2. Coevolution of Epidemics, Social Networks, and Individual Behavior: A Case Study

    Science.gov (United States)

    Chen, Jiangzhuo; Marathe, Achla; Marathe, Madhav

    This research shows how a limited supply of antivirals can be distributed optimally between the hospitals and the market so that the attack rate is minimized and enough revenue is generated to recover the cost of the antivirals. Results using an individual based model find that prevalence elastic demand behavior delays the epidemic and change in the social contact network induced by isolation reduces the peak of the epidemic significantly. A microeconomic analysis methodology combining behavioral economics and agent-based simulation is a major contribution of this work. In this paper we apply this methodology to analyze the fairness of the stockpile distribution, and the response of human behavior to disease prevalence level and its interaction with the market.
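The qualitative claim that thinning the contact network lowers the epidemic peak can be reproduced with a toy stochastic SIR model; all parameters below are invented for illustration (the paper uses a detailed individual-based model, not this sketch).

```python
import random

random.seed(2)

def sir_peak(contacts_per_day, beta=0.05, gamma=0.1, n=500, days=200):
    """Simple stochastic SIR model; returns the peak number of infectious agents."""
    s, i = n - 5, 5
    peak = i
    for _ in range(days):
        # Probability a susceptible is infected, given random daily contacts
        p_inf = 1 - (1 - beta) ** (contacts_per_day * i / n)
        new_i = sum(random.random() < p_inf for _ in range(s))
        new_r = sum(random.random() < gamma for _ in range(i))
        s, i = s - new_i, i + new_i - new_r
        peak = max(peak, i)
    return peak

peak_normal = sir_peak(contacts_per_day=10)
peak_isolated = sir_peak(contacts_per_day=4)   # isolation thins the contact network
```

Cutting the contact rate lowers the effective reproduction number, which flattens and delays the infectious peak, the same qualitative effect the agent-based study reports.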

  3. Queueing networks a fundamental approach

    CERN Document Server

    Dijk, Nico

    2011-01-01

    This handbook aims to highlight fundamental, methodological and computational aspects of networks of queues to provide insights and to unify results that can be applied in a more general manner. The handbook is organized into five parts: Part 1 considers exact analytical results such as those of product-form type. Topics include characterization of product forms by physical balance concepts and simple traffic flow equations, classes of service and queue disciplines that allow a product form, a unified description of product forms for discrete-time queueing networks, insights for insensitivity, and aggregation and decomposition results that allow subnetworks to be aggregated into single nodes to reduce the computational burden. Part 2 looks at monotonicity and comparison results, such as for computational simplification by either of two approaches: stochastic monotonicity and ordering results based on the ordering of the process generators, and comparison results and explicit error bounds based on an underlying Markov r...
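Product form means the joint queue-length distribution of the network factorizes into per-node M/M/1 terms. A hedged sketch for a two-node tandem (open Jackson) network, with illustrative rates:

```python
def mm1_queue_length_dist(rho, kmax):
    """Stationary P(N = k), k = 0..kmax, for an M/M/1 queue with utilisation rho."""
    return [(1 - rho) * rho ** k for k in range(kmax + 1)]

# Tandem network: external arrivals at rate lam into node 1, then node 2
lam, mu1, mu2 = 2.0, 5.0, 4.0
rho1, rho2 = lam / mu1, lam / mu2

# Product form: joint P(n1, n2) = P1(n1) * P2(n2)
p1 = mm1_queue_length_dist(rho1, 20)
p2 = mm1_queue_length_dist(rho2, 20)
p_joint_00 = p1[0] * p2[0]   # probability both queues are empty

# Mean queue lengths follow the familiar rho / (1 - rho) formula
L1 = rho1 / (1 - rho1)
L2 = rho2 / (1 - rho2)
```

This factorization is what lets large networks be analyzed node by node instead of over the full joint state space.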

  4. A Design Methodology for Computer Security Testing

    OpenAIRE

    Ramilli, Marco

    2013-01-01

    The field of "computer security" is often considered something in between Art and Science. This is partly due to the lack of widely agreed and standardized methodologies to evaluate the degree of the security of a system. This dissertation intends to contribute to this area by investigating the most common security testing strategies applied nowadays and by proposing an enhanced methodology that may be effectively applied to different threat scenarios with the same degree of effectiveness. ...

  5. Accelerating networks

    International Nuclear Information System (INIS)

    Smith, David M D; Onnela, Jukka-Pekka; Johnson, Neil F

    2007-01-01

    Evolving out-of-equilibrium networks have been under intense scrutiny recently. In many real-world settings the number of links added per new node is not constant but depends on the time at which the node is introduced in the system. This simple idea gives rise to the concept of accelerating networks, for which we review an existing definition and-after finding it somewhat constrictive-offer a new definition. The new definition provided here views network acceleration as a time dependent property of a given system as opposed to being a property of the specific algorithm applied to grow the network. The definition also covers both unweighted and weighted networks. As time-stamped network data becomes increasingly available, the proposed measures may be easily applied to such empirical datasets. As a simple case study we apply the concepts to study the evolution of three different instances of Wikipedia, namely, those in English, German, and Japanese, and find that the networks undergo different acceleration regimes in their evolution

  6. Fourth international conference on Networks & Communications

    CERN Document Server

    Meghanathan, Natarajan; Nagamalai, Dhinaharan; Computer Networks & Communications (NetCom)

    2013-01-01

    Computer Networks & Communications (NetCom) is the proceedings from the Fourth International Conference on Networks & Communications. This book covers theory, methodology and applications of computer networks, network protocols and wireless networks, data communication technologies, and network security. The proceedings will feature peer-reviewed papers that illustrate research results, projects, surveys and industrial experiences that describe significant advances in the diverse areas of computer networks & communications.

  7. Identification of Functional Information Subgraphs in Complex Networks

    International Nuclear Information System (INIS)

    Bettencourt, Luis M. A.; Gintautas, Vadas; Ham, Michael I.

    2008-01-01

    We present a general information theoretic approach for identifying functional subgraphs in complex networks. We show that the uncertainty in a variable can be written as a sum of information quantities, where each term is generated by successively conditioning mutual informations on new measured variables in a way analogous to a discrete differential calculus. The analogy to a Taylor series suggests efficient optimization algorithms for determining the state of a target variable in terms of functional groups of other nodes. We apply this methodology to electrophysiological recordings of cortical neuronal networks grown in vitro. Each cell's firing is generally explained by the activity of a few neurons. We identify these neuronal subgraphs in terms of their redundant or synergetic character and reconstruct neuronal circuits that account for the state of target cells
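    The building blocks of the expansion described above are entropies and mutual informations of discretized activity. A minimal pure-Python illustration on hypothetical binary firing sequences (only the first-order term is shown; the paper's successively conditioned higher-order terms are omitted):

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy in bits of a discrete sequence."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """First-order term of the expansion: I(X;Y) = H(X) + H(Y) - H(X,Y)."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Perfectly correlated binary firing patterns share one full bit of
# information; independent ones share none.
```

    Higher-order terms condition these informations on further measured neurons, which is how redundant versus synergetic subgraphs are distinguished.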

  8. Artificial neural network and response surface methodology modeling in mass transfer parameters predictions during osmotic dehydration of Carica papaya L.

    Directory of Open Access Journals (Sweden)

    J. Prakash Maran

    2013-09-01

    Full Text Available In this study, a comparative approach was made between artificial neural network (ANN) and response surface methodology (RSM) to predict the mass transfer parameters of osmotic dehydration of papaya. The effects of process variables such as temperature, osmotic solution concentration and agitation speed on water loss, weight reduction, and solid gain during osmotic dehydration were investigated using a three-level three-factor Box-Behnken experimental design. The same design was utilized to train a feed-forward multilayered perceptron (MLP) ANN with the back-propagation algorithm. The predictive capabilities of the two methodologies were compared in terms of root mean square error (RMSE), mean absolute error (MAE), standard error of prediction (SEP), model predictive error (MPE), chi square statistic (χ2), and coefficient of determination (R2) based on the validation data set. The results showed that a properly trained ANN model was more accurate in prediction than the RSM model.
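    The comparison metrics listed above are standard and easy to reproduce. A minimal pure-Python sketch with hypothetical validation data (note that exact definitions of SEP and the chi-square statistic vary between papers, so the forms below are assumptions):

```python
import math

def prediction_metrics(observed, predicted):
    """Goodness-of-fit metrics commonly used to compare ANN and RSM models."""
    n = len(observed)
    residuals = [o - p for o, p in zip(observed, predicted)]
    mean_obs = sum(observed) / n
    rmse = math.sqrt(sum(r * r for r in residuals) / n)
    mae = sum(abs(r) for r in residuals) / n
    sep = 100.0 * rmse / mean_obs  # standard error of prediction, % of mean (one common form)
    ss_res = sum(r * r for r in residuals)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1.0 - ss_res / ss_tot  # coefficient of determination
    chi2 = sum(r * r / p for r, p in zip(residuals, predicted))  # residual chi-square form
    return {"RMSE": rmse, "MAE": mae, "SEP%": sep, "R2": r2, "chi2": chi2}

# Hypothetical validation set: observed vs. model-predicted water loss values
metrics = prediction_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

    Lower RMSE, MAE and SEP and higher R2 on the held-out validation set favor one model over the other, which is the comparison criterion the abstract describes.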

  9. Blanket safety by GEMSAFE methodology

    International Nuclear Information System (INIS)

    Sawada, Tetsuo; Saito, Masaki

    2001-01-01

    General Methodology of Safety Analysis and Evaluation for Fusion Energy Systems (GEMSAFE) has been applied to a number of fusion system designs, such as R-tokamak, Fusion Experimental Reactor (FER), and the International Thermonuclear Experimental Reactor (ITER) designs in both the Conceptual Design Activities (CDA) and Engineering Design Activities (EDA) stages. Though the major objective of GEMSAFE is to reasonably select design basis events (DBEs), it is also useful to elucidate related safety functions as well as requirements to ensure safety. In this paper, we apply the methodology to fusion systems with future tritium breeding blankets and clarify which points of the system should be of concern from a safety standpoint. In this context, we have obtained five DBEs that are related to the blanket system. We have also clarified the safety functions required to prevent accident propagation initiated by those blanket-specific DBEs. The outline of the methodology is also reviewed. (author)

  10. Network clustering coefficient approach to DNA sequence analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gerhardt, Guenther J.L. [Universidade Federal do Rio Grande do Sul-Hospital de Clinicas de Porto Alegre, Rua Ramiro Barcelos 2350/sala 2040/90035-003 Porto Alegre (Brazil); Departamento de Fisica e Quimica da Universidade de Caxias do Sul, Rua Francisco Getulio Vargas 1130, 95001-970 Caxias do Sul (Brazil); Lemke, Ney [Programa Interdisciplinar em Computacao Aplicada, Unisinos, Av. Unisinos, 950, 93022-000 Sao Leopoldo, RS (Brazil); Corso, Gilberto [Departamento de Biofisica e Farmacologia, Centro de Biociencias, Universidade Federal do Rio Grande do Norte, Campus Universitario, 59072 970 Natal, RN (Brazil)]. E-mail: corso@dfte.ufrn.br

    2006-05-15

    In this work we propose an alternative DNA sequence analysis tool based on graph theoretical concepts. The methodology investigates the path topology of an organism genome through a triplet network. In this network, triplets in the DNA sequence are vertices and two vertices are connected if they occur juxtaposed on the genome. We characterize this network topology by measuring the clustering coefficient. We test our methodology against two main biases: the guanine-cytosine (GC) content and the 3-bp (base pair) periodicity of DNA sequences. We perform the test by constructing random networks with variable GC content and imposed 3-bp periodicity. A test group of organisms is constructed and we investigate the methodology in the light of the constructed random networks. We conclude that the clustering coefficient is a valuable tool, since it gives information that is not trivially contained in either the 3-bp periodicity or the variable GC content.
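    The triplet-network construction is concrete enough to sketch in pure Python. The choice of a fixed, non-overlapping reading frame below is an assumption; the abstract does not specify how triplets are read:

```python
from collections import defaultdict

def triplet_network(seq):
    """Vertices are 3-bp triplets read along the sequence; two vertices are
    linked when the corresponding triplets occur juxtaposed on the genome."""
    adj = defaultdict(set)
    triplets = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    for a, b in zip(triplets, triplets[1:]):
        if a != b:  # skip self-loops, which do not affect clustering
            adj[a].add(b)
            adj[b].add(a)
    return adj

def average_clustering(adj):
    """Mean local clustering coefficient of an undirected graph."""
    coeffs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        # count links among the neighbours of v (each pair once)
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs) if coeffs else 0.0
```

    Comparing this coefficient for a real genome against random networks with matched GC content and 3-bp periodicity is then the test the authors describe.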

  11. Service Innovation Methodologies II : How can new product development methodologies be applied to service innovation and new service development? : Report no 2 from the TIPVIS-project

    OpenAIRE

    Nysveen, Herbjørn; Pedersen, Per E.; Aas, Tor Helge

    2007-01-01

    This report presents various methodologies used in new product development and product innovation and discusses the relevance of these methodologies for service development and service innovation. The service innovation relevance for all of the methodologies presented is evaluated along several service specific dimensions, like intangibility, inseparability, heterogeneity, perishability, information intensity, and co-creation. The methodologies discussed are mainly collect...

  12. Hazard tolerance of spatially distributed complex networks

    International Nuclear Information System (INIS)

    Dunn, Sarah; Wilkinson, Sean

    2017-01-01

    In this paper, we present a new methodology for quantifying the reliability of complex systems, using techniques from network graph theory. In recent years, network theory has been applied to many areas of research and has allowed us to gain insight into the behaviour of real systems that would otherwise be difficult or impossible to analyse, for example increasingly complex infrastructure systems. Although this work has made great advances in understanding complex systems, the vast majority of these studies only consider a system's topological reliability and largely ignore its spatial component. It has been shown that the omission of this spatial component can have potentially devastating consequences. In this paper, we propose a number of algorithms for generating a range of synthetic spatial networks with different topological and spatial characteristics and identify real-world networks that share the same characteristics. We assess the influence of nodal location and the spatial distribution of highly connected nodes on hazard tolerance by comparing our generic networks to benchmark networks. We discuss the relevance of these findings for real-world networks and show that the combination of topological and spatial configurations renders many real-world networks vulnerable to certain spatial hazards. - Highlights: • We develop a method for quantifying the reliability of real-world systems. • We assess the spatial resilience of synthetic spatially distributed networks. • We form algorithms to generate spatial scale-free and exponential networks. • We show how these “synthetic” networks are proxies for real world systems. • Conclude that many real world systems are vulnerable to spatially coherent hazard.

  13. Methodology applied by IRSN for nuclear accident cost estimations in France

    International Nuclear Information System (INIS)

    2013-01-01

    This report describes the methodology used by IRSN to estimate the cost of potential nuclear accidents in France. It concerns possible accidents involving pressurized water reactors leading to radioactive releases in the environment. These accidents have been grouped in two accident families called: severe accidents and major accidents. Two model scenarios have been selected to represent each of these families. The report discusses the general methodology of nuclear accident cost estimation. The crucial point is that all cost should be considered: if not, the cost is underestimated which can lead to negative consequences for the value attributed to safety and for crisis preparation. As a result, the overall cost comprises many components: the most well-known is offsite radiological costs, but there are many others. The proposed estimates have thus required using a diversity of methods which are described in this report. Figures are presented at the end of this report. Among other things, they show that purely radiological costs only represent a non-dominant part of foreseeable economic consequences

  14. Network workshop

    DEFF Research Database (Denmark)

    Bruun, Jesper; Evans, Robert Harry

    2014-01-01

    This paper describes the background for, realisation of and author reflections on a network workshop held at ESERA2013. As a new research area in science education, networks offer a unique opportunity to visualise and find patterns and relationships in complicated social or academic network data. These include student relations and interactions and epistemic and linguistic networks of words, concepts and actions. Network methodology has already found use in science education research. However, while networks hold the potential for new insights, they have not yet found wide use in the science education research community. With this workshop, participants were offered a way into network science based on authentic educational research data. The workshop was constructed as an inquiry lesson with emphasis on user autonomy. Learning activities had participants choose to work with one of two cases of networks...

  15. Statistical network analysis for analyzing policy networks

    DEFF Research Database (Denmark)

    Robins, Garry; Lewis, Jenny; Wang, Peng

    2012-01-01

    To analyze social network data using standard statistical approaches is to risk incorrect inference. The dependencies among observations implied in a network conceptualization undermine standard assumptions of the usual general linear models. One of the most quickly expanding areas of social and policy network methodology is the development of statistical modeling approaches that can accommodate such dependent data. In this article, we review three network statistical methods commonly used in the current literature: quadratic assignment procedures, exponential random graph models (ERGMs), and stochastic actor-oriented models. We focus most attention on ERGMs by providing an illustrative example of a model for a strategic information network within a local government. We draw inferences about the structural role played by individuals recognized as key innovators and conclude that such an approach...

  16. Analysis of the interaction between experimental and applied behavior analysis.

    Science.gov (United States)

    Virues-Ortega, Javier; Hurtado-Parrado, Camilo; Cox, Alison D; Pear, Joseph J

    2014-01-01

    To study the influences between basic and applied research in behavior analysis, we analyzed the coauthorship interactions of authors who published in JABA and JEAB from 1980 to 2010. We paid particular attention to authors who published in both JABA and JEAB (dual authors) as potential agents of cross-field interactions. We present a comprehensive analysis of dual authors' coauthorship interactions using social networks methodology and key word analysis. The number of dual authors more than doubled (26 to 67) and their productivity tripled (7% to 26% of JABA and JEAB articles) between 1980 and 2010. Dual authors stood out in terms of number of collaborators, number of publications, and ability to interact with multiple groups within the field. The steady increase in JEAB and JABA interactions through coauthors and the increasing range of topics covered by dual authors provide a basis for optimism regarding the progressive integration of basic and applied behavior analysis. © Society for the Experimental Analysis of Behavior.

  17. A flow-based methodology for the calculation of TSO to TSO compensations for cross-border flows

    International Nuclear Information System (INIS)

    Glavitsch, H.; Andersson, G.; Lekane, Th.; Marien, A.; Mees, E.; Naef, U.

    2004-01-01

    In the context of the development of the European internal electricity market, several methods for the tarification of cross-border flows have been proposed. This paper presents a flow-based method for the calculation of TSO to TSO compensations for cross-border flows. The basic principle of this approach is the allocation of the costs of cross-border flows to the TSOs who are responsible for these flows. This method is cost reflective, non-transaction based and compatible with domestic tariffs. It can be applied when limited data are available. Each internal transmission network is then modelled as an aggregated node, called 'supernode', and the European network is synthesized by a graph of supernodes and arcs, each arc representing all cross-border lines between two adjacent countries. When detailed data are available, the proposed methodology is also applicable to all the nodes and lines of the transmission network. Costs associated with flows transiting through supernodes or network elements are forwarded through the network in a way reflecting how the flows make use of the network. The costs can be charged either towards loads and exports or towards generations and imports. Combination of the two charging directions can also be considered. (author)

  18. Applying demand side management using a generalised grid supportive approach

    NARCIS (Netherlands)

    Blaauwbroek, N.; Nguyen, H.P.; Slootweg, J.G.

    2017-01-01

    Demand side management is often seen as a promising tool for distribution network operators to mitigate network operation limit violations. Many demand side management applications have been proposed, each with their own objectives and methodology. Quite often, these demand side management

  19. Fifth International Conference on Networks & Communications

    CERN Document Server

    Nagamalai, Dhinaharan; Rajasekaran, Sanguthevar

    2014-01-01

    This book covers theory, methodology and applications of computer networks, network protocols and wireless networks, data communication technologies, and network security. The book is based on the proceedings from the Fifth International Conference on Networks & Communications (NetCom). The proceedings will feature peer-reviewed papers that illustrate research results, projects, surveys and industrial experiences that describe significant advances in the diverse areas of computer networks & communications.

  20. Methodology for risk assessment and reliability applied for pipeline engineering design and industrial valves operation

    Energy Technology Data Exchange (ETDEWEB)

    Silveira, Dierci [Universidade Federal Fluminense (UFF), Volta Redonda, RJ (Brazil). Escola de Engenharia Industrial e Metalurgia. Lab. de Sistemas de Producao e Petroleo e Gas], e-mail: dsilveira@metal.eeimvr.uff.br; Batista, Fabiano [CICERO, Rio das Ostras, RJ (Brazil)

    2009-07-01

    Two kinds of situations may be distinguished for estimating operating reliability when maneuvering industrial valves and the probability of undesired events in pipelines and industrial plants: situations in which the risk is identified in repetitive cycles of operations, and situations in which there is a permanent hazard due to project configurations introduced by decisions during the engineering design definition stage. The estimation of reliability based on the influence of design options requires the choice of a numerical index, which may include a composite of human operating parameters based on biomechanics and ergonomics data. We first consider the design conditions under which plant or pipeline operator reliability concepts can be applied when operating industrial valves, and then describe in detail the ergonomics and biomechanics risks that would lend themselves to engineering design database development and human reliability modeling and assessment. This engineering design database development and reliability modeling is based on a group of engineering design and biomechanics parameters likely to lead to over-exertion forces and working postures, which are themselves associated with the functioning of a particular plant or pipeline. An ergonomics- and biomechanics-based approach to common industrial valve positioning in the plant layout is proposed through the development of a methodology to assess physical efforts and operator reach, combining various elementary operating situations. These procedures can be combined with genetic algorithm modeling and the four elements of man-machine systems: the individual, the task, the machinery and the environment. The proposed methodology should be viewed not as competing with traditional reliability and risk assessment but rather as complementary, since it provides parameters related to physical effort values for valve operation and for workspace design and usability. (author)

  1. Actor/Actant-Network Theory as Emerging Methodology for ...

    African Journals Online (AJOL)

    4carolinebell@gmail.com

    2005-01-31

    Jan 31, 2005 ... to trace relationships, actors, actants and actor/actant-networks ... associated with a particular type of social theory (Latour, 1987; ...) ... the Department of Environmental Affairs and Tourism, Organised Business and Organised ...

  2. Modelling biochemical networks with intrinsic time delays: a hybrid semi-parametric approach

    Directory of Open Access Journals (Sweden)

    Oliveira Rui

    2010-09-01

    Full Text Available Abstract Background This paper presents a method for modelling dynamical biochemical networks with intrinsic time delays. Since the fundamental mechanisms leading to such delays are often unknown, non-conventional modelling approaches become necessary. Herein, a hybrid semi-parametric identification methodology is proposed in which discrete time series are incorporated into fundamental material balance models. This integration results in hybrid delay differential equations which can be applied to identify unknown cellular dynamics. Results The proposed hybrid modelling methodology was evaluated using two case studies. The first of these deals with dynamic modelling of transcription factor A in mammalian cells. The protein transport from the cytosol to the nucleus introduced a delay that was accounted for by a discrete time series formulation. The second case study focused on a simple network with distributed time delays and demonstrated that the discrete time delay formalism has broad applicability to both discrete and distributed delay problems. Conclusions Significantly better prediction quality was obtained with the novel hybrid model compared to dynamical structures without time delays, and the advantage was more distinctive the more significant the underlying system delay. The proposed structure enabled identification of the system delays through studies of different discrete modelling delays. Further, it was shown that the hybrid discrete delay methodology is not limited to discrete delay systems. The proposed method is a powerful tool for identifying time delays in ill-defined biochemical networks.

  3. Artificial Neural Network applied to lightning flashes

    Science.gov (United States)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behaviour of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in the C language with the OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: a brightness algorithm and a shape algorithm. These algorithms detect both the shape and the brightness of the event, removing irrelevant events such as birds, as well as detecting the relevant event's exact position, allowing the system to track it over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images and calculates its number of discharges. The neural network was implemented using the backpropagation algorithm and was trained with 42 training images containing 57 lightning events (one image can contain more than one lightning flash). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and the events' numbers of discharges were correctly computed. The neural network used in this project achieved a

  4. Randomizing growing networks with a time-respecting null model

    Science.gov (United States)

    Ren, Zhuo-Ming; Mariani, Manuel Sebastian; Zhang, Yi-Cheng; Medo, Matúš

    2018-05-01

    Complex networks are often used to represent systems that are not static but grow with time: People make new friendships, new papers are published and refer to the existing ones, and so forth. To assess the statistical significance of measurements made on such networks, we propose a randomization methodology—a time-respecting null model—that preserves both the network's degree sequence and the time evolution of individual nodes' degree values. By preserving the temporal linking patterns of the analyzed system, the proposed model is able to factor out the effect of the system's temporal patterns on its structure. We apply the model to the citation network of Physical Review scholarly papers and the citation network of US movies. The model reveals that the two data sets are strikingly different with respect to their degree-degree correlations, and we discuss the important implications of this finding on the information provided by paradigmatic node centrality metrics such as indegree and Google's PageRank. The randomization methodology proposed here can be used to assess the significance of any structural property in growing networks, which could bring new insights into the problems where null models play a critical role, such as the detection of communities and network motifs.
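    The core idea can be sketched in a simplified form (not the authors' exact algorithm, which also preserves the full degree sequence): each edge keeps its creation time and its source, but its target is redrawn among nodes that already existed at that time, so randomized links never point forward in time. The `(time, source, target)` edge format and `arrival` map below are illustrative assumptions:

```python
import random

def time_respecting_randomization(edges, arrival, seed=0):
    """Shuffle a growing network while respecting time.

    edges   -- list of (t, u, v): at time t, node u linked to node v
    arrival -- dict mapping each node to the time it entered the network

    Each edge keeps its creation time t and its source u; the target is
    redrawn uniformly among nodes present before t, so no link points
    'forward in time'.
    """
    rng = random.Random(seed)
    randomized = []
    for t, u, v in sorted(edges):
        candidates = [n for n, t0 in arrival.items() if t0 < t and n != u]
        randomized.append((t, u, rng.choice(candidates)))
    return randomized
```

    Measurements on the real network can then be compared against an ensemble of such randomized copies to assess statistical significance.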

  5. Comparison analysis on vulnerability of metro networks based on complex network

    Science.gov (United States)

    Zhang, Jianhua; Wang, Shuliang; Wang, Xiaoyuan

    2018-04-01

    This paper analyzes the networked characteristics of three metro networks, and two malicious attacks are employed to investigate the vulnerability of metro networks based on connectivity vulnerability and functionality vulnerability. Meanwhile, the networked characteristics and vulnerability of the three metro networks are compared with each other. The results show that the Shanghai metro network has the largest transport capacity, the Beijing metro network has the best local connectivity and the Guangzhou metro network has the best global connectivity; moreover, the Beijing metro network has the most homogeneous degree distribution. Furthermore, we find that metro networks are very vulnerable when subjected to malicious attacks, and the Guangzhou metro network has the best topological structure and reliability among the three metro networks. The results indicate that the proposed methodology is feasible and effective to investigate the vulnerability and to explore better topological structures of metro networks.
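    A common way to operationalize a "malicious attack" in such studies is to remove the highest-degree stations one by one and track the size of the largest connected component. A self-contained sketch on a toy adjacency list (the paper's exact connectivity and functionality measures may differ):

```python
from collections import deque

def giant_component_fraction(adj, removed):
    """Size of the largest connected component, as a fraction of the
    original number of nodes, after deleting the nodes in `removed`."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        size, queue = 0, deque([s])  # breadth-first search from s
        seen.add(s)
        while queue:
            v = queue.popleft()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best / len(adj)

def degree_attack(adj, steps):
    """Repeatedly remove the currently highest-degree node and record
    how the giant component shrinks."""
    removed = set()
    curve = [giant_component_fraction(adj, removed)]
    for _ in range(steps):
        target = max((n for n in adj if n not in removed),
                     key=lambda n: sum(1 for w in adj[n] if w not in removed))
        removed.add(target)
        curve.append(giant_component_fraction(adj, removed))
    return curve
```

    A network whose curve collapses after only a few removals, as a star-shaped toy network does, is topologically vulnerable to targeted attacks.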

  6. ANP applied to electronics engineering project selection

    International Nuclear Information System (INIS)

    Habib, M.

    2010-01-01

    Project selection in electronics engineering is a complex decision-making process. This research paper illustrates an application of the ANP/AHP process. The AHP (Analytic Hierarchy Process) is employed to break down large unstructured decision problems into manageable and measurable components. The ANP, as the general form of the AHP, is well suited to complex decisions where interdependence exists in a decision model. The paper discusses the use of the ANP, Saaty's Analytic Network Process, as a model to evaluate the value of competing electronics projects. The paper concludes with a case study describing the implementation of this model at an engineering college, including data based on the actual use of the decision-making model. The case study helps to verify that the AHP is an effective and efficient decision-making tool. A major contribution of this work is to provide a methodology for assessing the best project. Despite a number of publications applying the AHP in project selection, this is probably the first time that an attempt has been made to apply the AHP to electronics project selection in an engineering university environment. (author)

  7. From a Real Deployment to a Downscaled Testbed : A Methodological Approach

    NARCIS (Netherlands)

    Stajkic, Andrea; Abrignani, Melchiorre Danilo; Buratti, Chiara; Bettinelli, Andrea; Vigo, Daniele; Verdone, Roberto

    2016-01-01

    This paper proposes a novel methodology for the spatial downscaling of real-world deployments of wireless networks, running protocols, and/or applications for the Internet of Things (IoT). These networks are often deployed in environments not easily accessible and highly unpredictable, where doing

  8. A robust methodology for modal parameters estimation applied to SHM

    Science.gov (United States)

    Cardoso, Rharã; Cury, Alexandre; Barbosa, Flávio

    2017-10-01

    The subject of structural health monitoring has been drawing more and more attention over the last years. Many vibration-based techniques aiming at detecting small structural changes or even damage have been developed or enhanced through successive researches. Lately, several studies have focused on the use of raw dynamic data to assess information about structural condition. Despite this trend and much skepticism, many methods still rely on the use of modal parameters as fundamental data for damage detection. Therefore, it is of utmost importance that modal identification procedures are performed with a sufficient level of precision and automation. To fulfill these requirements, this paper presents a novel automated time-domain methodology to identify modal parameters based on a two-step clustering analysis. The first step consists in clustering mode estimates from parametric models of different orders, usually presented in stabilization diagrams. In an automated manner, the first clustering analysis indicates which estimates correspond to physical modes. To circumvent the detection of spurious modes or the loss of physical ones, a second clustering step is then performed. The second step consists in the data mining of information gathered from the first step. To attest to the robustness and efficiency of the proposed methodology, numerically generated signals as well as experimental data obtained from a simply supported beam tested in laboratory and from a railway bridge are utilized. The results appeared to be more robust and accurate compared with those obtained from methods based on one-step clustering analysis.

  9. Heat exchanger networks design with constraints

    International Nuclear Information System (INIS)

    Amidpur, M.; Zoghi, A.; Nasiri, N.

    2000-01-01

    So far there have been two approaches to the problem of heat recovery system design where stream matching constraints exist. The first approach involves mathematical techniques for solving the combinatorial problem while taking due recognition of the constraints. Although these methodologies are now efficient, they still suffer from the problem of taking a significant amount of control and direction away from the designer. The second approach is based upon so-called pinch technology and involves the use of an adaptation of the standard problem table algorithm. Unfortunately, the proposed methodologies are not very easy to understand, and therefore fail to provide the insight generally associated with these approaches. Here, a new pinch-based methodology is presented. In this method, we modified the traditional numerical targeting procedure (the problem table algorithm), which is the stream cascade table. Unconstrained groups are established using artificial intelligence methods such that they have minimum utility consumption among different alternatives. Each group is an individual network; therefore, the traditional optimization used in pinch technology should be employed. By transferring energy between groups, heat recovery can be maximized; each group is then designed individually, and finally the networks are combined together. One advantage of this method is simple targeting and easy network design. Moreover, the approach has the potential to use new network design methods such as the dual temperature approach, flexible pinch design and pseudo-pinch design. It is hoped that this methodology provides insight into easy network design

  10. Intentional risk management through complex networks analysis

    CERN Document Server

    Chapela, Victor; Moral, Santiago; Romance, Miguel

    2015-01-01

    This book combines game theory and complex networks to examine intentional technological risk through modeling. As information security risks are in constant evolution, the methodologies and tools to manage them must evolve with an ever-changing environment. A formal global methodology is explained in this book, which is able to analyze risks in cyber security based on complex network models and ideas extracted from the Nash equilibrium. A risk management methodology for IT critical infrastructures is introduced which provides guidance and analysis on decision-making models and real situations. This model manages the risk of succumbing to a digital attack and assesses an attack from the following three variables: income obtained, expense needed to carry out an attack, and the potential consequences of an attack. Graduate students and researchers interested in cyber security, complex network applications and intentional risk will find this book useful as it is filled with a number of models, methodologies a...

  11. Applying Lean-Six-Sigma Methodology in radiotherapy: Lessons learned by the breast daily repositioning case.

    Science.gov (United States)

    Mancosu, Pietro; Nicolini, Giorgia; Goretti, Giulia; De Rose, Fiorenza; Franceschini, Davide; Ferrari, Chiara; Reggiori, Giacomo; Tomatis, Stefano; Scorsetti, Marta

    2018-03-06

    Lean Six Sigma Methodology (LSSM) was introduced in industry to provide near-perfect services for large processes by reducing the occurrence of defects. Here, LSSM was applied to redesign the 2D-2D breast repositioning process (Lean) through retrospective analysis of the database (Six Sigma). Breast patients with daily 2D-2D matching before RT were considered. The five DMAIC (define, measure, analyze, improve, and control) LSSM steps were applied. The process was retrospectively measured over 30 months (7/2014-12/2016) by querying the RT Record&Verify database. Two Lean instruments (Poka-Yoke and Visual Management) were used to improve the process. The new procedure was checked over 6 months (1-6/2017). In total, 14,931 consecutive shifts from 1342 patients were analyzed. Only 0.8% of patients presented median shifts >1 cm. The major observed discrepancy was the monthly percentage of fractions with almost zero shifts (AZS = 13.2% ± 6.1%). An Ishikawa fishbone diagram helped to define the main contributing causes of this discrepancy. A harmonized procedure involving a multidisciplinary team was defined to increase confidence in the matching procedure. AZS was reduced to 4.8% ± 0.6%. Furthermore, improved distribution symmetry (skewness moved from 1.4 to 1.1) and outlier reduction, verified by a diminution in kurtosis, demonstrated a better "normalization" of the procedure after the LSSM application. LSSM was implemented in an RT department, allowing the breast repositioning matching procedure to be redesigned. Copyright © 2018 Elsevier B.V. All rights reserved.
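
    The skewness and kurtosis checks mentioned above reduce to standard moment formulas, sketched here for a plain list of shift values (the data in the test are made up, not the study's):

```python
import math

def skewness(xs):
    """Third standardized moment: ~0 for a symmetric distribution."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum((x - m) ** 3 for x in xs) / (n * s ** 3)

def kurtosis(xs):
    """Excess kurtosis (fourth standardized moment minus 3):
    positive values indicate heavier-than-normal tails, i.e. outliers."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 4 for x in xs) / (n * s2 ** 2) - 3.0
```

    Tracking these two statistics before and after a process change is a quick way to verify the kind of "normalization" the study reports.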

  12. Methodology for Validating Building Energy Analysis Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, R.; Wortman, D.; O' Doherty, B.; Burch, J.

    2008-04-01

    The objective of this report was to develop a validation methodology for building energy analysis simulations, collect high-quality, unambiguous empirical data for validation, and apply the validation methodology to the DOE-2.1, BLAST-2MRT, BLAST-3.0, DEROB-3, DEROB-4, and SUNCAT 2.4 computer programs. This report covers background information, literature survey, validation methodology, comparative studies, analytical verification, empirical validation, comparative evaluation of codes, and conclusions.

  13. A multi-methodological MR resting state network analysis to assess the changes in brain physiology of children with ADHD.

    Directory of Open Access Journals (Sweden)

    Benito de Celis Alonso

    Full Text Available The purpose of this work was to highlight the neurological differences between the MR resting state networks of a group of children with ADHD (pre-treatment and an age-matched healthy group. Results were obtained using different image analysis techniques. A sample of n = 46 children with ages between 6 and 12 years were included in this study (23 per cohort. Resting state image analysis was performed using ReHo, ALFF and ICA techniques. ReHo and ICA represent connectivity analyses calculated with different mathematical approaches, while ALFF represents an indirect measurement of brain activity. The ReHo and ICA analyses suggested differences between the two groups, while the ALFF analysis did not. The ReHo and ALFF analyses presented differences with respect to the results previously reported in the literature. ICA analysis showed that the same resting state networks that appear in healthy adult volunteers were obtained for both groups. In contrast, these networks were not identical when comparing the healthy and ADHD groups; the differences affected areas of all the networks except the Right Memory Function network. All techniques employed in this study were used to monitor the different cerebral regions that participate in the phenomenological characterization of ADHD patients compared to healthy controls. Results from our three analyses indicated that the cerebellum and mid-frontal lobe bilaterally for ReHo, the executive function regions for ICA, and the precuneus, cuneus and calcarine fissure for ALFF were the "hubs" in which the main inter-group differences were found. These results not only help to explain the physiology underlying the disorder but also open the door to future uses of these methodologies to monitor and evaluate patients with ADHD.

  14. Design and analysis of heat exchanger networks for integrated Ca-looping systems

    International Nuclear Information System (INIS)

    Lara, Yolanda; Lisbona, Pilar; Martínez, Ana; Romeo, Luis M.

    2013-01-01

    Highlights: • Heat integration is essential to minimize energy penalties in calcium looping cycles. • A design and analysis of four heat exchanger networks is presented. • The new design has higher power, lower costs and lower destroyed exergy than the base case. - Abstract: One of the main challenges of carbon capture and storage technologies is the energy penalty associated with CO2 separation and compression processes. Heat integration thus plays an essential role in improving these systems' efficiencies. CO2 capture systems based on the Ca-looping process present a great potential for residual heat integration with a new supercritical power plant. The pinch methodology is applied in this study to define the minimum energy requirements of the process and to design four configurations for the required heat exchanger network. The Second Law of Thermodynamics is a powerful tool for reducing the energy demand, since identifying the exergy losses of the system serves to locate inefficiencies. In parallel, an economic analysis is required to assess the cost reduction achieved by each configuration. This work presents a combination of the pinch methodology with economic and exergetic analyses to select the most appropriate heat exchanger network configuration. The lower costs and smaller destroyed exergy obtained for the best proposed network result in a global energy efficiency increase of 0.91%

  15. Methodology for the economic optimisation of energy storage systems for frequency support in wind power plants

    International Nuclear Information System (INIS)

    Johnston, Lewis; Díaz-González, Francisco; Gomis-Bellmunt, Oriol; Corchero-García, Cristina; Cruz-Zambrano, Miguel

    2015-01-01

    Highlights: • Optimisation of an energy storage system with a wind power plant for frequency response. • The energy storage option considered could be economically viable. • For a 50 MW wind farm, an energy storage system of 5.3 MW and 3 MW h was found. - Abstract: This paper proposes a methodology for the economic optimisation of the sizing of Energy Storage Systems (ESSs) whilst enhancing the participation of Wind Power Plants (WPPs) in network primary frequency control support. The methodology was designed flexibly, so that it can be applied to different energy markets and can include different ESS technologies. It comprises the formulation and solution of a Linear Programming (LP) problem. The methodology was applied to the particular case of a 50 MW WPP equipped with a Vanadium Redox Flow battery (VRB) in the UK energy market. Analysis is performed considering real data on the UK regular energy market and the UK frequency response market. Data for wind power generation and energy storage costs are estimated from the literature. Results suggest that, under certain assumptions, ESSs can be profitable for the operator of a WPP that is providing frequency response. The ESS provides power reserves such that the WPP can generate close to the maximum energy available. The solution of the optimisation problem establishes that an ESS with a power rating of 5.3 MW and an energy capacity of about 3 MW h would be enough to provide such a service whilst maximising the income for the WPP operator, considering the regular and frequency regulation UK markets
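
    The paper solves a Linear Programming problem; as a much simpler illustration of the sizing trade-off, the sketch below brute-forces candidate power/energy ratings under a toy revenue model with invented prices (none of these figures come from the study):

```python
def size_ess(power_options_mw, energy_options_mwh,
             revenue_per_mw=50.0, cost_per_mw=30.0, cost_per_mwh=20.0,
             min_duration_h=0.5):
    """Pick the (power, energy) rating maximising net benefit.

    Toy model: frequency-response revenue scales with the power rating,
    costs scale with both ratings, and the energy capacity must sustain
    the committed power for at least `min_duration_h`.  All prices are
    illustrative, not market data.
    """
    best, best_net = None, float("-inf")
    for p in power_options_mw:
        for e in energy_options_mwh:
            if e < p * min_duration_h:
                continue  # cannot sustain the committed reserve
            net = revenue_per_mw * p - cost_per_mw * p - cost_per_mwh * e
            if net > best_net:
                best, best_net = (p, e), net
    return best, best_net
```

    Even this crude search shows the structure of the result: revenue pushes the power rating up while energy cost pushes the capacity down to the minimum duration constraint, which is why the optimal ESS in the paper has a small energy-to-power ratio.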

  16. A methodology for treating missing data applied to daily rainfall data in the Candelaro River Basin (Italy).

    Science.gov (United States)

    Lo Presti, Rossella; Barca, Emanuele; Passarella, Giuseppe

    2010-01-01

    Environmental time series are often affected by the presence of missing data, but when treating data statistically, the need to fill in the gaps by estimating the missing values must be considered. At present, a large number of statistical techniques are available to achieve this objective, ranging from very simple methods, such as using the sample mean, to very sophisticated ones, such as multiple imputation. A new methodology for missing data estimation is proposed, which tries to merge the obvious advantages of the simplest techniques (e.g. their ease of implementation) with the strength of the newest ones. The proposed method consists of two consecutive stages: once it has been ascertained that a specific monitoring station is affected by missing data, the "most similar" monitoring stations are identified among neighbouring stations on the basis of a suitable similarity coefficient; in the second stage, a regressive method is applied to estimate the missing data. In this paper, four different regressive methods are applied and compared in order to determine which is the most reliable for filling in the gaps, using rainfall data series measured in the Candelaro River Basin in southern Italy.
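
    The two stages can be sketched with the Pearson coefficient as the similarity measure and ordinary least squares as the regressive method (one plausible choice among the several the paper compares); the series below are synthetic:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def fill_gaps(target, neighbours):
    """Fill None entries in `target` using the most correlated neighbour.

    neighbours: dict name -> complete series of the same length.
    Stage 1: choose the neighbour with the highest Pearson correlation on
    the indices where `target` is observed.  Stage 2: fit y = a + b*x by
    least squares on those indices and predict the missing entries.
    """
    obs = [i for i, v in enumerate(target) if v is not None]
    best = max(neighbours, key=lambda k:
               pearson([neighbours[k][i] for i in obs], [target[i] for i in obs]))
    xs = [neighbours[best][i] for i in obs]
    ys = [target[i] for i in obs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return [v if v is not None else a + b * neighbours[best][i]
            for i, v in enumerate(target)]
```

    A production version would also guard against constant series (zero variance) and ties in the similarity ranking, which this sketch omits.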

  17. Research on energy stock market associated network structure based on financial indicators

    Science.gov (United States)

    Xi, Xian; An, Haizhong

    2018-01-01

    A financial market is a complex system consisting of many interacting units. In general, due to the various types of information exchange within the industry, there are relationships between stocks that reveal clear structural characteristics. Complex network methods are powerful tools for studying the internal structure and function of the stock market, allowing us to understand it better. Applying complex network methodology, a stock association network model based on financial indicators is created. We then set a threshold value and use modularity to detect communities in the network, and we analyze the network structure and community cluster characteristics for different threshold values. The study finds that a threshold value of 0.7 is the abrupt change point of the network. At the same time, as the threshold value increases, the independence of the communities strengthens. This study provides a method for researching the stock market based on financial indicators, exploring the structural similarity of stocks' financial indicators. It also provides guidance for investment and corporate financial management.
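
    Constructing the thresholded association network can be sketched as follows; connected-component extraction is used here as a simple stand-in for the modularity-based community detection in the paper, and the similarity scores in the test are invented:

```python
def threshold_network(labels, sim, threshold):
    """Return the connected components of the graph whose edges join
    stocks with pairwise similarity >= threshold.

    sim: symmetric dict-of-dicts of similarity scores in [0, 1].
    As the threshold rises, edges drop out and the components
    (communities, in the simplest sense) become more independent.
    """
    adj = {a: {b for b in labels if b != a and sim[a][b] >= threshold}
           for a in labels}
    seen, components = set(), []
    for start in labels:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:  # depth-first walk over one component
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        components.append(comp)
    return components
```

    Sweeping the threshold and watching the component structure change is a cheap way to locate the kind of abrupt change point the study reports at 0.7.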

  18. Parallel replica dynamics method for bistable stochastic reaction networks: Simulation and sensitivity analysis

    Science.gov (United States)

    Wang, Ting; Plecháč, Petr

    2017-12-01

    Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
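
    The underlying bistable dynamics can be reproduced with a plain Gillespie simulation of the Schlögl model (the parallel replica acceleration itself is not shown); the rate constants below are commonly used illustrative values, not necessarily those of the paper:

```python
import random

def schlogl_ssa(x0, t_end, k1=3e-7, k2=1e-4, k3=1e-3, k4=3.5,
                a_pool=1e5, b_pool=2e5, seed=0):
    """Gillespie (SSA) simulation of the Schlogl model.

    Reactions:  A + 2X -> 3X,  3X -> A + 2X,  B -> X,  X -> B,
    with the A and B pools held constant.  Returns (times, counts).
    """
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        props = [k1 * a_pool * x * (x - 1) / 2,      # A + 2X -> 3X
                 k2 * x * (x - 1) * (x - 2) / 6,     # 3X -> A + 2X
                 k3 * b_pool,                        # B -> X
                 k4 * x]                             # X -> B
        total = sum(props)
        if total == 0:
            break
        t += rng.expovariate(total)          # time to next reaction
        r, acc, i = rng.random() * total, 0.0, 0
        while acc + props[i] < r:            # pick which reaction fires
            acc += props[i]
            i += 1
        x += (1, -1, 1, -1)[i]
        times.append(t)
        counts.append(x)
    return times, counts
```

    Long unaccelerated runs of this kind dwell in one metastable basin and only rarely cross to the other, which is exactly the sampling problem the parallel replica method addresses.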

  20. Theory and design of broadband matching networks applied electricity and electronics

    CERN Document Server

    Chen, Wai-Kai

    1976-01-01

    Theory and Design of Broadband Matching Networks centers on the network theory and its applications to the design of broadband matching networks and amplifiers. Organized into five chapters, this book begins with a description of the foundation of network theory. Chapter 2 gives a fairly complete exposition of the scattering matrix associated with an n-port network. Chapter 3 considers the approximation problem along with a discussion of the approximating functions. Chapter 4 explains the Youla's theory of broadband matching by illustrating every phase of the theory with fully worked out examp

  1. Paraconsistents artificial neural networks applied to the study of mutational patterns of the F subtype of the viral strains of HIV-1 to antiretroviral therapy

    Directory of Open Access Journals (Sweden)

    PAULO C.C. DOS SANTOS

    2016-03-01

    Full Text Available ABSTRACT The high variability of HIV-1, as well as the lack of efficient repair mechanisms during the stages of viral replication, contributes to the rapid emergence of HIV-1 strains resistant to antiretroviral drugs. The selective pressure exerted by the drug leads to fixation of mutations capable of imparting varying degrees of resistance. The presence of these mutations is one of the most important factors in the failure of therapeutic response to medications. Thus, it is critical to understand the resistance patterns and the mechanisms associated with them, allowing the choice of an appropriate therapeutic scheme that considers the frequency and other characteristics of the mutations. Utilizing Paraconsistent Artificial Neural Networks, based on Paraconsistent Annotated Logic Et, which has the capability of measuring uncertainties and inconsistencies, we achieved levels of agreement above 90% when comparing the proposed methodology with the methodology currently used to classify HIV-1 subtypes. The results demonstrate that Paraconsistent Artificial Neural Networks can serve as a promising analysis tool.

  2. Trapped modes in linear quantum stochastic networks with delays

    Energy Technology Data Exchange (ETDEWEB)

    Tabak, Gil [Stanford University, Department of Applied Physics, Stanford, CA (United States); Mabuchi, Hideo

    2016-12-15

    Networks of open quantum systems with feedback have become an active area of research for applications such as quantum control, quantum communication and coherent information processing. A canonical formalism for the interconnection of open quantum systems using quantum stochastic differential equations (QSDEs) has been developed by Gough, James and co-workers and has been used to develop practical modeling approaches for complex quantum optical, microwave and optomechanical circuits/networks. In this paper we fill a significant gap in existing methodology by showing how trapped modes resulting from feedback via coupled channels with finite propagation delays can be identified systematically in a given passive linear network. Our method is based on the Blaschke-Potapov multiplicative factorization theorem for inner matrix-valued functions, which has been applied in the past to analog electronic networks. Our results provide a basis for extending the Quantum Hardware Description Language (QHDL) framework for automated quantum network model construction (Tezak et al. in Philos. Trans. R. Soc. A, Math. Phys. Eng. Sci. 370(1979):5270-5290, 2012) to efficiently treat scenarios in which each interconnection of components has an associated signal propagation time delay. (orig.)

  3. Understanding information exchange during disaster response: Methodological insights from infocentric analysis

    Science.gov (United States)

    Toddi A. Steelman; Branda Nowell; Deena. Bayoumi; Sarah. McCaffrey

    2014-01-01

    We leverage economic theory, network theory, and social network analytical techniques to bring greater conceptual and methodological rigor to understand how information is exchanged during disasters. We ask, "How can information relationships be evaluated more systematically during a disaster response?" "Infocentric analysis"—a term and...

  4. A methodology to incorporate organizational factors into human reliability analysis

    International Nuclear Information System (INIS)

    Li Pengcheng; Chen Guohua; Zhang Li; Xiao Dongsheng

    2010-01-01

    A new holistic methodology for Human Reliability Analysis (HRA) is proposed to model the effects of organizational factors on human reliability. First, a conceptual framework is built, which is used to analyze the causal relationships between organizational factors and human reliability. Then, an inference model for HRA is built by combining the conceptual framework with Bayesian networks, which is used to execute causal and diagnostic inference on human reliability. Finally, a case example is presented to demonstrate the application of the proposed methodology. The results show that combining the conceptual model with Bayesian networks not only makes it easy to model the causal relationship between organizational factors and human reliability but also, in a given context, allows human operational reliability to be measured quantitatively and the most likely root causes of human error to be identified and prioritized. (authors)
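
    The diagnostic inference step can be illustrated on a minimal two-node network (organizational quality influencing human error); all probabilities here are invented for illustration, not taken from the case example:

```python
def posterior_org_given_error(p_org_good=0.7,
                              p_err_given_good=0.01,
                              p_err_given_poor=0.10):
    """Diagnostic inference by Bayes' rule on a two-node network:
    P(organization poor | human error observed)."""
    p_err = (p_org_good * p_err_given_good
             + (1 - p_org_good) * p_err_given_poor)
    return (1 - p_org_good) * p_err_given_poor / p_err
```

    With these numbers, observing an error raises the probability of a poor organizational context from 30% to about 81%, which is the kind of root-cause reweighting the full Bayesian-network model performs across many factors at once.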

  5. Gender roles in social network sites from generation Y

    Directory of Open Access Journals (Sweden)

    F. Javier Rondan-Cataluña

    2017-12-01

    Full Text Available One of the fundamental and most commonly used communication tools of Generation Y, or Millennials, is the online social network. The first objective of this study is to model the effects that social participation, community integration and trust exert on community satisfaction, as an antecedent of routinization. As a second objective, we check whether gender roles underlie the different behaviors developed by social network users. An empirical study was carried out on a sample of 1,448 undergraduate students who are SNS users from Generation Y. First, we applied a structural equation modeling approach to test the proposed model. Second, we used a masculinity/femininity scale to categorize the sample into three groups: feminine, masculine, and androgynous.

  6. Cloud Computing and Internet of Things Concepts Applied on Buildings Data Analysis

    Directory of Open Access Journals (Sweden)

    Hebean Florin-Adrian

    2017-12-01

    Full Text Available Used and developed initially for the IT industry, the Cloud computing and Internet of Things concepts are now found in many sectors of activity, the building industry being one of them. They can be defined as a global computing, monitoring and analysis network, composed of hardware and software resources, with the ability to allocate and dynamically relocate shared resources in accordance with user requirements. Data analysis and process optimization techniques based on these new concepts are used increasingly often in the building industry, especially for the optimal operation of building installations and for increasing occupant comfort. The multitude of building data taken from HVAC sensors, from automation and control systems and from the other systems connected to the network is optimally managed by these new analysis techniques. Such techniques can identify and manage the issues that arise in the operation of building installations, such as critical alarms, non-functional equipment, and issues regarding occupant comfort, for example upper and lower temperature deviations from the set point, as well as issues related to equipment maintenance. In this study, a new approach to building control is presented, and a generalized methodology for applying data analysis to building services data is described. This methodology is then demonstrated using two case studies.
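
    One of the analyses mentioned, flagging upper and lower temperature deviations from the set point, reduces to a simple rule; the deviation band and the readings in the test are illustrative, not drawn from the case studies:

```python
def deviation_alarms(readings, set_point, band=1.5):
    """Flag (timestamp, value) readings outside set_point +/- band,
    labelling each alarm as a high or low deviation."""
    return [(ts, v, "high" if v > set_point + band else "low")
            for ts, v in readings
            if abs(v - set_point) > band]
```

    In a real deployment, this rule would run over streams from the HVAC sensors, with the band and set point taken from the building's comfort specification.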

  7. Using Elman recurrent neural networks with conjugate gradient algorithm in determining the amount of anesthetic medicine to be applied.

    Science.gov (United States)

    Güntürkün, Rüştü

    2010-08-01

    In this study, Elman recurrent neural networks trained with the conjugate gradient algorithm have been used to determine the depth of anesthesia during the continuation stage of anesthesia and to estimate the amount of anesthetic medicine to be applied at that moment. Feed-forward neural networks are also used for comparison, and the conjugate gradient algorithm is compared with back propagation (BP) for training the networks. The applied artificial neural network is composed of three layers, namely the input layer, the hidden layer and the output layer. The nonlinear sigmoid activation function has been used in the hidden layer and the output layer. EEG data were recorded with a Nihon Kohden 9200 brand 22-channel EEG device. The international 8-channel bipolar 10-20 montage system (8 TB-b system) was used in assembling the recording electrodes, and the EEG data were sampled once every 2 milliseconds. The artificial neural network was designed with 60 neurons in the input layer, 30 neurons in the hidden layer and 1 neuron in the output layer. The inputs were derived from the power spectral density (PSD) values of 10-second EEG segments in the 1-50 Hz frequency range, together with the ratio of the total PSD power of the EEG segment at that moment, in the same range, to the total PSD of an EEG segment taken prior to anesthesia.
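
    What distinguishes an Elman network from the feed-forward comparison networks is the context layer that feeds the previous hidden state back as an extra input. A minimal forward pass, with random weights and dimensions far smaller than the 60-30-1 network described, can be sketched as:

```python
import math
import random

class ElmanCell:
    """Minimal Elman recurrent layer: h_t = tanh(Wx x_t + Wh h_{t-1} + b).

    Weights are random and untrained; the study trains them with the
    conjugate gradient algorithm, which is not reproduced here.
    """

    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        mat = lambda r, c: [[rng.uniform(-0.5, 0.5) for _ in range(c)]
                            for _ in range(r)]
        self.wx, self.wh = mat(n_hidden, n_in), mat(n_hidden, n_hidden)
        self.b = [0.0] * n_hidden
        self.h = [0.0] * n_hidden  # context units, initially zero

    def step(self, x):
        # The comprehension reads the *old* self.h throughout, then the
        # new state replaces it: exactly the Elman context-copy behaviour.
        self.h = [math.tanh(sum(wx_i[j] * x[j] for j in range(len(x)))
                            + sum(wh_i[j] * self.h[j] for j in range(len(self.h)))
                            + b_i)
                  for wx_i, wh_i, b_i in zip(self.wx, self.wh, self.b)]
        return self.h
```

    Feeding the same input twice produces different hidden states, because the second step also sees the context from the first; this memory of recent EEG history is what the recurrent architecture contributes.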

  8. Content-specific network analysis of peer-to-peer communication in an online community for smoking cessation.

    Science.gov (United States)

    Myneni, Sahiti; Cobb, Nathan K; Cohen, Trevor

    2016-01-01

    Analysis of user interactions in online communities could improve our understanding of health-related behaviors and inform the design of technological solutions that support behavior change. However, to achieve this we would need methods that provide granular perspective, yet are scalable. In this paper, we present a methodology for high-throughput semantic and network analysis of large social media datasets, combining semi-automated text categorization with social network analytics. We apply this method to derive content-specific network visualizations of 16,492 user interactions in an online community for smoking cessation. Performance of the categorization system was reasonable (average F-measure of 0.74, with system-rater reliability approaching rater-rater reliability). The resulting semantically specific network analysis of user interactions reveals content- and behavior-specific network topologies. Implications for socio-behavioral health and wellness platforms are also discussed.

  9. Sensitivity analysis for contagion effects in social networks

    Science.gov (United States)

    VanderWeele, Tyler J.

    2014-01-01

    Analyses of social network data have suggested that obesity, smoking, happiness and loneliness all travel through social networks. Individuals exert "contagion effects" on one another through social ties and association. These analyses have come under critique because of the possibility that homophily from unmeasured factors may explain these statistical associations, and because similar findings can be obtained when the same methodology is applied to height, acne and headaches, for which the conclusion of contagion effects seems somewhat less plausible. We use sensitivity analysis techniques to assess the extent to which supposed contagion effects for obesity, smoking, happiness and loneliness might be explained away by homophily or confounding, and the extent to which the critique using analysis of data on height, acne and headaches is relevant. Sensitivity analyses suggest that contagion effects for obesity and smoking cessation are reasonably robust to possible latent homophily or environmental confounding; those for happiness and loneliness are somewhat less so. Supposed effects for height, acne and headaches are all easily explained away by latent homophily and confounding. The methodology that has been employed in past studies for contagion effects in social networks, when used in conjunction with sensitivity analysis, may prove useful in establishing social influence for various behaviors and states. The sensitivity analysis approach can be used to address the critique of latent homophily as a possible explanation of associations interpreted as contagion effects. PMID:25580037
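
    One simple quantity from this sensitivity-analysis literature is the E-value: the minimum strength of association, on the risk-ratio scale, that latent homophily or unmeasured confounding would need with both exposure and outcome to fully explain away an observed risk ratio. A sketch:

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio:
    E = RR + sqrt(RR * (RR - 1)), with RR inverted if protective."""
    if rr < 1:
        rr = 1 / rr  # the measure is symmetric for protective associations
    return rr + math.sqrt(rr * (rr - 1))
```

    An observed risk ratio of 2 has an E-value of about 3.41: only a confounder (or homophily mechanism) associated with both exposure and outcome by risk ratios of at least 3.41 each could reduce the observed association to the null, which is how "reasonably robust" versus "easily explained away" can be made quantitative.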

  10. Monitoring nuclear reactor systems using neural networks and fuzzy logic

    International Nuclear Information System (INIS)

    Ikonomopoulos, A.; Tsoukalas, L.H.; Uhrig, R.E.; Mullens, J.A.

    1991-01-01

    A new approach is presented that demonstrates the potential of trained artificial neural networks (ANNs) as generators of membership functions for the purpose of monitoring nuclear reactor systems. ANNs provide a complex-to-simple mapping of reactor parameters in a process analogous to that of measurement. Through such "virtual measurements" the value of parameters with operational significance, e.g., control valve disk position, valve line-up or performance, can be determined. In the methodology presented, the output of a virtual measuring device is a set of membership functions which independently represent different states of the system. Utilizing a fuzzy logic representation offers the advantage of describing the state of the system in a condensed form, developed through linguistic descriptions, convenient for application in monitoring, diagnostics and control algorithms generally. The developed methodology is applied to the problem of measuring the disk position of the secondary flow control valve of an experimental reactor using data obtained during a start-up. The enhanced noise tolerance of the methodology is clearly demonstrated, as well as a method for selecting the actual output. The results suggest that it is possible to construct virtual measuring devices through artificial neural networks, mapping dynamic time series to a set of membership functions, and thus enhance the capability of monitoring systems. 8 refs., 11 figs., 1 tab

  11. A modified eco-efficiency framework and methodology for advancing the state of practice of sustainability analysis as applied to green infrastructure.

    Science.gov (United States)

    Ghimire, Santosh R; Johnston, John M

    2017-09-01

    We propose a modified eco-efficiency (EE) framework and novel sustainability analysis methodology for green infrastructure (GI) practices used in water resource management. Green infrastructure practices such as rainwater harvesting (RWH), rain gardens, porous pavements, and green roofs are emerging as viable strategies for climate change adaptation. The modified framework includes 4 economic, 11 environmental, and 3 social indicators. Using 6 indicators from the framework, at least 1 from each dimension of sustainability, we demonstrate the methodology to analyze RWH designs. We use life cycle assessment and life cycle cost assessment to calculate the sustainability indicators of 20 design configurations as Decision Management Objectives (DMOs). Five DMOs emerged as relatively more sustainable along the EE analysis Tradeoff Line, and we used Data Envelopment Analysis (DEA), a widely applied statistical approach, to quantify the modified EE measures as DMO sustainability scores. We also addressed the subjectivity and sensitivity analysis requirements of sustainability analysis, and we evaluated the performance of 10 weighting schemes that included classical DEA, equal weights, National Institute of Standards and Technology's stakeholder panel, Eco-Indicator 99, Sustainable Society Foundation's Sustainable Society Index, and 5 derived schemes. We improved upon classical DEA by applying the weighting schemes to identify sustainability scores that ranged from 0.18 to 1.0, avoiding the nonuniqueness problem and revealing the least to most sustainable DMOs. Our methodology provides a more comprehensive view of water resource management and is generally applicable to GI and industrial, environmental, and engineered systems to explore the sustainability space of alternative design configurations. Integr Environ Assess Manag 2017;13:821-831. Published 2017. This article is a US Government work and is in the public domain in the USA. 

  12. Applying a rateless code in content delivery networks

    Science.gov (United States)

    Suherman; Zarlis, Muhammad; Parulian Sitorus, Sahat; Al-Akaidi, Marwan

    2017-09-01

    A content delivery network (CDN) allows internet providers to locate their services and to map their coverage onto networks without necessarily owning them. CDNs are part of the current internet infrastructure, supporting multi-server applications, especially social media. Various works have proposed improvements to CDN performance. Since accesses to social media servers tend to be short but frequent, adding redundancy to the transmitted packets, so that lost packets do not degrade information integrity, may improve service performance. This paper examines the implementation of a rateless code in the CDN infrastructure. The NS-2 evaluations show that the rateless code is able to reduce packet loss by up to 50%.
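
    A rateless (fountain) code can be illustrated with a toy LT-style encoder and peeling decoder over integer blocks; the uniform degree distribution used here is a simplification of the robust soliton distribution used in practice, and the paper's NS-2 setup is not reproduced:

```python
import random

def lt_encode(blocks, n_packets, seed=0):
    """Produce n_packets XOR combinations of the source blocks (ints).

    Each packet carries the set of source indices it combines, as a
    real LT packet header would.  Being rateless, the encoder can emit
    as many packets as the channel loss rate demands.
    """
    rng = random.Random(seed)
    packets = []
    for _ in range(n_packets):
        degree = rng.randint(1, len(blocks))
        idxs = frozenset(rng.sample(range(len(blocks)), degree))
        value = 0
        for i in idxs:
            value ^= blocks[i]
        packets.append((idxs, value))
    return packets

def lt_decode(packets, n_blocks):
    """Peeling decoder: repeatedly resolve packets that reduce to a
    single unknown block, substituting recovered blocks into the rest.
    Returns the recovered blocks (None where decoding failed)."""
    packets = [(set(i), v) for i, v in packets]
    known, progress = {}, True
    while progress and len(known) < n_blocks:
        progress = False
        for idxs, v in packets:
            unknown = idxs - known.keys()
            if len(unknown) != 1:
                continue
            acc = v
            for i in idxs & known.keys():
                acc ^= known[i]  # remove already-recovered blocks
            (i,) = unknown
            if i not in known:
                known[i] = acc
                progress = True
    return [known.get(i) for i in range(n_blocks)]
```

    The receiver only needs *some* sufficiently large subset of packets, in any order, which is what makes rateless codes attractive for lossy CDN paths: missing packets are compensated by later ones rather than retransmitted.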

  13. Services for People Innovation Park – Planning Methodologies

    OpenAIRE

    Maria Angela Campelo de Melo; Lygia Magalhães Magacho

    2013-01-01

    This article aims to identify appropriate methodologies for the planning of a Services for People Innovation Park-SPIP, designed according to the model proposed by the Ibero-American Network launched by La Salle University of Madrid. Projected to form a network, these parks were conceived to provoke social change in their region, improving quality of life and social welfare, through knowledge, technology and innovation transfer and creation of companies focused on developing product and servi...

  14. The Methodology Applied in DPPH, ABTS and Folin-Ciocalteu Assays Has a Large Influence on the Determined Antioxidant Potential.

    Science.gov (United States)

    Abramovič, Helena; Grobin, Blaž; Poklar, Nataša; Cigić, Blaž

    2017-06-01

    Antioxidant potential (AOP) is not only the property of the matrix analyzed but also depends greatly on the methodology used. The chromogenic radicals 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS•+), 2,2-diphenyl-1-picrylhydrazyl (DPPH•) and Folin-Ciocalteu (FC) assay were applied to estimate how the method and the composition of the assay solvent influence the AOP determined for coffee, tea, beer, apple juice and dietary supplements. Large differences between the AOP values depending on the reaction medium were observed, with the highest AOP determined mostly in the FC assay. In reactions with chromogenic radicals several fold higher values of AOP were obtained in buffer pH 7.4 than in water or methanol. The type of assay and solvent composition have similar influences on the reactivity of a particular antioxidant, either pure or as part of a complex matrix. The reaction kinetics of radicals with antioxidants in samples reveals that AOP depends strongly on incubation time, yet differently for each sample analyzed and the assay applied.

  15. Multi nodal load forecasting in electric power systems using a radial basis neural network; Previsao de carga multinodal em sistemas eletricos de potencia usando uma rede neural de base radial

    Energy Technology Data Exchange (ETDEWEB)

    Altran, A.B.; Lotufo, A.D.P.; Minussi, C.R. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Ilha Solteira, SP (Brazil). Dept. de Engenharia Eletrica], Emails: lealtran@yahoo.com.br, annadiva@dee.feis.unesp.br, minussi@dee.feis.unesp.br; Lopes, M.L.M. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Ilha Solteira, SP (Brazil). Dept. de Matematica], E-mail: mara@mat.feis.unesp.br

    2009-07-01

    This paper presents a methodology for electrical load forecasting using radial basis functions as the activation function in artificial neural networks trained by the backpropagation algorithm. The methodology is applied to short-term electrical load forecasting (24 h ahead). Results are presented analyzing the use of radial basis functions substituting for the sigmoid activation function in multilayer perceptron neural networks. However, the main contribution of this paper is the proposal of a new formulation of load forecasting dedicated to forecasting at several points of the electrical network, as well as considering several types of users (residential, commercial, industrial). It deals with MLF (Multinodal Load Forecasting), with the same processing time as GLF (Global Load Forecasting). (author)
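
    A minimal sketch of a radial-basis-function network for load fitting follows. Unlike the paper, it fixes the centres at data points and solves the linear output layer by least squares rather than training by backpropagation, and the hourly load series is synthetic:

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian radial-basis activations: one column per centre."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Toy setting: fit hourly load from (hour of day, previous-hour load)
rng = np.random.default_rng(0)
hours = np.arange(24, dtype=float)
load = 50.0 + 30.0 * np.sin((hours - 6.0) * np.pi / 12.0) + rng.normal(0, 1, 24)
X = np.column_stack([hours / 23.0, np.roll(load, 1) / 100.0])
y = load

centers = X[::3]                                 # every 3rd sample as a centre
Phi = np.column_stack([rbf_design(X, centers, width=0.3), np.ones(len(X))])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # linear output layer
pred = Phi @ w
print("sum of squared fit errors:", float(((pred - y) ** 2).sum()))
```

    In a multinodal setting, one such model (or one output unit) would be fitted per bus of the network, each with its own load history.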

  16. Applying neural networks to control the TFTR neutral beam ion sources

    International Nuclear Information System (INIS)

    Lagin, L.

    1992-01-01

    This paper describes the application of neural networks to the control of the neutral beam long-pulse positive ion source accelerators on the Tokamak Fusion Test Reactor (TFTR) at Princeton University. Neural networks were used to learn how the operators adjust the control setpoints when running these sources. The data sets used to train these networks were derived from a large database containing actual setpoints and power supply waveform calculations for the 1990 run period. The networks learned what the optimum initial control setpoints should be, based upon desired accel voltage and perveance levels. Neural networks were also used to predict the divergence of the ion beam

  17. 'Intelligent' triggering methodology for improved detectability of wavelength modulation diode laser absorption spectrometry applied to window-equipped graphite furnaces

    International Nuclear Information System (INIS)

    Gustafsson, Joergen; Axner, Ove

    2003-01-01

    The wavelength modulation-diode laser absorption spectrometry (WM-DLAS) technique experiences a limited detectability when window-equipped sample compartments are used because of multiple reflections between components in the optical system (so-called etalon effects). The problem is particularly severe when the technique is used with a window-equipped graphite furnace (GF) as atomizer, since the heating of the furnace induces drifts in the thickness of the windows and thereby also in the background signals. This paper presents a new detection methodology for WM-DLAS applied to a window-equipped GF in which the influence of the background signals from the windows is significantly reduced. The new technique, which is based upon the finding that the WM-DLAS background signals from a window-equipped GF are reproducible over a considerable period of time, consists of a novel 'intelligent' triggering procedure in which the GF is triggered at a user-chosen 'position' in the reproducible drift-cycle of the WM-DLAS background signal. The new methodology also makes use of 'higher-than-normal' detection harmonics, i.e. 4f or 6f, since these have previously been shown to have a higher signal-to-background ratio than 2f-detection when the background signals originate from thin etalons. The results show that this new combined background-drift-reducing methodology improves the limit of detection of the WM-DLAS technique used with a window-equipped GF by several orders of magnitude as compared to ordinary 2f-detection, resulting in a limit of detection for a window-equipped GF that is similar to that of an open GF

  18. Fast frequency hopping codes applied to SAC optical CDMA network

    Science.gov (United States)

    Tseng, Shin-Pin

    2015-06-01

    This study designed a fast frequency hopping (FFH) code family suitable for application in spectral-amplitude-coding (SAC) optical code-division multiple-access (CDMA) networks. The FFH code family can effectively suppress the effects of multiuser interference and had its origin in the frequency hopping code family. Additional codes were developed as secure codewords for enhancing the security of the network. In considering the system cost and flexibility, simple optical encoders/decoders using fiber Bragg gratings (FBGs) and a set of optical securers using two arrayed-waveguide grating (AWG) demultiplexers (DeMUXs) were also constructed. Based on a Gaussian approximation, expressions for evaluating the bit error rate (BER) and spectral efficiency (SE) of SAC optical CDMA networks are presented. The results indicated that the proposed SAC optical CDMA network exhibited favorable performance.

  19. Normalization of similarity-based individual brain networks from gray matter MRI and its association with neurodevelopment in infants with intrauterine growth restriction.

    Science.gov (United States)

    Batalle, Dafnis; Muñoz-Moreno, Emma; Figueras, Francesc; Bargallo, Nuria; Eixarch, Elisenda; Gratacos, Eduard

    2013-12-01

    Obtaining individual biomarkers for the prediction of altered neurological outcome is a challenge of modern medicine and neuroscience. Connectomics based on magnetic resonance imaging (MRI) stands as a good candidate to exhaustively extract information from MRI by integrating the information obtained in a few network features that can be used as individual biomarkers of neurological outcome. However, this approach typically requires the use of diffusion and/or functional MRI to extract individual brain networks, which require high acquisition times and present an extreme sensitivity to motion artifacts, critical problems when scanning fetuses and infants. Extraction of individual networks based on morphological similarity from gray matter is a new approach that benefits from the power of graph theory analysis to describe gray matter morphology as a large-scale morphological network from a typical clinical anatomic acquisition such as T1-weighted MRI. In the present paper we propose a methodology to normalize these large-scale morphological networks to a brain network with standardized size based on a parcellation scheme. The proposed methodology was applied to reconstruct individual brain networks of 63 one-year-old infants, 41 infants with intrauterine growth restriction (IUGR) and 22 controls, showing altered network features in the IUGR group, and their association with neurodevelopmental outcome at two years of age by means of ordinal regression analysis of the network features obtained with Bayley Scale for Infant and Toddler Development, third edition. Although it must be more widely assessed, this methodology stands as a good candidate for the development of biomarkers for altered neurodevelopment in the pediatric population. © 2013 Elsevier Inc. All rights reserved.

  20. The use of hierarchical clustering for the design of optimized monitoring networks

    Science.gov (United States)

    Soares, Joana; Makar, Paul Andrew; Aklilu, Yayne; Akingunola, Ayodeji

    2018-05-01

    Associativity analysis is a powerful tool to deal with large-scale datasets by clustering the data on the basis of (dis)similarity and can be used to assess the efficacy and design of air quality monitoring networks. We describe here our use of Kolmogorov-Zurbenko filtering and hierarchical clustering of NO2 and SO2 passive and continuous monitoring data to analyse and optimize air quality networks for these species in the province of Alberta, Canada. The methodology applied in this study assesses dissimilarity between monitoring station time series based on two metrics: 1 - R, R being the Pearson correlation coefficient, and the Euclidean distance; we find that both should be used in evaluating monitoring site similarity. We have combined the analytic power of hierarchical clustering with the spatial information provided by deterministic air quality model results, using the gridded time series of model output as potential station locations, as a proxy for assessing monitoring network design and for network optimization. We demonstrate that clustering results depend on the air contaminant analysed, reflecting the difference in the respective emission sources of SO2 and NO2 in the region under study. Our work shows that much of the signal identifying the sources of NO2 and SO2 emissions resides in shorter timescales (hourly to daily) due to short-term variation of concentrations and that longer-term averages in data collection may lose the information needed to identify local sources. However, the methodology identifies stations mainly influenced by seasonality, if larger timescales (weekly to monthly) are considered. We have performed the first dissimilarity analysis based on gridded air quality model output and have shown that the methodology is capable of generating maps of subregions within which a single station will represent the entire subregion, to a given level of dissimilarity. We have also shown that our approach is capable of identifying different
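
    The clustering step can be sketched as follows, using synthetic station time series and a plain average-linkage implementation of the 1 - R dissimilarity (the study's Kolmogorov-Zurbenko filtering and Euclidean-distance metric are omitted here):

```python
import numpy as np

def correlation_dissimilarity(series):
    """1 - R dissimilarity matrix between station time series (rows)."""
    return 1.0 - np.corrcoef(series)

def agglomerate(D, n_clusters):
    """Plain average-linkage agglomerative clustering on a dissimilarity matrix."""
    clusters = [[i] for i in range(len(D))]
    while len(clusters) > n_clusters:
        best = (None, None, np.inf)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.mean([D[i, j] for i in clusters[a] for j in clusters[b]])
                if d < best[2]:
                    best = (a, b, d)
        a, b, _ = best
        clusters[a] = clusters[a] + clusters[b]   # merge the closest pair
        del clusters[b]
    return clusters

# Hypothetical hourly NO2 series for 6 stations under two emission regimes
rng = np.random.default_rng(1)
t = np.arange(240)
urban = np.sin(2 * np.pi * t / 24)            # traffic-driven diurnal cycle
industrial = np.sin(2 * np.pi * t / 168)      # weekly industrial cycle
series = np.vstack([urban + 0.2 * rng.normal(size=240) for _ in range(3)] +
                   [industrial + 0.2 * rng.normal(size=240) for _ in range(3)])
groups = agglomerate(correlation_dissimilarity(series), n_clusters=2)
print(sorted(sorted(g) for g in groups))
```

    Stations whose time series share the same short-timescale signal cluster together, which is the basis for judging which monitors are redundant within a subregion.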

  1. Networks as systems.

    Science.gov (United States)

    Best, Allan; Berland, Alex; Greenhalgh, Trisha; Bourgeault, Ivy L; Saul, Jessie E; Barker, Brittany

    2018-03-19

    Purpose The purpose of this paper is to present a case study of the World Health Organization's Global Healthcare Workforce Alliance (GHWA). Based on a commissioned evaluation of GHWA, it applies network theory and key concepts from systems thinking to explore network emergence, effectiveness, and evolution over a ten-year period. The research was designed to provide high-level strategic guidance for further evolution of global governance in human resources for health (HRH). Design/methodology/approach Methods included a review of published literature on HRH governance and current practice in the field and an in-depth case study whose main data sources were relevant GHWA background documents and key informant interviews with GHWA leaders, staff, and stakeholders. Sampling was purposive and at a senior level, focusing on board members, executive directors, funders, and academics. Data were analyzed thematically with reference to systems theory and Shiffman's theory of network development. Findings Five key lessons emerged: effective management and leadership are critical; networks need to balance "tight" and "loose" approaches to their structure and processes; an active communication strategy is key to create and maintain support; the goals, priorities, and membership must be carefully focused; and the network needs to support shared measurement of progress on agreed-upon goals. Shiffman's middle-range network theory is a useful tool when guided by the principles of complex systems that illuminate dynamic situations and shifting interests as global alliances evolve. Research limitations/implications This study was implemented at the end of the ten-year funding cycle. A more continuous evaluation throughout the term would have provided richer understanding of issues. Experience and perspectives at the country level were not assessed. Practical implications Design and management of large, complex networks requires ongoing attention to key issues like leadership

  2. Methodology for ranking restoration options

    DEFF Research Database (Denmark)

    Jensen, Per Hedemann

    1999-01-01

    techniques as a function of contamination and site characteristics. The project includes analyses of existing remediation methodologies and contaminated sites, and is structured in the following steps: characterisation of relevant contaminated sites; identification and characterisation of relevant restoration... techniques; assessment of the radiological impact; development and application of a selection methodology for restoration options; formulation of generic conclusions and development of a manual. The project is intended to apply to situations in which sites with nuclear installations have been contaminated...

  3. Methodology for Designing Operational Banking Risks Monitoring System

    Science.gov (United States)

    Kostjunina, T. N.

    2018-05-01

    The research looks at principles of designing an information system for monitoring operational banking risks. A proposed design methodology enables one to automate processes of collecting data on information security incidents in the banking network, serving as the basis for an integrated approach to the creation of an operational risk management system. The system can operate remotely ensuring tracking and forecasting of various operational events in the bank network. A structure of a content management system is described.

  4. Evaluation and testing methodology for evolving entertainment systems

    NARCIS (Netherlands)

    Jurgelionis, A.; Bellotti, F.; IJsselsteijn, W.A.; Kort, de Y.A.W.; Bernhaupt, R.; Tscheligi, M.

    2007-01-01

    This paper presents a testing and evaluation methodology for evolving pervasive gaming and multimedia systems. We introduce the Games@Large system, a complex gaming and multimedia architecture comprised of a multitude of elements: heterogeneous end user devices, wireless and wired network

  5. Optimal experiment design in a filtering context with application to sampled network data

    OpenAIRE

    Singhal, Harsh; Michailidis, George

    2010-01-01

    We examine the problem of optimal design in the context of filtering multiple random walks. Specifically, we define the steady state E-optimal design criterion and show that the underlying optimization problem leads to a second order cone program. The developed methodology is applied to tracking network flow volumes using sampled data, where the design variable corresponds to controlling the sampling rate. The optimal design is numerically compared to a myopic and a naive strategy. Finally, w...

  6. Monitoring nuclear reactor systems using neural networks and fuzzy logic

    International Nuclear Information System (INIS)

    Ikonomopoulos, A.; Tsoukalas, L.H.; Uhrig, R.E.; Mullens, J.A.

    1992-01-01

    A new approach is presented that demonstrates the potential of trained artificial neural networks (ANNs) as generators of membership functions for the purpose of monitoring nuclear reactor systems. ANNs provide a complex-to-simple mapping of reactor parameters in a process analogous to that of measurement. Through such virtual measurements the value of parameters with operational significance, e.g., control valve disk position, valve line-up, or performance, can be determined. In the methodology presented, the output of the virtual measuring device is a set of membership functions which independently represent different states of the system. Utilizing a fuzzy logic representation offers the advantage of describing the state of the system in a condensed form, developed through linguistic descriptions and convenient for application in monitoring, diagnostics and generally control algorithms. The developed methodology is applied to the problem of measuring the disk position of the secondary flow control valve; its feasibility is clearly demonstrated, as well as a method for selecting the actual output. The results suggest that it is possible to construct virtual measuring devices through artificial neural networks mapping dynamic time series to a set of membership functions and thus enhance the capability of monitoring systems

  7. Application opportunities of agile methodology in service company management

    OpenAIRE

    Barauskienė, Diana

    2017-01-01

    Application Opportunities of Agile Methodology in Service Company Management. The main purpose of this master thesis is to identify which methods (or their modified versions) of the Agile methodology can be applied in service company management. This master thesis consists of the following parts: scientific literature analysis, the author's research methodology (research methods, the author's research model, essential elements used in the research on the application of the Agile methodology), and the research itself (prelimina...

  8. A DNA-Inspired Encryption Methodology for Secure, Mobile Ad Hoc Networks

    Science.gov (United States)

    Shaw, Harry

    2012-01-01

    Users are pushing for greater physical mobility with their network and Internet access. Mobile ad hoc networks (MANET) can provide an efficient mobile network architecture, but security is a key concern. A figure summarizes differences in the state of network security for MANET and fixed networks. MANETs require the ability to distinguish trusted peers, and tolerate the ingress/egress of nodes on an unscheduled basis. Because the networks by their very nature are mobile and self-organizing, use of a Public Key Infrastructure (PKI), X.509 certificates, RSA, and nonce exchanges becomes problematic if the ideal of MANET is to be achieved. Molecular biology models such as DNA evolution can provide a basis for a proprietary security architecture that achieves high degrees of diffusion and confusion, and resistance to cryptanalysis. A proprietary encryption mechanism was developed that uses the principles of DNA replication and steganography (hidden word cryptography) for confidentiality and authentication. The foundation of the approach includes organization of coded words and messages using base pairs organized into genes, an expandable genome consisting of DNA-based chromosome keys, and DNA-based message encoding, replication, evolution, and fitness. In evolutionary computing, a fitness algorithm determines whether candidate solutions, in this case encrypted messages, are sufficiently encrypted to be transmitted. The technology provides a mechanism for confidential electronic traffic over a MANET without a PKI for authenticating users.

  9. Applying agent-based control to mitigate overvoltage in distribution network

    NARCIS (Netherlands)

    Viyathukattuva Mohamed Ali, M.M.; Nguyen, P.H.; Kling, W.L.

    2014-01-01

    Increasing share of distributed renewable energy sources (DRES) in distribution networks raises new operational and power quality challenges for network operators such as handling overvoltage. Such technical constraint limits further penetration of DRES in low voltage (LV) and medium voltage (MV)

  10. Analysis Planning Methodology: For Thesis, Joint Applied Project, & MBA Research Reports

    OpenAIRE

    Naegle, Brad R.

    2010-01-01

    Acquisition Research Handbook Series Purpose: This guide provides the graduate student researcher—you—with techniques and advice on creating an effective analysis plan, and it provides methods for focusing the data-collection effort based on that analysis plan. As a side benefit, this analysis planning methodology will help you to properly scope the research effort and will provide you with insight for changes in that effort. The information presented herein was supported b...

  11. Information theory perspective on network robustness

    International Nuclear Information System (INIS)

    Schieber, Tiago A.; Carpi, Laura; Frery, Alejandro C.; Rosso, Osvaldo A.; Pardalos, Panos M.; Ravetti, Martín G.

    2016-01-01

    A crucial challenge in network theory is the study of the robustness of a network when facing a sequence of failures. In this work, we propose a dynamical definition of network robustness based on Information Theory, which considers measurements of the structural changes caused by failures of the network's components. Failures are defined here as a temporal process given as a sequence. Robustness is then evaluated by measuring dissimilarities between topologies after each time step of the sequence, providing dynamical information about the topological damage. We thoroughly analyze the efficiency of the method in capturing small perturbations by considering different probability distributions on networks. In particular, we find that distributions based on distances are more consistent in capturing network structural deviations, as they better reflect the consequences of the failures. Theoretical examples and real networks are used to study the performance of this methodology. - Highlights: • A novel methodology to measure the robustness of a network to component failure or targeted attacks is proposed. • The use of the network's distance PDF allows a precise analysis. • The method provides a dynamic robustness profile showing the response of the topology to each failure event. • The measure is capable of detecting the network's critical elements.
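
    The idea can be sketched by comparing the shortest-path-distance distribution of a network before and after a failure, scoring the change with the Jensen-Shannon divergence. The paper's exact dissimilarity measure may differ, and the 5-node network below is hypothetical:

```python
import math
from collections import deque

def distance_pdf(adj, max_d):
    """PDF of finite shortest-path distances over all ordered node pairs (BFS)."""
    counts = [0] * (max_d + 1)
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for u, d in dist.items():
            if u != s:
                counts[min(d, max_d)] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0 and y > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def remove_node(adj, n):
    return {u: [v for v in vs if v != n] for u, vs in adj.items() if u != n}

# Hypothetical hub-and-path network: compare a hub failure to a leaf failure
adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4], 4: [0, 3]}
base = distance_pdf(adj, max_d=5)
hub_damage = js_divergence(base, distance_pdf(remove_node(adj, 0), 5))
leaf_damage = js_divergence(base, distance_pdf(remove_node(adj, 4), 5))
print(hub_damage > leaf_damage)   # hub failure changes the topology more
```

    Applying this after each failure in a sequence yields exactly the kind of dynamic robustness profile the abstract describes: larger divergences flag failures of critical elements.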

  12. Topology of Innovation Spaces in the Knowledge Networks Emerging through Questions-And-Answers

    Science.gov (United States)

    Andjelković, Miroslav; Tadić, Bosiljka; Mitrović Dankulov, Marija; Rajković, Milan; Melnik, Roderick

    2016-01-01

    The communication processes of knowledge creation represent a particular class of human dynamics where the expertise of individuals plays a substantial role, thus offering a unique possibility to study the structure of knowledge networks from online data. Here, we use the empirical evidence from questions-and-answers in mathematics to analyse the emergence of the network of knowledge contents (or tags) as the individual experts use them in the process. After removing extra edges from the network-associated graph, we apply the methods of algebraic topology of graphs to examine the structure of higher-order combinatorial spaces in networks for four consecutive time intervals. We find that the ranking distributions of the suitably scaled topological dimensions of nodes fall onto a unique curve for all time intervals and filtering levels, suggesting a robust architecture of knowledge networks. Moreover, these networks preserve the logical structure of knowledge within emergent communities of nodes, labeled according to a standard mathematical classification scheme. Further, we investigate the appearance of new contents over time and their innovative combinations, which expand the knowledge network. In each network, we identify an innovation channel as a subgraph of triangles and larger simplices to which new tags attach. Our results show that the increasing topological complexity of the innovation channels contributes to the network's architecture over different time periods, and is consistent with temporal correlations of the occurrence of new tags. The methodology applies to a wide class of data with suitable temporal resolution and clearly identified knowledge-content units. PMID:27171149

  13. Teaching experience in university students using social networks

    Directory of Open Access Journals (Sweden)

    María del Rocío Carranza Alcántar

    2016-11-01

    Full Text Available Social networks, specifically Facebook and Twitter, are currently among the mainstream media in the world, yet their educational use for the dissemination of knowledge is not significantly evident. Under this premise, this report presents an experience in which teachers and university-level students used these networks as mediators of educational practices; such mediation was implemented in order to promote mobile learning as an option to facilitate the process of construction and socialization of knowledge. In this sense, the research presented aimed to identify the experience and opinion of students regarding the influence of this strategy on the achievement of their learning. A quantitative methodology was applied through a survey of the students who participated, who recognized the importance of the socialization of knowledge. The results showed favorable opinions regarding the use of these networks, highlighting the benefits of mobile learning as a way to streamline the training process. The proposal is to continue this type of strategy to promote flexible teaching-learning options.

  14. Neural network modeling for near wall turbulent flow

    International Nuclear Information System (INIS)

    Milano, Michele; Koumoutsakos, Petros

    2002-01-01

    A neural network methodology is developed in order to reconstruct the near wall field in a turbulent flow by exploiting flow fields provided by direct numerical simulations. The results obtained from the neural network methodology are compared with the results obtained from prediction and reconstruction using proper orthogonal decomposition (POD). Using the property that the POD is equivalent to a specific linear neural network, a nonlinear neural network extension is presented. It is shown that for a relatively small additional computational cost nonlinear neural networks provide us with improved reconstruction and prediction capabilities for the near wall velocity fields. Based on these results advantages and drawbacks of both approaches are discussed with an outlook toward the development of near wall models for turbulence modeling and control
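
    The stated equivalence can be illustrated in a few lines: POD modes are the left singular vectors of the (mean-removed) snapshot matrix, and truncating to r modes acts as a linear autoencoder with a width-r bottleneck. The snapshot data below is a synthetic 1-D stand-in for the near-wall velocity fields:

```python
import numpy as np

# Snapshot matrix: each column is one instantaneous (toy 1-D) velocity field
x = np.linspace(0.0, 2.0 * np.pi, 64)
snapshots = np.column_stack([np.sin(x + 0.1 * t) + 0.05 * np.sin(3.0 * x + t)
                             for t in range(40)])

# POD modes = left singular vectors of the mean-removed snapshot matrix
mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

# Reconstruction with r modes: the linear-network bottleneck of width r
r = 4
recon = mean + U[:, :r] @ (s[:r, None] * Vt[:r, :])
energy = float((s[:r] ** 2).sum() / (s ** 2).sum())
err = float(np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots))
print(f"captured energy {energy:.6f}, relative reconstruction error {err:.2e}")
```

    The nonlinear extension in the paper replaces this linear bottleneck with hidden layers that have nonlinear activations, which is where the improved reconstruction comes from.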

  15. A Methodology to Reduce the Computational Effort in the Evaluation of the Lightning Performance of Distribution Networks

    Directory of Open Access Journals (Sweden)

    Ilaria Bendato

    2016-11-01

    Full Text Available The estimation of the lightning performance of a power distribution network is of great importance to design its protection system against lightning. An accurate evaluation of the number of lightning events that can create dangerous overvoltages requires a huge computational effort, as it implies the adoption of a Monte Carlo procedure. Such a procedure consists of generating many different random lightning events and calculating the corresponding overvoltages. The paper proposes a methodology to deal with the problem in two computationally efficient ways: (i) finding out the minimum number of Monte Carlo runs that lead to reliable results; and (ii) setting up a procedure that bypasses the lightning field-to-line coupling problem for each Monte Carlo run. The proposed approach is shown to provide results consistent with existing approaches while exhibiting superior Central Processing Unit (CPU) time performances.
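
    Point (i), finding the minimum number of Monte Carlo runs, can be sketched as a stopping rule on the confidence-interval half-width of the running estimate. The dangerous-event model below is a hypothetical stand-in for the actual field-to-line coupling computation:

```python
import math
import random

def mc_until_converged(sample, rel_tol=0.02, conf_z=1.96, batch=1000,
                       max_runs=10**6):
    """Draw Monte Carlo samples until the 95% confidence-interval half-width
    falls below rel_tol of the running mean; returns (mean, runs used)."""
    n, total, total_sq = 0, 0.0, 0.0
    while n < max_runs:
        for _ in range(batch):
            v = sample()
            total += v
            total_sq += v * v
        n += batch
        mean = total / n
        var = max(total_sq / n - mean * mean, 0.0)
        half = conf_z * math.sqrt(var / n)
        if mean != 0.0 and half / abs(mean) < rel_tol:
            break
    return mean, n

# Toy stand-in: a random lightning stroke is "dangerous" with probability 0.1
# (the real model would compute the induced overvoltage for each stroke)
rng = random.Random(0)
p_hat, runs = mc_until_converged(lambda: 1.0 if rng.random() < 0.1 else 0.0)
print(f"estimated dangerous-event rate {p_hat:.3f} after {runs} runs")
```

    The run count adapts to the variance of the quantity being estimated, which is exactly why a fixed, worst-case number of runs wastes CPU time.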

  16. Improving Learning Outcome Using Six Sigma Methodology

    Science.gov (United States)

    Tetteh, Godson A.

    2015-01-01

    Purpose: The purpose of this research paper is to apply the Six Sigma methodology to identify the attributes of a lecturer that will help improve a student's prior knowledge of a discipline from an initial "x" per cent knowledge to a higher "y" per cent of knowledge. Design/methodology/approach: The data collection method…

  17. Pressurized water reactor fuel rod design methodology

    International Nuclear Information System (INIS)

    Silva, A.T.; Esteves, A.M.

    1988-08-01

    The fuel performance program FRAPCON-1 and the structural finite element program SAP-IV are applied in a pressurized water reactor fuel rod design methodology. The calculation procedure allows the fuel rod components to be dimensioned and the rod's internal pressure to be characterized. (author) [pt

  18. Information System Design Methodology Based on PERT/CPM Networking and Optimization Techniques.

    Science.gov (United States)

    Bose, Anindya

    The dissertation attempts to demonstrate that the program evaluation and review technique (PERT)/Critical Path Method (CPM) or some modified version thereof can be developed into an information system design methodology. The methodology utilizes PERT/CPM which isolates the basic functional units of a system and sets them in a dynamic time/cost…
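
    The PERT/CPM core, a forward pass for earliest finish times and a backward pass for latest finish times, can be sketched as follows; the activity network for an information-system project is hypothetical:

```python
from collections import defaultdict

def critical_path(activities):
    """activities: {name: (duration, [predecessors])} -> (project time, critical set)."""
    # Forward pass: earliest finish times, in dependency order
    earliest = {}
    remaining = dict(activities)
    while remaining:
        for name, (dur, preds) in list(remaining.items()):
            if all(p in earliest for p in preds):
                earliest[name] = dur + max((earliest[p] for p in preds), default=0.0)
                del remaining[name]
    project = max(earliest.values())
    # Backward pass: latest finish times (successors first)
    succs = defaultdict(list)
    for name, (_, preds) in activities.items():
        for p in preds:
            succs[p].append(name)
    latest = {}
    for name in sorted(earliest, key=earliest.get, reverse=True):
        if not succs[name]:
            latest[name] = project
        else:
            latest[name] = min(latest[s] - activities[s][0] for s in succs[name])
    # Critical activities have zero slack (earliest finish == latest finish)
    critical = {n for n in activities if abs(latest[n] - earliest[n]) < 1e-9}
    return project, critical

acts = {"spec": (3, []),       "db":  (5, ["spec"]),
        "ui":   (4, ["spec"]), "api": (6, ["db"]),
        "test": (2, ["ui", "api"])}
duration, crit = critical_path(acts)
print(duration, sorted(crit))
```

    The zero-slack set is the critical path; in the design-methodology view, these are the functional units whose delay delays the whole system, so they get the optimization attention.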

  19. Methodological Approaches to Locating Outlets of the Franchise Retail Network

    OpenAIRE

    Grygorenko Tetyana M.

    2016-01-01

    Methodical approaches to selecting strategic areas of managing the future location of franchise retail network outlets are presented. The main stages in the assessment of strategic areas of managing the future location of franchise retail network outlets have been determined and the evaluation criteria have been suggested. Since such selection requires consideration of a variety of indicators and directions of the assessment, the author proposes a scale of evaluation, which ...

  20. Optimisation of warpage on thin shell plastic part using response surface methodology (RSM) and glowworm swarm optimisation (GSO)

    Science.gov (United States)

    Asyirah, B. N.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

    Plastic injection moulding is widely used in manufacturing a variety of parts. The injection moulding process parameters play an important role affecting the product's quality and productivity. Many approaches to minimising warpage and shrinkage, such as artificial neural networks, genetic algorithms, glowworm swarm optimisation and hybrid approaches, are addressed. In this paper, a systematic methodology for determining warpage and shrinkage in the injection moulding process, especially for thin shell plastic parts, is presented. To identify the effects of the machining parameters on the warpage and shrinkage values, response surface methodology is applied. In this study, an electronic night lamp part is chosen as the model. Firstly, an experimental design was used to determine the effect of the injection parameters on warpage for different thickness values. The software used to analyse the warpage is Autodesk Moldflow Insight (AMI) 2012.
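
    The response-surface step can be sketched as a least-squares fit of a second-order model followed by locating its stationary point. The design points and warpage values below are hypothetical, stand-ins for a central composite or factorial experiment:

```python
import numpy as np

def quadratic_design(x1, x2):
    """Second-order RSM model terms: 1, x1, x2, x1^2, x2^2, x1*x2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Hypothetical warpage (mm) over melt temperature and packing pressure,
# both in coded units -1..1 (a 3x3 factorial design)
x1 = np.array([-1, -1, 1, 1, 0, 0, 0, -1, 1], dtype=float)   # melt temp
x2 = np.array([-1, 1, -1, 1, 0, -1, 1, 0, 0], dtype=float)   # pack pressure
warpage = np.array([0.82, 0.61, 0.74, 0.58, 0.45, 0.55, 0.43, 0.60, 0.52])

X = quadratic_design(x1, x2)
beta, *_ = np.linalg.lstsq(X, warpage, rcond=None)

# Stationary point of the fitted surface: solve grad f = 0
A = np.array([[2 * beta[3], beta[5]], [beta[5], 2 * beta[4]]])
b = -np.array([beta[1], beta[2]])
x_star = np.linalg.solve(A, b)
print("fitted coefficients:", np.round(beta, 3))
print("stationary point (coded units):", np.round(x_star, 3))
```

    Metaheuristics such as glowworm swarm optimisation come in when the fitted surface (or the simulation itself) must be searched under constraints instead of solved analytically.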