WorldWideScience

Sample records for network methodology applied

  1. [Methodological novelties applied to the anthropology of food: agent-based models and social networks analysis].

    Science.gov (United States)

    Díaz Córdova, Diego

    2016-01-01

The aim of this article is to introduce two methodological strategies that have not often been utilized in the anthropology of food: agent-based models and social networks analysis. In order to illustrate these methods in action, two cases based on materials typical of the anthropology of food are presented. For the first strategy, fieldwork carried out in Quebrada de Humahuaca (province of Jujuy, Argentina) regarding meal recall was used, and for the second, elements of the concept of "domestic consumption strategies" applied by Aguirre were employed. The underlying idea is that, given that eating is recognized as a "total social fact" and, therefore, as a complex phenomenon, the methodological approach must also be characterized by complexity. The greater the number of methods utilized (with the appropriate rigor), the better able we will be to understand the dynamics of feeding in the social environment.

  2. INTERPERSONAL COMMUNICATION AND METHODOLOGIES OF INNOVATION. A HEURISTIC EXPERIENCE IN THE CLASSROOM APPLYING SEMANTIC NETWORKS

    Directory of Open Access Journals (Sweden)

    José Manuel Corujeira Gómez

    2014-10-01

Full Text Available The current definition of creativity attaches importance to interpersonal communication in innovation strategies, and allows us to question the communication skills of the professionals (innovation partners) in the practice sessions in which they are applied. This text presents preliminary results on the application of some of these tactics with a group of students. We tested structural/procedural descriptions of hypothetical communication effects using indicators proposed by network theory on the topologies provided by the group. Although the results are not conclusive, we hope this paper contributes to the investigation of creativity in innovation sessions.

  3. Methodology applied to develop the DHIE: applied methodology

    CSIR Research Space (South Africa)

    Herselman, Marlien

    2016-12-01

    Full Text Available This section will address the methodology that was applied to develop the South African Digital Health Innovation Ecosystem (DHIE). Each chapter under Section B represents a specific phase in the methodology....

  4. Artificial intelligence methodologies applied to quality control of the positioning services offered by the Red Andaluza de Posicionamiento (RAP network

    Directory of Open Access Journals (Sweden)

    Antonio José Gil

    2012-12-01

Full Text Available On April 26, 2012, Elena Giménez de Ory defended her Ph.D. thesis at the University of Jaén, entitled: “Robust methodologies applied to quality control of the positioning services offered by the Red Andaluza de Posicionamiento (RAP network)”. She defended her dissertation in a publicly open presentation held at the Higher Polytechnic School of the University of Jaén, and was able to answer every question raised by her thesis committee and the audience. The thesis was supervised by her advisor, Prof. Antonio J. Gil Cruz; the rest of her thesis committee comprised Prof. Manuel Sánchez de la Orden, Dr. Antonio Miguel Ruiz Armenteros and Dr. Gracia Rodríguez Caderot. The thesis was read and approved by the committee, receiving the highest rating. All of them were present at the presentation.

  5. Computer Network Operations Methodology

    Science.gov (United States)

    2004-03-01

… means of their computer information systems. Disrupt: this type of attack focuses on disrupting, as “attackers might surreptitiously reprogram enemy …” … by reprogramming the computers that control distribution within the power grid. A disruption attack introduces disorder and inhibits the effective … between commanders. The use of methodologies is widespread and done subconsciously to assist individuals in decision making. The processes that …

  6. Corporate Social Networks Applied in the Classroom

    Directory of Open Access Journals (Sweden)

    Hugo de Juan-Jordán

    2016-10-01

This study also proposes guidelines and best practices, drawn from the experience of using and adopting social networks in class, in order to improve the learning process and innovate in the methodology applied to education.

  7. Toward methodological emancipation in applied health research.

    Science.gov (United States)

    Thorne, Sally

    2011-04-01

    In this article, I trace the historical groundings of what have become methodological conventions in the use of qualitative approaches to answer questions arising from the applied health disciplines and advocate an alternative logic more strategically grounded in the epistemological orientations of the professional health disciplines. I argue for an increasing emphasis on the modification of conventional qualitative approaches to the particular knowledge demands of the applied practice domain, challenging the merits of what may have become unwarranted attachment to theorizing. Reorienting our methodological toolkits toward the questions arising within an evidence-dominated policy agenda, I encourage my applied health disciplinary colleagues to make themselves useful to that larger project by illuminating that which quantitative research renders invisible, problematizing the assumptions on which it generates conclusions, and filling in the gaps in knowledge needed to make decisions on behalf of people and populations.

  8. Neural Networks Methodology and Applications

    CERN Document Server

    Dreyfus, Gérard

    2005-01-01

Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data, in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and seamlessly edited to present a coherent and comprehensive, yet not redundant, practically-oriented...

  9. Team building: conceptual, methodological, and applied considerations.

    Science.gov (United States)

    Beauchamp, Mark R; McEwan, Desmond; Waldhauser, Katrina J

    2017-08-01

    Team building has been identified as an important method of improving the psychological climate in which teams operate, as well as overall team functioning. Within the context of sports, team building interventions have consistently been found to result in improvements in team effectiveness. In this paper we review the extant literature on team building in sport, and address a range of conceptual, methodological, and applied considerations that have the potential to advance theory, research, and applied intervention initiatives within the field. This involves expanding the scope of team building strategies that have, to date, primarily focused on developing group cohesion. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Bridging Minds: A Mixed Methodology to Assess Networked Flow.

    Science.gov (United States)

    Galimberti, Carlo; Chirico, Alice; Brivio, Eleonora; Mazzoni, Elvis; Riva, Giuseppe; Milani, Luca; Gaggioli, Andrea

    2015-01-01

The main goal of this contribution is to present a methodological framework to study Networked Flow, a bio-psycho-social theory of collective creativity, applying it to creative processes occurring via a computer network. First, we draw on the definition of Networked Flow to identify the key methodological requirements of this model. Next, we present the rationale of a mixed methodology, which aims at combining qualitative, quantitative and structural analysis of group dynamics to obtain a rich longitudinal dataset. We argue that this integrated strategy holds potential for describing the complex dynamics of creative collaboration, by linking the experiential features of collaborative experience (flow, social presence) with the structural features of collaboration dynamics (network indexes) and the collaboration outcome (the creative product). Finally, we report on our experience with using this methodology in blended collaboration settings (including both face-to-face and virtual meetings), to identify open issues and provide future research directions.

  11. From experience : applying the risk diagnosing methodology

    NARCIS (Netherlands)

    Keizer, J.A.; Halman, J.I.M.; Song, X.M.

    2002-01-01

    No risk, no reward. Companies must take risks to launch new products speedily and successfully. The ability to diagnose and manage risks is increasingly considered of vital importance in high-risk innovation. This article presents the Risk Diagnosing Methodology (RDM), which aims to identify and

  12. From experience: applying the risk diagnosing methodology

    NARCIS (Netherlands)

    Keizer, Jimme A.; Halman, Johannes I.M.; Song, Michael

    2002-01-01

    No risk, no reward. Companies must take risks to launch new products speedily and successfully. The ability to diagnose and manage risks is increasingly considered of vital importance in high-risk innovation. This article presents the Risk Diagnosing Methodology (RDM), which aims to identify and

  13. Methodological exploratory study applied to occupational epidemiology

    Energy Technology Data Exchange (ETDEWEB)

Carneiro, Janete C.G. Gaburo; Vasques, Monica Heloisa B.; Fontinele, Ricardo S.; Sordi, Gian Maria A. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)]. E-mail: janetegc@ipen.br

    2007-07-01

The utilization of epidemiologic methods and techniques has been the object of practical experimentation and theoretical-methodological reflection in the health planning and programming process. Occupational epidemiology is the study of the causes and prevention of diseases and injuries arising from exposure and risks in the work environment. In this context, there is no intention to exhaust such a complex theme, but rather to deal with basic concepts of occupational epidemiology, presenting the main characteristics of the analysis methods used in epidemiology and investigating the possible determinants of exposure (chemical, physical and biological agents). For this study, the socio-demographic profile of the IPEN-CNEN/SP workforce was used. Knowledge of the composition of this reference population is based on sex, age, educational level, marital status and occupation, with the aim of understanding the relation between health-aggravating factors and these variables. The methodology refers to non-experimental research based on theoretical-methodological practice. The work has an exploratory character, aiming at a later survey of health indicators in order to analyze possible correlations related to epidemiologic issues. (author)

  14. Methodological exploratory study applied to occupational epidemiology

    International Nuclear Information System (INIS)

Carneiro, Janete C.G. Gaburo; Vasques, Monica Heloisa B.; Fontinele, Ricardo S.; Sordi, Gian Maria A.

    2007-01-01

The utilization of epidemiologic methods and techniques has been the object of practical experimentation and theoretical-methodological reflection in the health planning and programming process. Occupational epidemiology is the study of the causes and prevention of diseases and injuries arising from exposure and risks in the work environment. In this context, there is no intention to exhaust such a complex theme, but rather to deal with basic concepts of occupational epidemiology, presenting the main characteristics of the analysis methods used in epidemiology and investigating the possible determinants of exposure (chemical, physical and biological agents). For this study, the socio-demographic profile of the IPEN-CNEN/SP workforce was used. Knowledge of the composition of this reference population is based on sex, age, educational level, marital status and occupation, with the aim of understanding the relation between health-aggravating factors and these variables. The methodology refers to non-experimental research based on theoretical-methodological practice. The work has an exploratory character, aiming at a later survey of health indicators in order to analyze possible correlations related to epidemiologic issues. (author)

  15. Actor/Actant-Network Theory as Emerging Methodology for ...

    African Journals Online (AJOL)

    This paper deliberates on actor/actant-network theory (AANT) as methodology for policy research in environmental education (EE). Insights are drawn from work that applied AANT to research environmental policy processes surrounding the formulation and implementation of South Africa's Plastic Bags Regulations of 2003.

  16. Applied data communications and networks

    CERN Document Server

    Buchanan, W

    1996-01-01

The usage of data communications and computer networks is ever increasing. It is one of the few technological areas which brings benefits to most of the countries and peoples of the world. Without it many industries could not exist. It is the objective of this book to discuss data communications in a readable form that students and professionals all over the world can understand. As much as possible the text uses diagrams to illustrate key points. Most currently available data communications books take their viewpoint either from a computer scientist's top-down approach or from an electronic engineer's bottom-up approach. This book takes a practical approach and supports it with a theoretical background to create a textbook which can be used by electronic engineers, computer engineers, computer scientists and industry professionals. It discusses most of the current and future key data communications technologies, including: • Data Communications Standards and Models; • Local Area Networks (...

  17. Social network analysis applied to team sports analysis

    CERN Document Server

    Clemente, Filipe Manuel; Mendes, Rui Sousa

    2016-01-01

Explaining how graph theory and social network analysis can be applied to team sports analysis, this book presents useful approaches, models and methods that can be used to characterise the overall properties of team networks and identify the prominence of each team player. Exploring the different network metrics that can be utilised in sports analysis, their possible applications and their variation from situation to situation, the respective chapters present an array of illustrative case studies. Identifying the general concepts of social network analysis and network centrality metrics, the book shows readers how to generate a methodological protocol for data collection. As such, it provides a valuable resource for students of the sport sciences, sports engineering, applied computation and the social sciences.
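The prominence of a player in a team network, as described above, can be sketched with a minimal degree-centrality example. The pass list and player labels below are invented for illustration and are not taken from the book:

```python
from collections import defaultdict

# Hypothetical passing network: each edge is a (passer, receiver) pair.
passes = [
    ("GK", "DF1"), ("DF1", "MF1"), ("MF1", "FW1"),
    ("MF1", "FW2"), ("DF1", "MF2"), ("MF2", "MF1"),
]

# Degree centrality here counts passes made plus passes received.
degree = defaultdict(int)
for passer, receiver in passes:
    degree[passer] += 1    # out-degree: passes made
    degree[receiver] += 1  # in-degree: passes received

# The player with the highest degree is the most prominent node.
most_central = max(degree, key=degree.get)
print(most_central, degree[most_central])  # MF1 4
```

Richer metrics (betweenness, closeness, eigenvector centrality) follow the same pattern of mapping match events onto a graph before computing a per-node score.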

  18. Abductive networks applied to electronic combat

    Science.gov (United States)

    Montgomery, Gerard J.; Hess, Paul; Hwang, Jong S.

    1990-08-01

A practical approach to dealing with combinatorial decision problems and uncertainties associated with electronic combat through the use of networks of high-level functional elements called abductive networks is presented. It describes the application of the Abductory Induction Mechanism (AIM™), a supervised inductive learning tool for synthesizing polynomial abductive networks, to the electronic combat problem domain. From databases of historical, expert-generated, or simulated combat engagements, AIM can often induce compact and robust network models for making effective real-time electronic combat decisions despite significant uncertainties or a combinatorial explosion of possible situations. The feasibility of applying abductive networks to realize advanced combat decision aiding capabilities was demonstrated by applying AIM to a set of electronic combat simulations. The networks synthesized by AIM generated accurate assessments of the intent, lethality, and overall risk associated with a variety of simulated threats, and produced reasonable estimates of the expected effectiveness of a group of electronic countermeasures for a large number of simulated combat scenarios. This paper presents the application of abductive networks to electronic combat, summarizes the results of experiments performed using AIM, discusses the benefits and limitations of applying abductive networks to electronic combat, and indicates why abductive networks can often result in capabilities not attainable using alternative approaches. 1. ELECTRONIC COMBAT, UNCERTAINTY, AND MACHINE LEARNING. Electronic combat has become an essential part of the ability to make war and has become increasingly complex since

  19. A methodology for extracting knowledge rules from artificial neural networks applied to forecast demand for electric power; Uma metodologia para extracao de regras de conhecimento a partir de redes neurais artificiais aplicadas para previsao de demanda por energia eletrica

    Energy Technology Data Exchange (ETDEWEB)

    Steinmetz, Tarcisio; Souza, Glauber; Ferreira, Sandro; Santos, Jose V. Canto dos; Valiati, Joao [Universidade do Vale do Rio dos Sinos (PIPCA/UNISINOS), Sao Leopoldo, RS (Brazil). Programa de Pos-Graduacao em Computacao Aplicada], Emails: trsteinmetz@unisinos.br, gsouza@unisinos.br, sferreira, jvcanto@unisinos.br, jfvaliati@unisinos.br

    2009-07-01

We present a methodology for the extraction of rules from Artificial Neural Networks (ANN) trained to forecast the electric load demand. The rules have the ability to express the knowledge regarding the behavior of load demand acquired by the ANN during the training process. The rules are presented to the user in an easy-to-read format, such as IF premise THEN consequence, where the premise relates to the input data submitted to the ANN (mapped as fuzzy sets), and the consequence appears as a linear equation describing the output to be presented by the ANN, should the premise hold true. Experimentation demonstrates the method's capacity for acquiring and presenting high-quality rules from neural networks trained to forecast electric load demand over several time horizons. (author)
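A rule of the form described in the abstract (fuzzy-set premise, linear consequence) can be sketched as follows. The membership function shape, coefficients, variable names and thresholds below are invented for illustration and are not taken from the paper:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function over [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def apply_rule(load_now, hour):
    """IF load is 'high' (fuzzy premise) THEN forecast = linear equation."""
    # Premise: membership of the current load in a hypothetical 'high' set.
    membership = triangular(load_now, 60.0, 80.0, 100.0)
    if membership > 0:
        # Consequence: a linear equation over the rule's input variables.
        forecast = 0.9 * load_now + 1.5 * hour + 5.0
        return membership, forecast
    return 0.0, None  # premise does not hold; rule does not fire

m, f = apply_rule(75.0, 18)
print(round(m, 2), round(f, 1))  # 0.75 99.5
```

In a full system, several such rules would fire in parallel and their linear consequences would be blended, weighted by their premise memberships.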

  20. The Methodology of Investigation of Intercultural Rhetoric applied to SFL

    Directory of Open Access Journals (Sweden)

    David Heredero Zorzo

    2016-12-01

Full Text Available Intercultural rhetoric is a discipline which studies written discourse among individuals from different cultures. It is a very strong field in the Anglo-Saxon scientific world, especially with reference to English as a second language, but in Spanish as a foreign language (SFL) it is not as prominent. Intercultural rhetoric has provided applied linguistics with important methods of investigation, and applying these to SFL could introduce interesting new perspectives on the subject. In this paper, we present the methodology of investigation of intercultural rhetoric, which is based on the use of different types of corpora for analysing genres and follows the precepts of tertium comparationis. In addition, it uses techniques of ethnographic investigation. The purpose of this paper is to show the applications of this methodology to SFL and to outline future investigations in the same field.

  1. Digital processing methodology applied to exploring of radiological images

    International Nuclear Information System (INIS)

    Oliveira, Cristiane de Queiroz

    2004-01-01

In this work, digital image processing is applied as an automatic computational method aimed at exploring radiological images. An automatic routine was developed, based on segmentation and post-processing techniques, for radiological images acquired from an arrangement consisting of an X-ray tube, a molybdenum target and filter (of 0.4 mm and 0.03 mm, respectively) and a CCD detector. The efficiency of the methodology developed is shown through a case study in which internal injuries in mangoes are automatically detected and monitored. This methodology is a possible tool to be introduced in the post-harvest process in packing houses. A dichotomic test was applied to evaluate the efficiency of the method. The results show 87.7% correct diagnoses and 12.3% incorrect diagnoses, with a sensitivity of 93% and a specificity of 80%. (author)

  2. Analytical Chemistry as Methodology in Modern Pure and Applied Chemistry

    OpenAIRE

    Honjo, Takaharu

    2001-01-01

Analytical chemistry is an indispensable methodology in pure and applied chemistry, which is often compared to a foundation stone of architecture. On the home page of jsac, it is said that analytical chemistry is a basic science concerned with developing methods to obtain useful chemical information about materials by means of detection, separation, and characterization. Analytical chemistry has recently developed into the analytical sciences, which treat not only analysis ...

  3. Benford's Law Applies to Online Social Networks.

    Science.gov (United States)

    Golbeck, Jennifer

    2015-01-01

Benford's Law states that, in naturally occurring systems, the frequency of numbers' first digits is not evenly distributed. Numbers beginning with a 1 occur roughly 30% of the time, and are six times more common than numbers beginning with a 9. We show that Benford's Law applies to social and behavioral features of users in online social networks. Using social data from five major social networks (Facebook, Twitter, Google Plus, Pinterest, and LiveJournal), we show that the distributions of first significant digits of friend and follower counts for users in these systems follow Benford's Law. The same is true for the number of posts users make. We extend this to egocentric networks, showing that friend counts among the people in an individual's social network also follow the expected distribution. We discuss how this can be used to detect suspicious or fraudulent activity online and to validate datasets.
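The "roughly 30%" figure quoted above comes from the Benford formula P(d) = log10(1 + 1/d). A minimal sketch comparing the expected frequencies against a Benford-conforming sequence follows; Fibonacci numbers are used here as a convenient stand-in, since the social-network counts studied in the paper are not available:

```python
import math

def benford_expected(d):
    """Expected frequency of leading digit d (1-9) under Benford's Law."""
    return math.log10(1 + 1 / d)

def leading_digit(n):
    """First significant digit of a positive integer."""
    while n >= 10:
        n //= 10
    return n

def digit_distribution(values):
    """Observed frequency of each leading digit 1-9 in a dataset."""
    counts = {d: 0 for d in range(1, 10)}
    for v in values:
        counts[leading_digit(v)] += 1
    return {d: counts[d] / len(values) for d in counts}

# Fibonacci numbers are a classic Benford-conforming sequence.
fib = [1, 1]
while len(fib) < 500:
    fib.append(fib[-1] + fib[-2])

observed = digit_distribution(fib)
print(round(benford_expected(1), 3))  # 0.301: digit 1 appears ~30% of the time
print(round(observed[1], 3))          # close to the expected 0.301
```

Fraud detection along the lines the paper describes works by flagging datasets whose observed distribution deviates significantly from these expected frequencies (e.g. via a chi-squared test).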

  4. Benford's Law Applies to Online Social Networks.

    Directory of Open Access Journals (Sweden)

    Jennifer Golbeck

Full Text Available Benford's Law states that, in naturally occurring systems, the frequency of numbers' first digits is not evenly distributed. Numbers beginning with a 1 occur roughly 30% of the time, and are six times more common than numbers beginning with a 9. We show that Benford's Law applies to social and behavioral features of users in online social networks. Using social data from five major social networks (Facebook, Twitter, Google Plus, Pinterest, and LiveJournal), we show that the distributions of first significant digits of friend and follower counts for users in these systems follow Benford's Law. The same is true for the number of posts users make. We extend this to egocentric networks, showing that friend counts among the people in an individual's social network also follow the expected distribution. We discuss how this can be used to detect suspicious or fraudulent activity online and to validate datasets.

  5. Applying Physical-Layer Network Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Liew SoungChang

    2010-01-01

Full Text Available A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes, simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). This paper shows that the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to "straightforward" network coding, which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for an equivalent coding operation. In doing so, PNC can potentially achieve 100% and 50% throughput increases compared with traditional transmission and straightforward network coding, respectively, in 1D regular linear networks with multiple random flows. The throughput improvements are even larger in 2D regular networks: 200% and 100%, respectively.
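The contrast the abstract draws between "straightforward" network coding and PNC can be illustrated with a toy two-way relay exchange. The sketch below shows only the digital (straightforward) case: the relay XORs the two received packets and broadcasts the result in a third time slot, instead of forwarding each packet separately in four slots. PNC performs the analogous combination directly on the superimposed EM waves, saving one further slot. The packet contents are invented:

```python
def xor_bytes(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# End nodes A and B exchange packets through relay R (A <-> R <-> B).
pkt_a = b"hello"
pkt_b = b"world"

# Straightforward network coding: R receives pkt_a and pkt_b in two
# time slots, then broadcasts their XOR in a third slot (3 slots vs. 4
# for plain store-and-forward relaying).
coded = xor_bytes(pkt_a, pkt_b)

# Each end node XORs the broadcast with its own packet to recover the other's.
recovered_at_a = xor_bytes(coded, pkt_a)
recovered_at_b = xor_bytes(coded, pkt_b)
print(recovered_at_a, recovered_at_b)  # b'world' b'hello'
```

In PNC, slots one and two collapse into one: both end nodes transmit simultaneously and the relay maps the naturally added waveforms to the coded packet, giving the 2-slot exchange behind the throughput gains quoted above.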

  6. Networks as integrated in research methodologies in PER

    DEFF Research Database (Denmark)

    Bruun, Jesper

    2016-01-01

In recent years a number of researchers within the PER community have started using network analysis as a new methodology to extend our understanding of teaching and learning physics by viewing these as complex systems. In this paper, I give examples of social, cognitive, and action mapping networks and how they can be analyzed. In so doing I show how a network can be methodologically described as a set of relations between a set of entities, and how a network can be characterized and analyzed as a mathematical object. Then, as an illustrative example, I discuss a relatively new example of using networks to create insightful maps of learning discussions. To conclude, I argue that conceptual blending is a powerful framework for constructing "mixed methods" methodologies that may integrate diverse theories and other methodologies with network methodologies.

  7. 3-D SURVEY APPLIED TO INDUSTRIAL ARCHAEOLOGY BY TLS METHODOLOGY

    Directory of Open Access Journals (Sweden)

    M. Monego

    2017-05-01

Full Text Available This work describes the three-dimensional survey of the “Ex Stazione Frigorifera Specializzata”: initially used for agricultural storage, over the years the building was put to different uses until it fell into complete neglect. The historical relevance and the architectural heritage that this building represents have prompted a recent renovation and functional restoration project. In this regard, a global 3-D survey was necessary, based on the application and integration of different geomatic methodologies (mainly terrestrial laser scanning, classical topography, and GNSS). The acquisition of point clouds was performed using different laser scanners, with time-of-flight (TOF) and phase-shift technologies for the distance measurements. The topographic reference network, needed to align the scans in the same system, was measured with a total station. For the complete survey of the building, 122 scans were acquired and 346 targets were measured from 79 vertices of the reference network. Moreover, 3 vertices were measured with GNSS methodology in order to georeference the network. For the detail survey of the machine room, 14 scans were executed with 23 targets. The global 3-D model of the building has less than one centimeter of alignment error (for the machine room the alignment error is no greater than 6 mm) and was used to extract products such as longitudinal and transversal sections, plans, architectural perspectives and virtual scans. Complete spatial knowledge of the building is obtained from the processed data, providing basic information for the restoration project, structural analysis, and the valorization of industrial and architectural heritage.

  8. GMDH and neural networks applied in temperature sensors monitoring

    International Nuclear Information System (INIS)

    Bueno, Elaine Inacio; Pereira, Iraci Martinez; Silva, Antonio Teixeira e

    2009-01-01

In this work a monitoring system was developed based on the Group Method of Data Handling (GMDH) and Artificial Neural Network (ANN) methodologies. The methodology was applied to the IEA-R1 research reactor at IPEN, using a database obtained from a theoretical model of the reactor. The IEA-R1 research reactor is a pool-type reactor of 5 MW, cooled and moderated by light water, which uses graphite and beryllium as reflectors. The theoretical model was developed using the Matlab GUIDE toolbox. The equations are based on the IEA-R1 mass and energy inventory balance, and physical as well as operational aspects are taken into consideration. The monitoring methodology uses the outputs of the GMDH algorithm as input variables to the ANNs. The results obtained using GMDH and ANNs together were better than those obtained using only ANNs. (author)

  9. Tools and methodologies applied to eLearning

    OpenAIRE

    Seoane Pardo, Antonio M.; García-Peñalvo, Francisco José

    2006-01-01

The aim of this paper is to show how eLearning technologies and methodologies can be useful for teaching and researching Logic. Firstly, a definition and explanation of eLearning and its main modalities will be given. Then, the most important elements and tools of eLearning activities will be shown. Finally, we will give three suggestions to improve the learning experience with eLearning applied to Logic. Various eLearning technologies and methodologies useful in teaching and ...

  10. Methodological aspects of network assets accounting

    Directory of Open Access Journals (Sweden)

    Yuhimenko-Nazaruk I.A.

    2017-08-01

Full Text Available The necessity of using innovative tools for processing and representing information about network assets is substantiated, and suggestions for displaying network assets in accounts are presented. The main reasons for the need to display network assets in the financial statements of all members of the network structure are identified: the economic essence of network assets as an object of accounting; the non-additive model for the formation of the value of network assets; and the internetwork mechanism for the formation of the value of network assets. The stages of accounting valuation of network assets are identified and substantiated. An analytical table for estimating the value of network assets and additional network capital in accounting is developed, together with the order of reflecting additional network capital in accounting. The method of revaluation of network assets in accounting, in the broad sense, is revealed, and the order of accounting for network assets when the number of participants in the network structure increases or decreases is determined.

  11. Applying of component system development in object methodology, case study

    Directory of Open Access Journals (Sweden)

    Milan Mišovič

    2013-01-01

Full Text Available To create computerization target software as a component system has been a very strong requirement over the last 20 years of software development. Architectural components are self-contained units that present not only partial and overall system behavior, but also cooperate with each other on the basis of their interfaces. Among other things, components have allowed flexible modification of the processes whose behavior is the foundation of component behavior, without changing the life of the component system. On the other hand, the component system makes it possible, at design time, to create numerous new connections between components and thus produce modified system behaviors. All this enables company management to perform, at design time, the required behavioral changes of processes in accordance with the requirements of changing production and markets. The development of software, generally referred to as the Software Development Process (SDP), contains two directions. The first, called Component-Based Development (CBD), is dedicated to the development of component-based systems (CBS); the second targets the development of software under the influence of Service-Oriented Architecture (SOA). Both directions are equipped with their own development methodologies. The subject of this paper is only the first direction and the application of component-based system development in its object-oriented methodologies. The requirement of today is to carry out the development of component-based systems within established object-oriented methodologies as a dominant style. In some of the known methodologies, however, this development is not completely transparent and is not even recognized as dominant. In some cases, it is corrected by special meta-integration models of component system development into an object methodology. This paper presents a case study

  12. Applying Model Based Systems Engineering to NASA's Space Communications Networks

    Science.gov (United States)

    Bhasin, Kul; Barnes, Patrick; Reinert, Jessica; Golden, Bert

    2013-01-01

    System engineering practices for complex systems and networks now require that requirements, architecture, and concept-of-operations product development teams simultaneously harmonize their activities to provide timely, useful and cost-effective products. When dealing with complex systems of systems, traditional systems engineering methodology quickly falls short of achieving project objectives. This approach is encumbered by the use of a number of disparate hardware and software tools, spreadsheets and documents to grasp the concept of the network design and operation. In the case of NASA's space communication networks, the networks are geographically distributed, and so are their subject matter experts, so the team is challenged to create a common language and tools to produce its products. Using Model Based Systems Engineering methods and tools allows for a unified representation of the system in a model that enables a highly correlated level of detail. To date, the Program System Engineering (PSE) team has been able to model each network from its top-level operational activities and system functions down to the atomic level through relational modeling decomposition. These models allow for a better understanding of the relationships between NASA's stakeholders, internal organizations, and the impacts to all related entities due to integration and sustainment of existing systems. Understanding the existing systems is essential to an accurate and detailed study of the integration options being considered. In this paper, we identify the challenges the PSE team faced in its quest to unify complex legacy space communications networks and their operational processes. We describe the initial approaches undertaken and the evolution toward model based systems engineering applied to produce Space Communication and Navigation (SCaN) PSE products. We demonstrate the practice of Model Based System Engineering applied to integrating space communication networks and the summary of its

  13. Conceptual and methodological biases in network models.

    Science.gov (United States)

    Lamm, Ehud

    2009-10-01

    Many natural and biological phenomena can be depicted as networks. Theoretical and empirical analyses of networks have become prevalent. I discuss theoretical biases involved in the delineation of biological networks. The network perspective is shown to dissolve the distinction between regulatory architecture and regulatory state, consistent with the theoretical impossibility of distinguishing a priori between "program" and "data." The evolutionary significance of the dynamics of trans-generational and interorganism regulatory networks is explored and implications are presented for understanding the evolution of the biological categories development-heredity, plasticity-evolvability, and epigenetic-genetic.

  14. Energy retrofit of commercial buildings. Case study and applied methodology

    Energy Technology Data Exchange (ETDEWEB)

    Aste, N.; Del Pero, C. [Department of Building Environment Science and Technology (BEST), Politecnico di Milano, Via Bonardi 3, 20133 Milan (Italy)

    2013-05-15

    Commercial buildings are responsible for a significant share of the energy requirements of European Union countries. Related consumption due to heating, cooling, and lighting appears, in most cases, very high and expensive. Since the building stock is renewed by only a very small percentage each year, and current trends favor reusing old structures, strategies for improving energy efficiency and sustainability should focus not only on new buildings, but also and especially on existing ones. Architectural renovation of existing buildings provides an opportunity to enhance their energy efficiency by improving envelopes and energy supply systems. It should also be noted that measures aimed at improving the energy performance of buildings must pay particular attention to the cost-effectiveness of the interventions. In general, there is a lack of well-established methods for retrofitting, but if a case study achieves effective results, the adopted strategies and methodologies can be successfully replicated for similar kinds of buildings. In this paper, an iterative methodology for the energy retrofit of commercial buildings is presented, together with a specific application to an existing office building. The case study is particularly significant as it is placed in an urban climatic context characterized by cold winters and hot summers; consequently, HVAC energy consumption is considerable throughout the year. The analysis and simulation of energy performance before and after the intervention, along with measured data on real energy performance, demonstrate the validity of the applied approach. The design and refurbishment methodology developed in this work could also serve as a reference for similar operations.

  15. Methodology for uranium compounds characterization applied to biomedical monitoring

    International Nuclear Information System (INIS)

    Ansoborlo, E.; Chalabreysse, J.; Henge-Napoli, M.H.; Pujol, E.

    1991-01-01

    Chronic exposure and accidental contamination to uranium compounds in the nuclear industry led the authors to develop a methodology to characterize those compounds for biomedical monitoring. This methodology, based on the recommendations of the ICRP and the assessment of Annual Limit on Intake (ALI) values, involves two main steps: (1) Characterization of the industrial compound, i.e. its physico-chemical properties such as density (g cm-3), specific area (m2 g-1), x-ray spectrum (crystalline form), solid infrared spectrum (wavelengths and bonds), mass spectrometry (isotopic composition), and particle size distribution, including measurement of the Activity Median Aerodynamic Diameter (AMAD). The aging and hydration state of some compounds are studied in particular. (2) The study of in vitro solubility in several biochemical media such as bicarbonates, Basal Medium Eagle (BME) used in cell culture, Gamble solvent, which is a serum simulant, with oxygen bubbling, and Gamble solvent with superoxide anions O2- added. These different media make it possible to understand the dissolution mechanisms (oxidation, chelating effects...) and to assign ICRP classification D, W, or Y. These two steps are essential to establish biomedical monitoring for either routine or accidental exposure, and to calculate the ALI. Results on UO3, UF4 and UO2 in the French uranium industry are given

  16. Framework for applying RI-ISI methodology for Indian PHWRs

    International Nuclear Information System (INIS)

    Vinod, Gopika; Saraf, R.K.; Ghosh, A.K.; Kushwaha, H.S.

    2006-01-01

    Risk Informed In-Service Inspection (RI-ISI) aims at categorizing components for in-service inspection based on their contribution to risk. To define a component's contribution to risk, its failure probability and the subsequent effect on Core Damage Frequency (CDF) need to be evaluated using Probabilistic Safety Assessment methodology. During the last several years, both the U.S. Nuclear Regulatory Commission (NRC) and the nuclear industry have recognized that Probabilistic Safety Assessment (PSA) has evolved to be more useful in supplementing traditional engineering approaches in reactor regulation. The paper highlights the various stages involved in applying RI-ISI and then compares the findings with existing ISI practices. (author)

  17. METHODOLOGY FOR GENERATION OF CORPORATE NETWORK HOSTNAME

    OpenAIRE

    Garrigós, Allan Mac Quinn; Sassi, Renato José

    2011-01-01

    A corporate network generally consists of two or more interconnected computers sharing information. For this sharing to function correctly, the naming of the computers within the network is extremely important: proper organization of names in Active Directory (AD, the domain controller) removes improperly created duplicate names and prevents the blocking of communications between machines that share the same name on the network. The aim of this study was to de...

  18. Urban Agglomerations in Regional Development: Theoretical, Methodological and Applied Aspects

    Directory of Open Access Journals (Sweden)

    Andrey Vladimirovich Shmidt

    2016-09-01

    Full Text Available The article focuses on the analysis of a major process of modern socio-economic development: the functioning of urban agglomerations. A short background from the economic literature on this phenomenon is given, covering both traditional conceptions (the concentration of urban types of activities, the grouping of urban settlements by intensive production and labour communications) and modern ones (cluster theories, theories of the network society). Two methodological principles of studying agglomeration are emphasized: the principle of the unity of the spatial concentration of economic activity, and the principle of compact living of the population. The positive and negative effects of agglomeration in the economic and social spheres are studied. It is concluded that agglomeration is helpful when it brings agglomeration economies (the positive benefits from it exceed the additional costs). A methodology for examining an urban agglomeration and its role in regional development is offered. The approbation of this methodology on the example of Chelyabinsk and the Chelyabinsk region has allowed the authors to carry out a comparative analysis of the regional centre and the whole region by the main socio-economic indexes under static and dynamic conditions, and to draw conclusions on the position of the city and the region based on such socio-economic indexes as the average monthly nominal accrued wage, the cost of fixed assets, investments into fixed capital, new housing supply, retail turnover, and the volume of self-produced shipped goods, works and services performed in the region. In the study, the analysis of a launching site of the Chelyabinsk agglomeration is carried out. It has revealed the following main characteristics of the core of the agglomeration in Chelyabinsk (structure feature, population, level of centralization of the core as well as the Chelyabinsk agglomeration in general (coefficient of agglomeration

  19. Hyperspectral and thermal methodologies applied to landslide monitoring

    Science.gov (United States)

    Vellico, Michela; Sterzai, Paolo; Pietrapertosa, Carla; Mora, Paolo; Berti, Matteo; Corsini, Alessandro; Ronchetti, Francesco; Giannini, Luciano; Vaselli, Orlando

    2010-05-01

    Landslide monitoring is a highly topical subject. Landslides are a widespread phenomenon over the European territory and have been responsible for huge economic losses. The aim of the WISELAND research project (Integrated Airborne and Wireless Sensor Network systems for Landslide Monitoring), funded by the Italian Government, is to test new monitoring techniques capable of rapidly and successfully characterizing large landslides in fine soils. Two active earthflows in the Northern Italian Apennines have been chosen as test sites and investigated: Silla (Bologna Province) and Valoria (Modena Province). The project involves the use of remote sensing methodologies, with particular focus on the joint use of airborne Lidar, hyperspectral and thermal systems. These innovative techniques give promising results, since they allow detection of the principal landslide components and evaluation of the spatial distribution of parameters relevant to landslide dynamics such as surface water content and roughness. In this paper we focus on the response of the terrain obtained with a hyperspectral system and its integration with the complementary information obtained using a thermal sensor. The potential of a hyperspectral dataset acquired in the VNIR (Visible and Near-Infrared) range is high, since the spectral response of the terrain gives important information both on the soil and on the vegetation status. Several significant indexes can be calculated, such as the NDVI, obtained from a band in the red region and a band in the near-infrared region; it gives information on vegetation health and, indirectly, on the water content of soils. This is a key point that bridges hyperspectral and thermal datasets. Thermal infrared data are closely related to soil moisture, one of the most important parameters affecting surface stability in soil slopes. Effective stresses and shear strength in unsaturated soils are directly related to water content, and
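
The NDVI mentioned in this abstract is a simple normalized band ratio. As a minimal per-pixel sketch (the reflectance values below are purely illustrative, not from the WISELAND dataset):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel.

    nir: near-infrared reflectance, red: red-band reflectance.
    Returns a value in [-1, 1]; dense healthy vegetation is typically > 0.3.
    """
    if nir + red == 0:
        return 0.0  # avoid division by zero over no-data pixels
    return (nir - red) / (nir + red)

# Hypothetical reflectance values for two pixels.
print(round(ndvi(0.45, 0.05), 2))  # dense vegetation -> 0.8
print(round(ndvi(0.20, 0.15), 2))  # sparse/stressed vegetation -> 0.14
```

Values close to +1 indicate dense, healthy vegetation; values near zero or below indicate bare soil, rock or water.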

  20. A Generic Methodology for Superstructure Optimization of Different Processing Networks

    DEFF Research Database (Denmark)

    Bertran, Maria-Ona; Frauzem, Rebecca; Zhang, Lei

    2016-01-01

    In this paper, we propose a generic computer-aided methodology for synthesis of different processing networks using superstructure optimization. The methodology can handle different network optimization problems of various application fields. It integrates databases with a common data architecture..., a generic model to represent the processing steps, and appropriate optimization tools. A special software interface has been created to automate the steps in the methodology workflow, allow the transfer of data between tools and obtain the mathematical representation of the problem as required...

  1. Applying of component system development in object methodology

    Directory of Open Access Journals (Sweden)

    Milan Mišovič

    2013-01-01

    -oriented methodology (Arlo, Neust, 2007), (Kan, Müller, 2005), (Krutch, 2003) for problem domains with double-layer process logic. An integration method is indicated, based on a certain meta-model (Applying of the Component system Development in object Methodology) and leading to the formation of the component system. The meta-model is divided into partial workflows that are located in different stages of a classic object process-based methodology. The consistency of the input and output artifacts in the working practices of the meta-model and the mentioned object methodology is taken into account. This paper focuses on static component systems as a starting point for exploring dynamic and mobile component systems. In addition, the component system is understood as a specific system; set, graph and system algebra notation is used for its system properties and basic terms.

  2. Applying Statistical Process Quality Control Methodology to Educational Settings.

    Science.gov (United States)

    Blumberg, Carol Joyce

    A subset of Statistical Process Control (SPC) methodology known as control charting is introduced. SPC methodology is a collection of graphical and inferential statistics techniques used to study the progress of phenomena over time. The types of control charts covered are the X̄ (mean), R (range), X (individual observations), MR (moving…

  3. BAT methodology applied to the construction of new CCNN

    International Nuclear Information System (INIS)

    Vilches Rodriguez, E.; Campos Feito, O.; Gonzalez Delgado, J.

    2012-01-01

    The BAT methodology should be used in all phases of the project, from preliminary studies and design to decommissioning, gaining special importance in radioactive waste management and environmental impact studies. Adequate knowledge of this methodology will streamline the decision-making process and facilitate the relationship with regulators and stakeholders.

  4. Advances in Artificial Neural Networks - Methodological Development and Application

    Science.gov (United States)

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...

  5. Applying a Network-Lens to Hospitality Business Research: A New Research Agenda

    Directory of Open Access Journals (Sweden)

    Florian AUBKE

    2014-06-01

    Full Text Available Hospitality businesses are first and foremost places of social interaction. This paper argues for an inclusion of network methodology into the tool kit of hospitality researchers. This methodology focuses on the interaction of people rather than applying an actor-focused view, which currently seems dominant in hospitality research. Outside the field, a solid research basis has been formed, upon which hospitality researchers can build. The paper introduces the foundations of network theory and its applicability to the study of organizations. A brief methodological introduction is provided and potential applications and research topics relevant to the hospitality field are suggested.

  6. Teaching methodology for modeling reference evapotranspiration with artificial neural networks

    OpenAIRE

    Martí, Pau; Pulido Calvo, Inmaculada; Gutiérrez Estrada, Juan Carlos

    2015-01-01

    [EN] Artificial neural networks are a robust alternative to conventional models for estimating different targets in irrigation engineering, among others reference evapotranspiration, a key variable for estimating crop water requirements. This paper presents a didactic methodology for introducing students to the application of artificial neural networks for reference evapotranspiration estimation using MATLAB. Apart from learning a specific application of this software wi...

  7. Applied Ontology Engineering in Cloud Services, Networks and Management Systems

    CERN Document Server

    Serrano Orozco, J Martín

    2012-01-01

    Metadata standards in today’s ICT sector are proliferating at unprecedented levels, while automated information management systems collect and process exponentially increasing quantities of data. With interoperability and knowledge exchange identified as a core challenge in the sector, this book examines the role ontology engineering can play in providing solutions to the problems of information interoperability and linked data. At the same time as introducing basic concepts of ontology engineering, the book discusses methodological approaches to formal representation of data and information models, thus facilitating information interoperability between heterogeneous, complex and distributed communication systems. In doing so, the text advocates the advantages of using ontology engineering in telecommunications systems. In addition, it offers a wealth of guidance and best-practice techniques for instances in which ontology engineering is applied in cloud services, computer networks and management systems. ...

  8. On Research Methodology in Applied Linguistics in 2002-2008

    Science.gov (United States)

    Martynychev, Andrey

    2010-01-01

    This dissertation examined the status of data-based research in applied linguistics through an analysis of published research studies in nine peer-reviewed applied linguistics journals ("Applied Language Learning, The Canadian Modern Language Review / La Revue canadienne des langues vivantes, Current Issues in Language Planning, Dialog on Language…

  9. Strategies and methodologies for applied marine radioactivity studies

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-04-01

    The main objective of this document is to provide basic training in the theoretical background and practical applications of the methodologies for the measurement, monitoring and assessment of radioactivity in the marine environment. This manual is a compilation of lectures and notes presented at previous training courses. The document contains 16 individual papers, each of which was indexed separately.

  10. Strategies and methodologies for applied marine radioactivity studies

    International Nuclear Information System (INIS)

    1997-01-01

    The main objective of this document is to provide basic training in the theoretical background and practical applications of the methodologies for the measurement, monitoring and assessment of radioactivity in the marine environment. This manual is a compilation of lectures and notes presented at previous training courses. The document contains 16 individual papers, each of which was indexed separately

  11. An applied methodology for stakeholder identification in transdisciplinary research

    NARCIS (Netherlands)

    Leventon, Julia; Fleskens, Luuk; Claringbould, Heleen; Schwilch, Gudrun; Hessel, Rudi

    2016-01-01

    In this paper we present a novel methodology for identifying stakeholders for the purpose of engaging with them in transdisciplinary, sustainability research projects. In transdisciplinary research, it is important to identify a range of stakeholders prior to the problem-focussed stages of

  12. Applying living lab methodology to enhance skills in innovation

    CSIR Research Space (South Africa)

    Herselman, M

    2010-07-01

    Full Text Available and which is also in line with the South African medium-term strategic framework and the millennium goals of the Department of Science and Technology. Evidence of how the living lab methodology can enhance innovation skills was made clear during various...

  13. Methodology for Simulation and Analysis of Complex Adaptive Supply Network Structure and Dynamics Using Information Theory

    Directory of Open Access Journals (Sweden)

    Joshua Rodewald

    2016-10-01

    Full Text Available Supply networks existing today in many industries can behave as complex adaptive systems, making them more difficult to analyze and assess. Being able to fully understand both the complex static and dynamic structures of a complex adaptive supply network (CASN) is key to making more informed management decisions and prioritizing resources and production throughout the network. Previous efforts to model and analyze CASNs have been impeded by the complex, dynamic nature of the systems. However, drawing from other complex adaptive systems sciences, information theory provides a model-free methodology that removes many of those barriers, especially concerning complex network structure and dynamics. With minimal information about the network nodes, transfer entropy can be used to reverse engineer the network structure, while local transfer entropy can be used to analyze the structure’s dynamics. Both simulated and real-world networks were analyzed using this methodology. Applying the methodology to CASNs allows the practitioner to capitalize on observations from the highly multidisciplinary field of information theory, which provides insights into a CASN’s self-organization, emergence, stability/instability, and distributed computation. This not only provides managers with a more thorough understanding of a system’s structure and dynamics for management purposes, but also opens up research opportunities into eventual strategies to monitor and manage emergence and adaptation within the environment.
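
The quantity this record relies on can be illustrated with a rough plug-in estimator of transfer entropy for short symbol series with history length 1 (a simplification of the estimators used in practice; the toy series below are invented for illustration):

```python
from collections import Counter
from math import log2

def transfer_entropy(source, target):
    """Transfer entropy (bits) from a source to a target symbol series.

    Estimates T_{source->target} = sum_{t1,t0,s0} p(t1, t0, s0) *
        log2[ p(t1 | t0, s0) / p(t1 | t0) ]
    with history length 1, using plug-in (empirical) probabilities.
    """
    triples = Counter(zip(target[1:], target[:-1], source[:-1]))
    pairs_ts = Counter(zip(target[:-1], source[:-1]))
    pairs_tt = Counter(zip(target[1:], target[:-1]))
    singles = Counter(target[:-1])
    n = len(target) - 1
    te = 0.0
    for (t1, t0, s0), c in triples.items():
        p_joint = c / n
        p_t1_given_t0s0 = c / pairs_ts[(t0, s0)]
        p_t1_given_t0 = pairs_tt[(t1, t0)] / singles[t0]
        te += p_joint * log2(p_t1_given_t0s0 / p_t1_given_t0)
    return te

# Toy example: the target copies the source with a one-step lag, so
# information flows from source to target, not the other way round.
src = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0] * 10
tgt = [0] + src[:-1]
print(transfer_entropy(src, tgt) > transfer_entropy(tgt, src))  # True
```

A larger transfer entropy on one directed link than on the reverse link is what lets the methodology reverse engineer directed network structure from node observations alone.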

  14. Advances in Artificial Neural NetworksMethodological Development and Application

    Directory of Open Access Journals (Sweden)

    Yanbo Huang

    2009-08-01

    Full Text Available Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred the development of training algorithms for other networks such as radial basis function, recurrent, feedback, and unsupervised Kohonen self-organizing networks. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review of the development history of artificial neural networks is presented and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks are introduced along with support vector machines, and limitations of ANNs are identified. The future of artificial neural network development in tandem with support vector machines is discussed in conjunction with further applications to food science and engineering, soil and water relationships for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks are reviewed as well, especially in the fields of agricultural and biological
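
As a toy illustration of the backpropagation-with-gradient-descent training this review surveys, the sketch below trains a single logistic neuron (the simplest special case of a multilayer perceptron) on an invented, linearly separable example; it is not taken from the reviewed work:

```python
import math
import random

def train_neuron(data, epochs=500, lr=0.5):
    """Train a single logistic neuron with stochastic gradient descent.

    data: list of ((x1, x2), label) pairs with labels 0/1. The weight
    update is the one-layer special case of backpropagation: the error
    (prediction - label) is propagated back to each weight.
    """
    random.seed(0)
    w = [random.uniform(-0.5, 0.5) for _ in range(3)]  # w1, w2, bias
    for _ in range(epochs):
        for (x1, x2), y in data:
            z = w[0] * x1 + w[1] * x2 + w[2]
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
            err = p - y                     # dLoss/dz for log-loss
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            w[2] -= lr * err
    return w

# Learn the logical OR function, which is linearly separable.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train_neuron(data)
predict = lambda x1, x2: 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + w[2])))
print([round(predict(*x)) for x, _ in data])  # [0, 1, 1, 1]
```

A multilayer perceptron repeats the same error-propagation step through hidden layers; the improvements the review describes (momentum, adaptive rates, regularization) all modify this basic update.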

  15. Risk management methodology applied at thermal power plant

    International Nuclear Information System (INIS)

    Coppolino, R.

    2007-01-01

    Nowadays, responsibility for the environmental risks connected to the production processes and products of an enterprise represents one of the main aspects that an adequate management approach has to foresee. This paper evaluates the guidelines followed by the Edipower thermoelectric power plant of S. Filippo di Mela (ME). These guidelines were issued in order to manage the chemical risk connected to the use of the various chemicals with which workers come into contact, identifying the risks with the methodology introduced by the AS/NZS 4360:2004 Risk Management Standard

  16. A robust methodology for modal parameters estimation applied to SHM

    Science.gov (United States)

    Cardoso, Rharã; Cury, Alexandre; Barbosa, Flávio

    2017-10-01

    The subject of structural health monitoring has been drawing more and more attention over the last years. Many vibration-based techniques aiming at detecting small structural changes or even damage have been developed or enhanced through successive research efforts. Lately, several studies have focused on the use of raw dynamic data to assess information about structural condition. Despite this trend and much skepticism, many methods still rely on the use of modal parameters as fundamental data for damage detection. Therefore, it is of utmost importance that modal identification procedures are performed with a sufficient level of precision and automation. To fulfill these requirements, this paper presents a novel automated time-domain methodology to identify modal parameters based on a two-step clustering analysis. The first step consists of clustering mode estimates from parametric models of different orders, usually presented in stabilization diagrams. In an automated manner, this first clustering analysis indicates which estimates correspond to physical modes. To circumvent the detection of spurious modes or the loss of physical ones, a second clustering step is then performed, consisting of data mining of the information gathered in the first step. To attest to the robustness and efficiency of the proposed methodology, numerically generated signals as well as experimental data obtained from a simply supported beam tested in the laboratory and from a railway bridge are utilized. The results appear to be more robust and accurate compared to those obtained from methods based on one-step clustering analysis.
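
The first clustering step described above can be caricatured as grouping frequency estimates pooled across model orders by a relative tolerance. The sketch below uses invented frequencies and a naive greedy grouping, not the authors' actual algorithm:

```python
def cluster_modes(estimates, tol=0.05):
    """Group modal-frequency estimates from models of different orders.

    estimates: list of frequencies (Hz) pooled across model orders.
    Frequencies within `tol` (relative) of a cluster mean join that
    cluster; clusters populated by many model orders are taken as
    physical modes, sparse clusters as spurious (numerical) poles.
    """
    clusters = []
    for f in sorted(estimates):
        for c in clusters:
            mean = sum(c) / len(c)
            if abs(f - mean) <= tol * mean:
                c.append(f)
                break
        else:
            clusters.append([f])
    return clusters

# Estimates pooled from several model orders: two physical modes near
# 12 Hz and 35 Hz, plus one spurious pole at 58.7 Hz.
freqs = [11.9, 12.1, 12.0, 35.2, 34.9, 35.1, 58.7, 12.05, 35.0]
physical = [c for c in cluster_modes(freqs) if len(c) >= 3]
print(len(physical))  # 2
```

In a stabilization diagram this corresponds to keeping only the "stable columns" of poles; the paper's second step then mines these clusters to reject remaining spurious modes.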

  17. Applying a Network-Lens to Hospitality Business Research: A New Research Agenda

    OpenAIRE

    AUBKE, Florian

    2014-01-01

    Hospitality businesses are first and foremost places of social interaction. This paper argues for an inclusion of network methodology into the tool kit of hospitality researchers. This methodology focuses on the interaction of people rather than applying an actor-focused view, which currently seems dominant in hospitality research. Outside the field, a solid research basis has been formed, upon which hospitality researchers can build. The paper introduces the foundations ...

  18. Methodology applied in Cuba for siting, designing, and building a radioactive waste repository under safety conditions

    International Nuclear Information System (INIS)

    Orbera, L.; Peralta, J.L.; Franklin, R.; Gil, R.; Chales, G.; Rodriguez, A.

    1993-01-01

    The work presents the methodology used in Cuba for siting, designing, and building a radioactive waste repository safely. This methodology covers technical and socio-economic factors as well as design and construction aspects, so as to obtain a safe site for this kind of repository under Cuba's special conditions. Applying this methodology will result in a safe repository

  19. Applying Gradient Descent in Convolutional Neural Networks

    Science.gov (United States)

    Cui, Nan

    2018-04-01

    With the development of integrated circuits and computer science, people care more and more about solving practical issues via information technologies. Along with that, a new subject called Artificial Intelligence (AI) has come up. One popular research interest in AI is recognition algorithms. In this paper, one of the most common algorithms, the Convolutional Neural Network (CNN), will be introduced for image recognition. Understanding its theory and structure is of great significance for every scholar who is interested in this field. A convolutional neural network is an artificial neural network that combines the mathematical operation of convolution with a neural network. The hierarchical structure of a CNN provides reliable computing speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Meanwhile, by combining the Back Propagation (BP) mechanism and the Gradient Descent (GD) method, CNNs have the ability to self-study and perform in-depth learning. Basically, BP provides an opportunity for backward feedback that enhances reliability, and GD is used for the self-training process. This paper mainly discusses the CNN and the related BP and GD algorithms, including the basic structure and function of the CNN, details of each layer, the principles and features of BP and GD, and some examples in practice, with a summary in the end.
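
The gradient descent update at the heart of this paper can be shown in isolation. A minimal sketch on a toy quadratic objective (our own example, not a CNN loss):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a differentiable function by repeatedly stepping
    against its gradient: x <- x - lr * grad(x)."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3); minimum at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # 3.0
```

In a CNN, backpropagation supplies the gradient of the loss with respect to every shared convolution weight, and this same update rule is applied to each of them.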

  20. New methodologies of biological dosimetry applied to human protection

    International Nuclear Information System (INIS)

    Catena, C.; Parasacchi, P.; Conti, D.; Righi, E.

    1995-04-01

    Biological dosimetry is a diagnostic methodology for measuring the individual dose absorbed in the case of accidental overexposure to ionizing radiation. It is demonstrated how in vitro radiobiological and chemobiological studies using cytogenetic methods (counts of chromosomal aberrations and micronuclei) on human lymphocytes from healthy subjects and individuals undergoing radiotherapy or chemotherapy, as well as on lymphocytes of mammals other than man (comparative cytogenetics), can help to increase the basic radiobiological and chemobiological scientific knowledge. Such information makes a valid contribution to the understanding of the action of ionizing radiation or pharmaceuticals on cells and, in turn, can be of value to human radioprotection and chemoprotection. Cytogenetic studies can be summarized as follows: a) biodosimetry (estimate of the dose received after accidental events); b) individual radiosensitivity (level of individual response); c) clinical radiobiology and chemobiology (individual response to radiopharmaceuticals, to radiotherapy and to chemopharmaceuticals); d) comparative radiobiology (cytogenetic studies on species other than man); e) animal models in environmental surveillance

  1. Spatiotemporal mapping of interictal spike propagation: a novel methodology applied to pediatric intracranial EEG recordings.

    Directory of Open Access Journals (Sweden)

    Samuel Tomlinson

    2016-12-01

    Full Text Available Synchronized cortical activity is implicated in both normative cognitive functioning and many neurological disorders. For epilepsy patients with intractable seizures, irregular patterns of synchronization within the epileptogenic zone (EZ) are believed to provide the network substrate through which seizures initiate and propagate. Mapping the EZ prior to epilepsy surgery is critical for detecting seizure networks in order to achieve post-surgical seizure control. However, automated techniques for characterizing epileptic networks have yet to gain traction in the clinical setting. Recent advances in signal processing and spike detection have made it possible to examine the spatiotemporal propagation of interictal spike discharges across the epileptic cortex. In this study, we present a novel methodology for detecting, extracting, and visualizing spike propagation and demonstrate its potential utility as a biomarker for the epileptogenic zone. Eighteen pre-surgical intracranial EEG recordings were obtained from pediatric patients ultimately experiencing favorable (i.e., seizure-free, n = 9) or unfavorable (i.e., seizure-persistent, n = 9) surgical outcomes. Novel algorithms were applied to extract multi-channel spike discharges and visualize their spatiotemporal propagation. Quantitative analysis of spike propagation was performed using trajectory clustering and spatial autocorrelation techniques. Comparison of interictal propagation patterns revealed an increase in trajectory organization (i.e., spatial autocorrelation) among Sz-Free patients compared to Sz-Persist patients. The pathophysiological basis and clinical implications of these findings are considered.
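
    Spatial autocorrelation of the kind used to quantify trajectory organization can be illustrated with Moran's I, a standard measure (the electrode layout, weights and values below are invented; the study's actual statistic may differ):

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I spatial autocorrelation for values at n sites,
    given an n x n spatial weight matrix (zero on the diagonal)."""
    values = np.asarray(values, dtype=float)
    z = values - values.mean()           # deviations from the mean
    n = len(values)
    num = n * np.sum(weights * np.outer(z, z))
    den = weights.sum() * np.sum(z ** 2)
    return num / den

# Hypothetical example: 4 electrodes on a line, adjacent pairs weighted 1.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
smooth = morans_i([1.0, 2.0, 3.0, 4.0], W)   # spatially organised values
rough = morans_i([1.0, 4.0, 2.0, 3.0], W)    # spatially disorganised values
```

    Positive values indicate organized (autocorrelated) spatial patterns; values near or below zero indicate disorganization.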

  2. Managing Complex Battlespace Environments Using Attack the Network Methodologies

    DEFF Research Database (Denmark)

    Mitchell, Dr. William L.

    This paper examines the last 8 years of development and application of Attack the Network (AtN) intelligence methodologies for creating shared situational understanding of complex battlespace environments and for developing deliberate targeting frameworks. It presents a short history of their development and shows how they are integrated into operational planning through strategies of deliberate targeting for modern operations. The paper draws on experience and case studies from Iraq, Syria, and Afghanistan, and offers lessons learned as well as insight into the future of these methodologies, including their possible application at the national security level for managing longer strategic endeavors.

  3. Applying neural networks to optimize instrumentation performance

    Energy Technology Data Exchange (ETDEWEB)

    Start, S.E.; Peters, G.G.

    1995-06-01

    Well calibrated instrumentation is essential in providing meaningful information about the status of a plant. Signals from plant instrumentation frequently have inherent non-linearities, may be affected by environmental conditions and can therefore cause calibration difficulties for the people who maintain them. Two neural network approaches are described in this paper for improving the accuracy of a non-linear, temperature-sensitive level probe used in Experimental Breeder Reactor II (EBR-II) that was difficult to calibrate.

  4. Applying neural networks to optimize instrumentation performance

    International Nuclear Information System (INIS)

    Start, S.E.; Peters, G.G.

    1995-01-01

    Well calibrated instrumentation is essential in providing meaningful information about the status of a plant. Signals from plant instrumentation frequently have inherent non-linearities, may be affected by environmental conditions and can therefore cause calibration difficulties for the people who maintain them. Two neural network approaches are described in this paper for improving the accuracy of a non-linear, temperature-sensitive level probe used in Experimental Breeder Reactor II (EBR-II) that was difficult to calibrate.
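
    A sensor-correction network of the kind described can be sketched as a small feed-forward net trained by gradient descent (a minimal illustration on synthetic data; the probe model and all numbers are assumptions, not EBR-II data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical probe: the raw reading depends non-linearly on the true level
# and drifts with temperature (both behaviours invented for illustration).
def probe_reading(level, temp):
    return np.tanh(1.5 * level) + 0.05 * temp

levels = rng.uniform(0.0, 1.0, 500)
temps = rng.uniform(-1.0, 1.0, 500)
X = np.column_stack([probe_reading(levels, temps), temps])  # inputs: signal + temp
y = levels.reshape(-1, 1)                                   # target: true level

# One hidden layer; plain full-batch gradient descent on mean squared error.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)        # backpropagated gradients
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

rmse = np.sqrt(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

    Feeding temperature in alongside the signal lets the network learn the environmental dependence instead of treating it as calibration error.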

  5. Applying Lean Six Sigma methodology to reduce cesarean section rate.

    Science.gov (United States)

    Chai, Ze-Ying; Hu, Hua-Min; Ren, Xiu-Ling; Zeng, Bao-Jin; Zheng, Ling-Zhi; Qi, Feng

    2017-06-01

    This study aims to reduce cesarean section rate and increase rate of vaginal delivery. By using Lean Six Sigma (LSS) methodology, the cesarean section rate was investigated and analyzed through a 5-phase roadmap consisting of Define, Measure, Analyze, Improve, and Control. The principal causes of cesarean section were identified, improvement measures were implemented, and the rate of cesarean section before and after intervention was compared. After patients with a valid medical reason for cesarean were excluded, the main causes of cesarean section were maternal request, labor pain, parturient women assessment, and labor observation. A series of measures was implemented, including an improved parturient women assessment system, strengthened pregnancy nutrition guidance, implementation of painless labor techniques, enhanced midwifery team building, and promotion of childbirth-assist skills. Ten months after introduction of the improvement measures, the cesarean section rate decreased from 41.83% to 32.00%, and the Six Sigma score (ie, Z value) increased from 1.706 to 1.967 (P < .001). LSS is an effective way to reduce the rate of cesarean section. © 2016 John Wiley & Sons, Ltd.

  6. A Network Based Methodology to Reveal Patterns in Knowledge Transfer

    Directory of Open Access Journals (Sweden)

    Orlando López-Cruz

    2015-12-01

    Full Text Available This paper motivates, presents, and demonstrates in use a methodology based on complex network analysis to support research aimed at identifying sources in the process of knowledge transfer at the interorganizational level. The importance of this methodology is that it states a unified model for revealing knowledge-sharing patterns and for comparing results across multiple studies using data from different periods of time and different sectors of the economy. The methodology does not address the underlying statistical processes; for those, national statistics departments (NSDs) provide documents and tools on their websites. Rather, the proposal provides a guide for modeling inferences drawn from data processing, revealing links between sources and recipients of transferred knowledge that the recipient identifies as the main source for new knowledge creation. Some national statistics departments set as the objective of these surveys the characterization of innovation dynamics in firms and the analysis of the use of public support instruments, and scholars conduct a variety of studies from this characterization. Measures of the network composed of manufacturing firms and other organizations form the basis for investigating the structure that emerges when firms take ideas from other organizations to incept innovations. These two sets of actors constitute a two-mode network: each link connects two nodes, one acting as the source of an idea (an organization, or an event organized by an organization, that "provides" ideas) and the other as the destination (a firm that receives them). The resulting design satisfies the objective of being a methodological model to identify sources of knowledge effectively used in innovation.
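
    A two-mode (firm x source) network of the kind described can be represented by an incidence matrix, from which the main knowledge sources and the one-mode firm projection follow directly (the firms, sources and ties below are invented):

```python
import numpy as np

# Hypothetical two-mode incidence matrix: entry 1 means the firm reported
# that organisation/event as a source of ideas for its innovations.
firms = ["F1", "F2", "F3", "F4"]
sources = ["university", "trade_fair", "supplier"]
B = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 1, 0],
              [1, 1, 0]])

# Degree of each source node = number of firms citing it: reveals main sources.
source_degree = B.sum(axis=0)
main_source = sources[int(source_degree.argmax())]

# One-mode projection onto firms: firms are linked when they share a source,
# weighted by the number of shared sources.
firm_projection = B @ B.T
np.fill_diagonal(firm_projection, 0)
```

    The projection is the structure "that emerges from taking ideas from other organizations" which the paper proposes to measure.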

  7. SEMANTIC NETWORKS: THEORETICAL, TECHNICAL, METHODOLOGIC AND ANALYTICAL ASPECTS

    Directory of Open Access Journals (Sweden)

    José Ángel Vera Noriega

    2005-09-01

    Full Text Available This work reviews the methodological procedures and precautions involved in measuring connotative meanings for use in the elaboration of instruments with ethnic validity. Starting from the techniques originally proposed by Figueroa et al. (1981) and later described by Lagunes (1993), the intention is to offer a didactic overview of how to carry out measurement with semantic networks, introducing some recommendations derived from studies performed with this method.

  8. Artificial Neural Network applied to lightning flashes

    Science.gov (United States)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behavior of lightning discharges with greater precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in C with the OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect significant differences between frames in a sequence, indicating that something, still unclassified, has occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms analyze the frame image: a brightness algorithm and a shape algorithm. These algorithms detect both the shape and the brightness of the event, discarding irrelevant events such as birds, and determine the exact position of relevant events, allowing the system to track them over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the events' images, and counts their number of discharges. The neural network was implemented using the backpropagation algorithm and was trained with 42 training images containing 57 lightning events (one image can contain more than one lightning flash). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration, a single layer of 20 neurons, achieved a success rate of 95% (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges that had previously been detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and the number of discharges per event was correctly computed.
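
    The frame-difference test at the heart of the detection module can be sketched with plain NumPy (thresholds and frame contents are illustrative assumptions, not the project's values):

```python
import numpy as np

def significant_change(prev, curr, pixel_thresh=30, count_thresh=50):
    """Flag a frame pair when enough pixels changed brightness markedly,
    mimicking the frame-differencing detection step (thresholds invented)."""
    # Cast to a signed type before subtracting 8-bit frames.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_thresh)
    return changed > count_thresh

# Synthetic 100x100 8-bit frames: a dark sky, then a bright vertical streak.
sky = np.full((100, 100), 10, dtype=np.uint8)
flash = sky.copy()
flash[20:80, 48:52] = 250    # 60 x 4 = 240 bright pixels
```

    Only frame pairs passing this cheap test would go on to the brightness and shape analysis described above.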

  9. Investigating DMOs through the Lens of Social Network Analysis: Theoretical Gaps, Methodological Challenges and Practitioner Perspectives

    Directory of Open Access Journals (Sweden)

    Dean HRISTOV

    2015-06-01

    Full Text Available The extant literature on networks in tourism management research has traditionally acknowledged destinations as the primary unit of analysis. This paper takes an alternative perspective and positions Destination Management Organisations (DMOs) at the forefront of today’s tourism management research agenda. Whilst providing a relatively structured approach to generating enquiry, network research vis-à-vis Social Network Analysis (SNA) in DMOs is often surrounded by serious impediments. Embedded in the network literature, this conceptual article aims to provide a practitioner perspective on addressing the obstacles to undertaking network studies in DMO organisations. A simple, three-step methodological framework for investigating DMOs as interorganisational networks of member organisations is proposed in response to complexities in network research. The rationale behind introducing such a framework lies in the opportunity to trigger discussions and encourage further academic contributions embedded in both theory and practice. Academic and practitioner contributions are likely to yield insights into the importance of network methodologies applied to DMO organisations.

  10. A Fault Diagnosis Methodology for Gear Pump Based on EEMD and Bayesian Network.

    Science.gov (United States)

    Liu, Zengkai; Liu, Yonghong; Shan, Hongkai; Cai, Baoping; Huang, Qing

    2015-01-01

    This paper proposes a fault diagnosis methodology for a gear pump based on the ensemble empirical mode decomposition (EEMD) method and the Bayesian network. Essentially, the presented scheme is a multi-source information fusion based methodology. Compared with conventional fault diagnosis using only EEMD, the proposed method is able to take advantage of all useful information besides sensor signals. The presented diagnostic Bayesian network consists of a fault layer, a fault feature layer and a multi-source information layer. Vibration signals from sensor measurement are decomposed by the EEMD method and the energies of the intrinsic mode functions (IMFs) are calculated as fault features. These features are added to the fault feature layer in the Bayesian network. The other sources of useful information are added to the information layer. The generalized three-layer Bayesian network can be developed by fully incorporating faults and fault symptoms as well as other useful information such as naked-eye inspection and maintenance records. Therefore, diagnostic accuracy and capacity can be improved. The proposed methodology is applied to the fault diagnosis of a gear pump, and the structure and parameters of the Bayesian network are established. Compared with artificial neural network and support vector machine classification algorithms, the proposed model has the best diagnostic performance when only sensor data is used. A case study has demonstrated that some information from human observation or system repair records is very helpful to the fault diagnosis. It is effective and efficient in diagnosing faults based on uncertain, incomplete information.
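
    The fault-feature step, computing IMF energies from the decomposition, can be sketched as follows (the "IMFs" below are synthetic stand-ins; a real decomposition would come from an EEMD implementation):

```python
import numpy as np

def imf_energies(imfs):
    """Energy of each intrinsic mode function: E_k = sum of squared samples.
    Normalised energies serve as fault features for the Bayesian network."""
    energies = np.array([np.sum(imf ** 2) for imf in imfs])
    return energies / energies.sum()

# Illustrative stand-in for an EEMD result: three components of a vibration
# signal (high-frequency mode, mid-frequency mode, residual trend).
t = np.linspace(0, 1, 1000)
imfs = np.array([np.sin(2 * np.pi * 50 * t),
                 0.5 * np.sin(2 * np.pi * 10 * t),
                 0.1 * t])
features = imf_energies(imfs)
```

    Shifts in how energy distributes across the modes are what the fault feature layer of the network conditions on.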

  11. A multi-criteria decision aid methodology to design electric vehicles public charging networks

    Directory of Open Access Journals (Sweden)

    João Raposo

    2015-05-01

    Full Text Available This article presents a new multi-criteria decision aid methodology, dynamic-PROMETHEE, here used to design electric vehicle charging networks. In applying this methodology to a Portuguese city, results suggest that it is effective in designing electric vehicle charging networks, generating time and policy based scenarios, considering offer and demand and the city’s urban structure. Dynamic-PROMETHEE adds to the already known PROMETHEE’s characteristics other useful features, such as decision memory over time, versatility and adaptability. The case study, used here to present the dynamic-PROMETHEE, served as inspiration and base to create this new methodology. It can be used to model different problems and scenarios that may present similar requirement characteristics.

  12. A multi-criteria decision aid methodology to design electric vehicles public charging networks

    Science.gov (United States)

    Raposo, João; Rodrigues, Ana; Silva, Carlos; Dentinho, Tomaz

    2015-05-01

    This article presents a new multi-criteria decision aid methodology, dynamic-PROMETHEE, here used to design electric vehicle charging networks. In applying this methodology to a Portuguese city, results suggest that it is effective in designing electric vehicle charging networks, generating time and policy based scenarios, considering offer and demand and the city's urban structure. Dynamic-PROMETHEE adds to the already known PROMETHEE's characteristics other useful features, such as decision memory over time, versatility and adaptability. The case study, used here to present the dynamic-PROMETHEE, served as inspiration and base to create this new methodology. It can be used to model different problems and scenarios that may present similar requirement characteristics.
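
    The outranking logic underlying PROMETHEE can be illustrated with a short PROMETHEE II net-flow computation using the "usual" (0/1) preference function (the candidate sites, scores and weights are invented; the dynamic extension is not shown):

```python
import numpy as np

def promethee_ii(scores, weights):
    """PROMETHEE II net outranking flows with the 'usual' preference
    function (1 if a beats b on a criterion, else 0). Higher criterion
    scores are taken as better."""
    n = scores.shape[0]
    pi = np.zeros((n, n))                  # aggregated preference of a over b
    for a in range(n):
        for b in range(n):
            if a != b:
                pref = (scores[a] > scores[b]).astype(float)
                pi[a, b] = np.dot(weights, pref)
    phi_plus = pi.sum(axis=1) / (n - 1)    # how strongly a outranks the rest
    phi_minus = pi.sum(axis=0) / (n - 1)   # how strongly a is outranked
    return phi_plus - phi_minus            # net flow: rank by descending value

# Three hypothetical charging-station sites scored on demand, cost, grid access.
scores = np.array([[8.0, 3.0, 7.0],
                   [6.0, 9.0, 5.0],
                   [4.0, 5.0, 6.0]])
weights = np.array([0.5, 0.3, 0.2])
net_flow = promethee_ii(scores, weights)
ranking = np.argsort(-net_flow)
```

    Net flows always sum to zero; the site with the largest net flow is the preferred location.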

  13. Methodology for neural networks prototyping. Application to traffic control

    Energy Technology Data Exchange (ETDEWEB)

    Belegan, I.C.

    1998-07-01

    The work described in this report was carried out in the context of the European project ASTORIA (Advanced Simulation Toolbox for Real-World Industrial Application in Passenger Management and Adaptive Control), and concerns the development of an advanced toolbox for complex transportation systems. Our work was focused on the methodology for prototyping a set of neural networks corresponding to specific strategies for traffic control and congestion management. The tool used for prototyping is SNNS (Stuttgart Neural Network Simulator), developed at the University of Stuttgart, Institute for Parallel and Distributed High Performance Systems, and the real data from the field were provided by ZELT. This report is structured into six parts. The introduction gives some insights about traffic control and its approaches. The second chapter discusses the various existing control strategies. The third chapter is an introduction to the field of neural networks. The data analysis and pre-processing are described in the fourth chapter. In the fifth chapter, the methodology for prototyping the neural networks is presented. Finally, conclusions and further work are presented. (author) 14 refs.

  14. Delayed switching applied to memristor neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Frank Z.; Yang Xiao; Lim Guan [Future Computing Group, School of Computing, University of Kent, Canterbury (United Kingdom); Helian Na [School of Computer Science, University of Hertfordshire, Hatfield (United Kingdom); Wu Sining [Xyratex, Havant (United Kingdom); Guo Yike [Department of Computing, Imperial College, London (United Kingdom); Rashid, Md Mamunur [CERN, Geneva (Switzerland)

    2012-04-01

    Magnetic flux and electric charge are linked in a memristor. We reported recently that a memristor has a peculiar effect in which the switching takes place with a time delay because a memristor possesses a certain inertia. This effect was named the "delayed switching effect." In this work, we elaborate on the importance of delayed switching in a brain-like computer using memristor neural networks. The effect is used to control the switching of a memristor synapse between two neurons that fire together (the Hebbian rule). A theoretical formula is found, and the design is verified by a simulation. We have also built an experimental setup consisting of electronic memristive synapses and electronic neurons.

  15. Delayed switching applied to memristor neural networks

    International Nuclear Information System (INIS)

    Wang, Frank Z.; Yang Xiao; Lim Guan; Helian Na; Wu Sining; Guo Yike; Rashid, Md Mamunur

    2012-01-01

    Magnetic flux and electric charge are linked in a memristor. We reported recently that a memristor has a peculiar effect in which the switching takes place with a time delay because a memristor possesses a certain inertia. This effect was named the "delayed switching effect." In this work, we elaborate on the importance of delayed switching in a brain-like computer using memristor neural networks. The effect is used to control the switching of a memristor synapse between two neurons that fire together (the Hebbian rule). A theoretical formula is found, and the design is verified by a simulation. We have also built an experimental setup consisting of electronic memristive synapses and electronic neurons.

  16. Consensus-based methodology for detecting communities in multilayered networks

    Science.gov (United States)

    Karimi-Majd, Amir-Mohsen; Fathian, Mohammad; Makrehchi, Masoud

    2018-03-01

    Finding groups of network users who are densely connected with each other has emerged as an interesting problem in the area of social network analysis. These groups, or so-called communities, are hidden in the behavior of users. Most studies assume that such behavior can be understood by focusing on user interfaces, their behavioral attributes, or a combination of these network layers (i.e., interfaces with their attributes). They also assume that all network layers refer to the same behavior. However, in real-life networks, users' behavior in one layer may differ from their behavior in another. To cope with these issues, this article proposes a consensus-based community detection approach (CBC). CBC finds communities among nodes at each layer in parallel; the results of the layers are then aggregated using a consensus clustering method. This means that different behaviors can be detected and used in the analysis. As another significant advantage, the methodology is able to handle missing values. Three experiments on real-life and computer-generated datasets were conducted to evaluate the performance of CBC. The results indicate the superiority and stability of CBC in comparison to other approaches.
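
    The consensus step, aggregating per-layer community labels, can be sketched as a co-association matrix (a minimal illustration; the toy labels and majority threshold are assumptions, and a full consensus clustering would then cluster this matrix):

```python
import numpy as np

def consensus_matrix(layer_labels):
    """Co-association consensus over per-layer community labels: entry (i, j)
    is the fraction of layers in which nodes i and j share a community.
    A node marked None in a layer (a missing value) is skipped for that layer."""
    n = len(layer_labels[0])
    counts = np.zeros((n, n))
    votes = np.zeros((n, n))
    for labels in layer_labels:
        for i in range(n):
            for j in range(n):
                if labels[i] is None or labels[j] is None:
                    continue
                votes[i, j] += 1
                if labels[i] == labels[j]:
                    counts[i, j] += 1
    return counts / np.maximum(votes, 1)

# Hypothetical 5-node network with communities found separately in 3 layers
# (None marks a node missing from a layer).
layers = [[0, 0, 0, 1, 1],
          [0, 0, 1, 1, 1],
          [0, 0, 0, None, 1]]
C = consensus_matrix(layers)
together = C >= 0.5    # pairs agreeing in a majority of observed layers
```

    Because missing nodes simply cast no vote, the consensus degrades gracefully with incomplete layers, which is the property the abstract highlights.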

  17. A neural network based methodology to predict site-specific spectral acceleration values

    Science.gov (United States)

    Kamatchi, P.; Rajasankar, J.; Ramana, G. V.; Nagpal, A. K.

    2010-12-01

    A general neural network based methodology that has the potential to replace the computationally-intensive site-specific seismic analysis of structures is proposed in this paper. The basic framework of the methodology consists of a feed forward back propagation neural network algorithm with one hidden layer to represent the seismic potential of a region and soil amplification effects. The methodology is implemented and verified with parameters corresponding to Delhi city in India. For this purpose, strong ground motions are generated at bedrock level for a chosen site in Delhi due to earthquakes considered to originate from the central seismic gap of the Himalayan belt using necessary geological as well as geotechnical data. Surface level ground motions and corresponding site-specific response spectra are obtained by using a one-dimensional equivalent linear wave propagation model. Spectral acceleration values are considered as a target parameter to verify the performance of the methodology. Numerical studies carried out to validate the proposed methodology show that the errors in predicted spectral acceleration values are within acceptable limits for design purposes. The methodology is general in the sense that it can be applied to other seismically vulnerable regions and also can be updated by including more parameters depending on the state-of-the-art in the subject.

  18. A Bayesian maximum entropy-based methodology for optimal spatiotemporal design of groundwater monitoring networks.

    Science.gov (United States)

    Hosseini, Marjan; Kerachian, Reza

    2017-09-01

    This paper presents a new methodology for analyzing the spatiotemporal variability of water table levels and for redesigning a groundwater level monitoring network (GLMN) using the Bayesian Maximum Entropy (BME) technique and a multi-criteria decision-making approach based on ordered weighted averaging (OWA). The spatial sampling is determined using a hexagonal gridding pattern together with a new method proposed to assign a removal priority number to each pre-existing station. To design the temporal sampling, a new approach is applied to account for the uncertainty caused by lack of information. In this approach, different time lag values are tested against another source of information, the simulation results of a numerical groundwater flow model. Furthermore, to incorporate the uncertainties in the available monitoring data, the flexibility of the BME interpolation technique is exploited by applying soft data, improving the accuracy of the calculations. The methodology is applied to the Dehgolan plain in northwestern Iran. Based on the results, a configuration of 33 monitoring stations on a regular hexagonal grid of side length 3600 m is proposed, with a time lag of 5 weeks between samples. Since the variance estimation errors of the BME method are almost identical for the redesigned and existing networks, the redesigned monitoring network is more cost-effective and efficient than the existing network of 52 stations with a monthly sampling frequency.
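
    The OWA operator used in the multi-criteria step is simple to state in code: sort the criterion scores, then apply position weights (the station scores and weight vectors below are invented for illustration):

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: sort the scores in descending order,
    then take the weighted sum. Weights must sum to 1."""
    ordered = np.sort(np.asarray(values, dtype=float))[::-1]
    return float(np.dot(np.asarray(weights, dtype=float), ordered))

# Hypothetical monitoring-station scores on three criteria.
station = [0.9, 0.4, 0.2]
optimistic = [0.6, 0.3, 0.1]    # emphasises the best criterion
pessimistic = [0.1, 0.3, 0.6]   # emphasises the worst criterion

opt_score = owa(station, optimistic)    # 0.6*0.9 + 0.3*0.4 + 0.1*0.2 = 0.68
pes_score = owa(station, pessimistic)   # 0.1*0.9 + 0.3*0.4 + 0.6*0.2 = 0.33
```

    Unlike a plain weighted average, the weights attach to rank positions rather than to fixed criteria, which is how OWA encodes the decision-maker's optimism or pessimism.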

  19. Teaching and Learning Methodologies Supported by ICT Applied in Computer Science

    Science.gov (United States)

    Capacho, Jose

    2016-01-01

    The main objective of this paper is to show a set of new methodologies applied in the teaching of Computer Science using ICT. The methodologies are framed in the conceptual basis of the following sciences: Psychology, Education and Computer Science. The theoretical framework of the research is supported by Behavioral Theory, Gestalt Theory.…

  20. Methodological Approaches to Locating Outlets of the Franchise Retail Network

    Directory of Open Access Journals (Sweden)

    Grygorenko Tetyana M.

    2016-08-01

    Full Text Available Methodical approaches to selecting strategic areas for locating future outlets of a franchise retail network are presented. The main stages in assessing strategic areas for locating future outlets are determined, and evaluation criteria are suggested. Since such selection requires consideration of a variety of indicators and assessment directions, the author proposes an evaluation scale that generalizes and organizes the research data and the calculations from the previous stages of the analysis. The most important criteria for, and the sequence of, selecting potential franchisees for the franchise retail network are identified, and a technique for their evaluation is proposed. The use of the suggested methodological approaches will allow the franchiser to make sound decisions on the selection of potential target markets, minimizing the time and effort spent selecting franchisees and hence optimizing the development of the franchise retail network, which will contribute to the formation of its structure.

  1. METHODOLOGY OF RESEARCH AND DEVELOPMENT MANAGEMENT OF REGIONAL NETWORK ECONOMY

    Directory of Open Access Journals (Sweden)

    O.I. Botkin

    2007-06-01

    Full Text Available Information and communication Internet technologies, now adopted in practically every branch of the Russian regional economy, exert a huge influence on the development of economic relations in regional business: new forms of interaction between economic agents are emerging, and the information and organizational structures of regional business management are changing. The integrated expression of these innovations is the regional network economy: an interactive environment in which social, economic and commodity-monetary relations between the economic agents of a region are performed at high speed and with minimal transaction costs (in R. H. Coase's sense), using the interactive opportunities of the global Internet. The relevance of research into the phenomenon of the regional network economy is driven, first of all, by the need to substantiate a methodology for its development and to devise mechanisms for managing its infrastructure, with the purpose of increasing the efficiency of regional business. In our opinion, solving these problems will be a defining factor in maintaining effective economic development and in the growth of the Russian regional economy in the near future.

  2. Earthquake Complex Network applied along the Chilean Subduction Zone.

    Science.gov (United States)

    Martin, F.; Pasten, D.; Comte, D.

    2017-12-01

    In recent years, earthquake complex networks have been used as a useful tool to describe and characterize the behavior of seismicity. An earthquake complex network is built by dividing three-dimensional space into cubic cells: a cell that contains a hypocenter becomes a node, and connections between nodes follow the time sequence of the seismic events. In this way we obtain a spatio-temporal characterization of a specific region from its seismicity. In this work, we apply complex networks to characterize the subduction zone along the coast of Chile using two networks: a directed and an undirected network. The directed network takes into consideration the time direction of the connections, which is very important for the connectivity of the network: we define the connectivity k_i of the i-th node as the number of connections going out of that node, plus its self-connections (if two seismic events occur successively in time in the same cubic cell, we have a self-connection). The undirected network results from removing the directions of the connections and the self-connections from the directed network. These two networks were built using seismic events recorded by the CSN (Chilean Seismological Center), including the last large earthquakes in Iquique (April 2014) and Illapel (September 2015). The directed network shows a change in the value of the critical exponent along the Chilean coast. The undirected network shows small-world behavior without important changes in the topology of the network. The complex network analysis therefore offers a new, simple way to characterize the Chilean subduction zone, one that can be compared with other methods to obtain more detail about the behavior of seismicity in this region.
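
    The construction described, cubic cells as nodes and time-ordered connections with self-connections kept, can be sketched as follows (the cell size and the toy catalogue are assumptions; real input would be a CSN-style catalogue ordered by origin time):

```python
import numpy as np

def build_directed_network(hypocenters, cell_size):
    """Map each hypocenter to a cubic cell and connect the cells hosting
    consecutive events in time (self-loops kept, as in the directed network)."""
    cells = [tuple(np.floor(h / cell_size).astype(int)) for h in hypocenters]
    edges = []
    out_degree = {}
    for a, b in zip(cells[:-1], cells[1:]):
        edges.append((a, b))
        out_degree[a] = out_degree.get(a, 0) + 1   # k_i: outgoing connections
    return cells, edges, out_degree

# Illustrative catalogue: (x, y, depth) in km, already ordered by origin time.
events = np.array([[10.0, 5.0, 30.0],
                   [11.0, 5.5, 31.0],   # falls in the same 5-km cell as event 1
                   [40.0, 20.0, 60.0],
                   [10.5, 5.2, 33.0]])
cells, edges, k = build_directed_network(events, cell_size=5.0)
```

    The undirected variant would simply drop edge directions and discard the self-loops before computing degree distributions.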

  3. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    Energy Technology Data Exchange (ETDEWEB)

    Tarifeño-Saldivia, Ariel, E-mail: atarifeno@cchen.cl, E-mail: atarisal@gmail.com; Pavez, Cristian; Soto, Leopoldo [Comisión Chilena de Energía Nuclear, Casilla 188-D, Santiago (Chile); Center for Research and Applications in Plasma Physics and Pulsed Power, P4, Santiago (Chile); Departamento de Ciencias Fisicas, Facultad de Ciencias Exactas, Universidad Andres Bello, Republica 220, Santiago (Chile); Mayer, Roberto E. [Instituto Balseiro and Centro Atómico Bariloche, Comisión Nacional de Energía Atómica and Universidad Nacional de Cuyo, San Carlos de Bariloche R8402AGP (Argentina)

    2014-01-15

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.
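
    The core idea of the statistical model, recovering the number of detected events from the integrated charge given a pulse-mode calibration, can be illustrated numerically (all numbers are invented; the model in the paper is developed in full detail and includes more than this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

# Pulse-mode calibration: charge per single detected neutron, measured
# event by event (hypothetical values, in arbitrary charge units).
single_event_charge = rng.normal(1.0, 0.2, 2000)
q_mean = single_event_charge.mean()
q_std = single_event_charge.std()

# During a burst the pulses pile up and only the total charge is measured.
Q = 480.0   # hypothetical accumulated charge from the burst

# Model: N events each contribute about q_mean, so N_hat = Q / q_mean, and
# the relative spread of a sum of N i.i.d. charges shrinks as 1/sqrt(N).
N_hat = Q / q_mean
sigma_N = (q_std / q_mean) * np.sqrt(N_hat)
```

    The detected-event estimate would then be converted to a neutron yield through the detector's efficiency, which this sketch omits.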

  4. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    International Nuclear Information System (INIS)

    Tarifeño-Saldivia, Ariel; Pavez, Cristian; Soto, Leopoldo; Mayer, Roberto E.

    2014-01-01

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.

  5. Applying the Tropos Methodology for Analysing Web Services Requirements and Reasoning about Qualities of Services

    NARCIS (Netherlands)

    Aiello, Marco; Giorgini, Paolo

    2004-01-01

    The shift in software engineering from the design, implementation and management of isolated software elements towards a network of autonomous interoperable services is calling for a shift in the way software is designed. We propose the use of the agent-oriented methodology Tropos for the analysis of

  6. Methodology for Evaluating Safety System Operability using Virtual Parameter Network

    International Nuclear Information System (INIS)

    Park, Sukyoung; Heo, Gyunyoung; Kim, Jung Taek; Kim, Tae Wan

    2014-01-01

    KAERI (Korea Atomic Energy Research Institute) and UTK (University of Tennessee Knoxville) are working on the I-NERI project to address this problem. This research, carried out with KAERI, proposes a methodology that provides an alternative signal when the reliability of some instrumentation cannot be guaranteed. The proposed methodology assumes that several instruments are working normally under the power supply condition, because instrumentation survivability itself is not considered. Thus, the concept of the Virtual Parameter Network (VPN) is used to identify the associations between plant parameters. This paper is an extended version of the paper submitted to the last KNS meeting, with a revised methodology and the results of a case study added. In previous research, an Artificial Neural Network (ANN) inferential technique was used as the estimation model, but that model produced a different estimate on every run due to random bias. Therefore, an Auto-Associative Kernel Regression (AAKR) model, which has the same number of inputs and outputs, is used for estimation. Furthermore, whereas the importance measures of the previous method depended on the estimation model, the importance measures of the improved method are independent of it. In this study, we proposed a methodology to identify the internal state of a power plant when a severe accident happens, and validated it through a case study. An SBLOCA, which contributes strongly to severe accidents, is considered as the initiating event, and the relationships among parameters have been identified. The VPN is able to identify which parameters have to be observed and which parameters can substitute for a missing parameter when some instruments fail in a severe accident. Through the results we have identified that parameters number 2, 3 and 4 commonly have high connectivity while

  7. Methodology for Evaluating Safety System Operability using Virtual Parameter Network

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sukyoung; Heo, Gyunyoung [Kyung Hee Univ., Yongin (Korea, Republic of); Kim, Jung Taek [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Tae Wan [Kepco International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2014-05-15

    KAERI (Korea Atomic Energy Research Institute) and UTK (University of Tennessee Knoxville) are working on the I-NERI project to address this problem. This research, carried out with KAERI, proposes a methodology that provides an alternative signal when the reliability of some instrumentation cannot be guaranteed. The proposed methodology assumes that several instruments are working normally under the power supply condition, because instrumentation survivability itself is not considered. Thus, the concept of the Virtual Parameter Network (VPN) is used to identify the associations between plant parameters. This paper is an extended version of the paper submitted to the last KNS meeting, with a revised methodology and the results of a case study added. In previous research, an Artificial Neural Network (ANN) inferential technique was used as the estimation model, but that model produced a different estimate on every run due to random bias. Therefore, an Auto-Associative Kernel Regression (AAKR) model, which has the same number of inputs and outputs, is used for estimation. Furthermore, whereas the importance measures of the previous method depended on the estimation model, the importance measures of the improved method are independent of it. In this study, we proposed a methodology to identify the internal state of a power plant when a severe accident happens, and validated it through a case study. An SBLOCA, which contributes strongly to severe accidents, is considered as the initiating event, and the relationships among parameters have been identified. The VPN is able to identify which parameters have to be observed and which parameters can substitute for a missing parameter when some instruments fail in a severe accident. Through the results we have identified that parameters number 2, 3 and 4 commonly have high connectivity while
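
    The AAKR estimator used in place of the ANN can be sketched in a few lines: a memory matrix of fault-free observations, Gaussian kernel weights, and a weighted average that returns a vector of the same dimension as the input. The data and the bandwidth below are illustrative assumptions, not values from the study.

```python
import numpy as np

def aakr_estimate(X_mem, x_query, bandwidth):
    """Auto-Associative Kernel Regression: reconstruct a full sensor vector
    from a memory matrix of historical fault-free observations."""
    d2 = np.sum((X_mem - x_query) ** 2, axis=1)   # squared distance to memory
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))      # Gaussian kernel weights
    return (w / w.sum()) @ X_mem                  # same dimension in and out

# Memory of normal operation: three correlated "plant parameters"
rng = np.random.default_rng(0)
t = rng.uniform(0, 1, size=(200, 1))
X_mem = np.hstack([t, 2 * t, 3 * t]) + rng.normal(0, 0.01, (200, 3))

# A failed third sensor reads 0.0 while the plant state is roughly (0.5, 1, 1.5)
x_faulty = np.array([0.5, 1.0, 0.0])
x_hat = aakr_estimate(X_mem, x_faulty, bandwidth=2.0)
print(x_hat)  # the failed channel is pulled back toward its correlated value
```

    Comparing x_faulty with x_hat, the largest residual appears in the failed channel; this is how such a model both flags a faulty sensor and supplies a substitute signal.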

  8. Applied network security monitoring collection, detection, and analysis

    CERN Document Server

    Sanders, Chris

    2013-01-01

    Applied Network Security Monitoring is the essential guide to becoming an NSM analyst from the ground up. This book takes a fundamental approach to NSM, complete with dozens of real-world examples that teach you the key concepts of NSM. Network security monitoring is based on the principle that prevention eventually fails. In the current threat landscape, no matter how much you try, motivated attackers will eventually find their way into your network. At that point, it is your ability to detect and respond to that intrusion that can be the difference between a small incident and a major di

  9. TEACHING AND LEARNING METHODOLOGIES SUPPORTED BY ICT APPLIED IN COMPUTER SCIENCE

    Directory of Open Access Journals (Sweden)

    Jose CAPACHO

    2016-04-01

    Full Text Available The main objective of this paper is to show a set of new methodologies applied in the teaching of Computer Science using ICT. The methodologies are framed in the conceptual basis of the following sciences: Psychology, Education and Computer Science. The theoretical framework of the research is supported by Behavioral Theory, Gestalt Theory, Genetic-Cognitive Psychology Theory and Dialectics Psychology. Based on this framework, the following methodologies were developed: Game Theory, Constructivist Approach, Personalized Teaching, Problem Solving, Cooperative-Collaborative Learning, and Learning Projects using ICT. These methodologies were applied to the teaching-learning process during the Algorithms and Complexity (A&C) course, which belongs to the area of Computer Science. The course develops the concepts of Computers, Complexity and Intractability, Recurrence Equations, Divide and Conquer, Greedy Algorithms, Dynamic Programming, Shortest Path Problem and Graph Theory. The main value of the research is the theoretical support of the methodologies and their application supported by ICT using learning objects. The aforementioned course was built on the Blackboard platform to evaluate the operation of the methodologies. The results of the evaluation are presented for each of them, showing the learning outcomes achieved by the students, which verifies that the methodologies are functional.

  10. Building Modelling Methodologies for Virtual District Heating and Cooling Networks

    Energy Technology Data Exchange (ETDEWEB)

    Saurav, Kumar; Choudhury, Anamitra R.; Chandan, Vikas; Lingman, Peter; Linder, Nicklas

    2017-10-26

    District heating and cooling (DHC) systems are a proven energy solution that has been deployed for many years in a growing number of urban areas worldwide. They comprise a variety of technologies that seek to develop synergies between the production and supply of heat, cooling, domestic hot water and electricity. Although the benefits of DHC systems are significant and have been widely acclaimed, the full potential of modern DHC systems remains largely untapped. There are several opportunities for the development of energy-efficient DHC systems, which will enable the effective exploitation of alternative renewable resources, waste heat recovery, etc., in order to increase overall efficiency and facilitate the transition towards the next generation of DHC systems. This motivates the need for modelling these complex systems. Large-scale modelling of DHC networks is challenging, as they have several components interacting with each other. In this paper we present two building modelling methodologies for the consumer buildings. These models will be further integrated with the network model and the control-system layer to create a virtual test bed for the entire DHC system. The model is validated using data collected from a real-life DHC system located at Lulea, a city on the coast of northern Sweden. The test bed will then be used for simulating various test cases, such as peak energy reduction and overall demand reduction.

  11. Applying graphs and complex networks to football metric interpretation.

    Science.gov (United States)

    Arriaza-Ardiles, E; Martín-González, J M; Zuniga, M D; Sánchez-Flores, J; de Saa, Y; García-Manso, J M

    2018-02-01

    This work presents a methodology for analysing the interactions between players in a football team from the point of view of graph theory and complex networks. We model the complex network of passing interactions between players of the same team in 32 official matches of the Liga de Fútbol Profesional (Spain), using a passing/reception graph. This methodology allows us to understand the play structure of the team by analysing the offensive phases of game-play. We utilise two different strategies for characterising the contribution of the players to the team: the clustering coefficient, and centrality metrics (closeness and betweenness). We show the application of this methodology by analysing the performance of a professional Spanish team according to these metrics and the distribution of passing/reception on the field. Keeping in mind the dynamic nature of collective sports, in the future we will incorporate metrics which allow us to analyse the performance of the team according to the circumstances of game-play and to different contextual variables, such as the utilisation of field space, time, and the ball, in specific tactical situations. Copyright © 2017 Elsevier B.V. All rights reserved.
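
    The metrics named in the abstract can be sketched on a small weighted passing graph with networkx. The pass counts and player labels below are invented for illustration, not data from the study; betweenness and closeness use pass distance = 1/weight so that frequent passing links count as "short".

```python
import networkx as nx

# Hypothetical weighted passing network: edge weight = completed passes
passes = [("GK", "CB1", 12), ("CB1", "CB2", 18), ("CB2", "MF1", 15),
          ("MF1", "MF2", 22), ("MF2", "FW1", 9), ("MF1", "FW1", 7),
          ("CB1", "MF1", 11), ("MF2", "MF1", 20), ("FW1", "MF2", 5)]
G = nx.DiGraph()
G.add_weighted_edges_from(passes)

# Clustering coefficient on the undirected projection of the passing graph
clustering = nx.clustering(G.to_undirected())

# Centrality metrics on pass distances (inverse of pass frequency)
dist = {(u, v): 1.0 / w for u, v, w in passes}
nx.set_edge_attributes(G, dist, "dist")
closeness = nx.closeness_centrality(G, distance="dist")
betweenness = nx.betweenness_centrality(G, weight="dist")

print(max(betweenness, key=betweenness.get))  # the midfielder channels most play
```

    In this toy network the central midfielder MF1 lies on most weighted shortest paths, which is exactly the kind of structural reading the methodology extracts from real match data.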

  12. Network evolution driven by dynamics applied to graph coloring

    International Nuclear Information System (INIS)

    Wu Jian-She; Li Li-Guang; Yu Xin; Jiao Li-Cheng; Wang Xiao-Hua

    2013-01-01

    An evolutionary network driven by dynamics is studied and applied to the graph coloring problem. From an initial structure, both the topology and the coupling weights evolve according to the dynamics. On the other hand, the dynamics of the network are determined by the topology and the coupling weights, so an interesting structure-dynamics co-evolutionary scheme appears. By providing two evolutionary strategies, a network described by the complement of a graph will evolve into several clusters of nodes according to their dynamics. The nodes in each cluster can be assigned the same color and nodes in different clusters different colors. In this way, the co-evolution phenomenon is applied to the graph coloring problem. The proposed scheme is tested on several benchmark graphs for graph coloring.
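
    The dynamics-driven co-evolution scheme itself is specialised, but the problem it solves can be illustrated with a standard greedy baseline on a benchmark-style graph (a hedged sketch, not the authors' method): a proper coloring assigns different colors to adjacent nodes, just as the evolved clusters receive different colors.

```python
import networkx as nx

G = nx.petersen_graph()  # classic benchmark graph, chromatic number 3

# Greedy heuristic: color nodes in order of decreasing degree
coloring = nx.greedy_color(G, strategy="largest_first")

# Verify the coloring is proper: no edge joins two same-colored nodes
assert all(coloring[u] != coloring[v] for u, v in G.edges())
print(len(set(coloring.values())), "colors used")
```

    Greedy coloring never needs more than (maximum degree + 1) colors; the evolutionary scheme in the record aims at the harder goal of approaching the chromatic number on difficult benchmarks.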

  13. The application of network teaching in applied optics teaching

    Science.gov (United States)

    Zhao, Huifu; Piao, Mingxu; Li, Lin; Liu, Dongmei

    2017-08-01

    Network technology has become a creative tool that is changing human productivity, and its rapid development has brought profound changes to our learning, work and life. Network technology has many advantages, such as rich content, various forms, convenient retrieval, timely communication and efficient combination of resources. Network information resources have become new education resources, are applied more and more widely in education, and have now become teaching and learning tools. Network teaching enriches the teaching content and changes the teaching process from traditional knowledge explanation into a new process built on situations, independence and cooperation on a network technology platform. The teacher's role has shifted from classroom lecturing to guiding students to learn better. The network environment only provides a good platform for teaching; a better teaching effect can be achieved only by constantly improving the teaching content. Changchun University of Science and Technology introduced a BB teaching platform, on which the classroom teaching of applied optics can be extended and improved. Teachers assign homework online and students learn independently offline or cooperatively in groups, which expands the time and space of teaching. Teachers use hypertext to present related knowledge of applied optics, rich cases and learning resources, and set up a network interactive platform, a homework submission system, a message board, etc. The teaching platform stimulates students' interest in learning and strengthens interaction in teaching.

  14. A replication and methodological critique of the study "Evaluating drug trafficking on the Tor Network".

    Science.gov (United States)

    Munksgaard, Rasmus; Demant, Jakob; Branwen, Gwern

    2016-09-01

    The development of cryptomarkets has gained increasing attention from academics, including a growing scientific literature on the distribution of illegal goods using cryptomarkets. Dolliver's 2015 article "Evaluating drug trafficking on the Tor Network: Silk Road 2, the Sequel" addresses this theme by evaluating drug trafficking on one of the most well-known cryptomarkets, Silk Road 2.0. Research on cryptomarkets in general, and Dolliver's article in particular, poses a number of new methodological questions. This commentary is structured around a replication of Dolliver's original study. The replication is not based on Dolliver's original dataset, but on a second dataset collected by applying the same methodology. We have found that the results produced by Dolliver differ greatly from those of our replicated study. While a margin of error is to be expected, the inconsistencies we found are too great to attribute to anything other than methodological issues. The analyses and conclusions drawn from studies using these methods are promising and insightful. However, based on the replication of Dolliver's study, we suggest that datasets be made available to other researchers and that methodology and dataset metrics (e.g. number of downloaded pages, error logs) be described thoroughly in the context of web-o-metrics and web crawling. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Event based uncertainty assessment in urban drainage modelling, applying the GLUE methodology

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Beven, K.J.; Jensen, Jacob Birk

    2008-01-01

    In the present paper an uncertainty analysis of an application of the commercial urban drainage model MOUSE is conducted. Applying the Generalized Likelihood Uncertainty Estimation (GLUE) methodology, the model is conditioned on observation time series from two flow gauges as well as on the occurrence of combined sewer overflow. The GLUE methodology is used to test different conceptual setups in order to determine whether one model setup gives a better goodness of fit, conditional on the observations, than another. Moreover, different methodological investigations of GLUE are conducted in order to test whether the uncertainty analysis is unambiguous. It is shown that the GLUE methodology is very applicable in uncertainty analysis of this application of an urban drainage model, although it proved quite difficult to get good fits of the whole time series.
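
    The GLUE procedure itself is short to sketch: sample parameter sets, score each with an informal likelihood, keep the "behavioural" sets above a threshold, and derive prediction bounds from that ensemble. Below, a toy power-law runoff model stands in for MOUSE and a Nash-Sutcliffe efficiency serves as the informal likelihood; all values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "drainage model" standing in for MOUSE: runoff = a * rain ** b
rain = np.linspace(0.1, 10, 50)
obs = 0.8 * rain ** 1.2 + rng.normal(0, 0.3, rain.size)  # synthetic observations

# 1. Monte Carlo sampling of the parameter space
a = rng.uniform(0.1, 2.0, 5000)
b = rng.uniform(0.5, 2.0, 5000)
sim = a[:, None] * rain[None, :] ** b[:, None]

# 2. Informal likelihood per parameter set: Nash-Sutcliffe efficiency
nse = 1 - np.sum((sim - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)

# 3. Parameter sets above the threshold are "behavioural"; the rest are rejected
behavioural = nse > 0.7

# 4. Prediction bounds from the behavioural ensemble
lower, upper = np.percentile(sim[behavioural], [5, 95], axis=0)
print(behavioural.sum(), "behavioural parameter sets")
```

    The width of the resulting bounds is what GLUE reports as predictive uncertainty; conditioning on additional observations (as with the flow gauges and overflow occurrences above) narrows the behavioural set.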

  16. CellNet: Network Biology Applied to Stem Cell Engineering

    Science.gov (United States)

    Cahan, Patrick; Li, Hu; Morris, Samantha A.; da Rocha, Edroaldo Lummertz; Daley, George Q.; Collins, James J.

    2014-01-01

    Somatic cell reprogramming, directed differentiation of pluripotent stem cells, and direct conversions between differentiated cell lineages represent powerful approaches to engineer cells for research and regenerative medicine. We have developed CellNet, a network biology platform that more accurately assesses the fidelity of cellular engineering than existing methodologies and generates hypotheses for improving cell derivations. Analyzing expression data from 56 published reports, we found that cells derived via directed differentiation more closely resemble their in vivo counterparts than products of direct conversion, as reflected by the establishment of target cell-type gene regulatory networks (GRNs). Furthermore, we discovered that directly converted cells fail to adequately silence expression programs of the starting population, and that the establishment of unintended GRNs is common to virtually every cellular engineering paradigm. CellNet provides a platform for quantifying how closely engineered cell populations resemble their target cell type and a rational strategy to guide enhanced cellular engineering. PMID:25126793

  17. Applying Trusted Network Technology To Process Control Systems

    Science.gov (United States)

    Okhravi, Hamed; Nicol, David

    Interconnections between process control networks and enterprise networks expose instrumentation and control systems and the critical infrastructure components they operate to a variety of cyber attacks. Several architectural standards and security best practices have been proposed for industrial control systems. However, they are based on older architectures and do not leverage the latest hardware and software technologies. This paper describes new technologies that can be applied to the design of next generation security architectures for industrial control systems. The technologies are discussed along with their security benefits and design trade-offs.

  18. A study on methodologies for assessing safety critical network's risk impact on Nuclear Power Plant

    International Nuclear Information System (INIS)

    Lim, T. J.; Lee, H. J.; Park, S. K.; Seo, S. J.

    2006-08-01

    The objective of this project is to investigate and study existing reliability analysis techniques for communication networks in order to develop reliability analysis models for the safety-critical networks of nuclear power plants. A comprehensive survey of current methodologies for communication network reliability is necessary. The major outputs of the first-year study are design characteristics of safety-critical communication networks, efficient algorithms for quantifying the reliability of communication networks, and preliminary models for assessing the reliability of safety-critical communication networks.
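
    One of the surveyed building blocks, quantifying two-terminal reliability of a communication network, can be sketched with a plain Monte Carlo estimate. The topology and link-up probability below are hypothetical, not from the study.

```python
import random
import networkx as nx

def two_terminal_reliability(edges, s, t, p_up, n_trials=20000, seed=42):
    """Monte Carlo estimate of the probability that s can still reach t
    when each link independently stays up with probability p_up."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_trials):
        G = nx.Graph((u, v) for u, v in edges if rng.random() < p_up)
        G.add_nodes_from([s, t])  # keep endpoints even if isolated
        if nx.has_path(G, s, t):
            ok += 1
    return ok / n_trials

# Hypothetical safety network: redundant paths between controller and actuator
edges = [("ctrl", "sw1"), ("ctrl", "sw2"), ("sw1", "act"), ("sw2", "act"),
         ("sw1", "sw2")]
r = two_terminal_reliability(edges, "ctrl", "act", p_up=0.95)
print(round(r, 3))  # close to the exact bridge-network value of about 0.995
```

    For this five-link bridge topology the exact value can be checked by conditioning on the sw1-sw2 link; Monte Carlo becomes attractive precisely when such exact enumeration is infeasible on large networks.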

  19. Research Network of Tehran Defined Population: Methodology and Establishment

    Directory of Open Access Journals (Sweden)

    Ali-Asghar Kolahi

    2015-12-01

    Full Text Available Background: We need a defined population for determining the prevalence and incidence of diseases; conducting interventional, cohort and longitudinal studies; calculating correct and timely public health indicators; assessing the actual health needs of the community; performing educational programs and interventions to promote a healthy lifestyle; and enhancing the quality of primary health services. The objective of this project was to determine a defined population which is representative of Tehran, the capital of Iran. This article reports the methodology and establishment of the Research Network of Tehran Defined Population. Methods: This project started by selecting two urban health centers from each of the five district health centers affiliated with Shahid Beheshti University of Medical Sciences in 2012. Inside each selected urban health center, one defined-population research station was established. Two new centers were added during 2013 and 2014. For the time being, the covered population of the network has reached 40000 individuals. The most important criterion for the defined population has been to be representative of the population of Tehran. For this, we selected two urban health centers from 12 of the 22 municipality districts and from each of the five different socioeconomic strata of Greater Tehran. Merely 80000 individuals in the neighborhoods of each defined-population research station were considered as the control group of the project. Findings: In total, we selected 12 defined-population research stations, and their covered population formed a defined population which is representative of the Tehran population. Conclusion: A population lab is now ready in the Tehran metropolitan area.

  20. A guide to distribution network operator connection and use of system methodologies and charging

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-05-04

    This report aims to help those developing or planning to develop power generation schemes connected to local electricity distribution systems (distributed generation) to understand the various regional network charging schemes in the UK. It is also intended to act as a route map to understand distribution charges as they are currently applied; further changes in charging arrangements between 2005 and 2010 are indicated and signposts to sources of further information are provided. The report explains the structure of distribution networks, the outcome of the regulatory review of distribution pricing undertaken by the Office of Gas and Electricity Markets (Ofgem) applicable from 1 April 2005 and how this affects distribution network operators (DNOs) and their distribution charges. The report considers: the energy policy framework in the UK; the commercial and regulatory framework that applies to distributed generators; DNOs and their regulatory framework; network charging methodologies and principles; charging statements; and areas of likely future changes. Individual schedules and contact details are given in an appendix for each DNO.

  1. Applying differential dynamic logic to reconfigurable biological networks.

    Science.gov (United States)

    Figueiredo, Daniel; Martins, Manuel A; Chaves, Madalena

    2017-09-01

    Qualitative and quantitative modeling frameworks are widely used for the analysis of biological regulatory networks, the former giving a preliminary overview of the system's global dynamics and the latter providing more detailed solutions. Another approach is to model biological regulatory networks as hybrid systems, i.e., systems which can display both continuous and discrete dynamic behaviors. Indeed, the development of synthetic biology has shown that this is a suitable way to think about biological systems, which can often be constructed as networks with discrete controllers and present hybrid behaviors. In this paper we discuss this approach as a special case of the reconfigurability paradigm, well studied in Computer Science (CS). In CS there are well-developed computational tools to reason about hybrid systems, and we argue that it is worth applying such tools in a biological context. One interesting tool is differential dynamic logic (dL), which has recently been developed by Platzer and applied to many case studies. In this paper we discuss some simple examples of biological regulatory networks to illustrate how dL can be used as an alternative, or as a complement, to methods already in use. Copyright © 2017 Elsevier Inc. All rights reserved.
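
    As a hedged illustration of the hybrid-system view (a generic sketch, not dL itself or an example from the paper), a two-gene mutual-repression switch can be simulated with continuous decay plus a discrete on/off production controller:

```python
def simulate(x0, y0, k=1.0, gamma=0.5, theta=0.8, dt=0.01, steps=5000):
    """Two-gene mutual repression as a hybrid system: continuous linear decay,
    discrete production modes switched by threshold crossings."""
    x, y = x0, y0
    for _ in range(steps):
        # discrete mode: each gene produces only while its repressor is low
        px = k if y < theta else 0.0
        py = k if x < theta else 0.0
        # continuous dynamics within the current mode (Euler step)
        x += dt * (px - gamma * x)
        y += dt * (py - gamma * y)
    return x, y

# Asymmetric initial conditions settle into opposite stable expression states
print(simulate(1.2, 0.1))  # x high (toward k/gamma = 2), y decays toward 0
print(simulate(0.1, 1.2))  # the mirrored state
```

    Properties such as "from this region the system always reaches the x-high state" are exactly the kind of reachability claims that dL is designed to prove symbolically rather than by simulation.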

  2. Forecasting Baltic Dirty Tanker Index by Applying Wavelet Neural Networks

    DEFF Research Database (Denmark)

    Fan, Shuangrui; JI, TINGYUN; Bergqvist, Rickard

    2013-01-01

    modeling techniques used in freight rate forecasting. At the same time, research in shipping index forecasting (e.g. the BDTI) applying artificial intelligence techniques is scarce. This paper analyses the possibilities of forecasting the BDTI by applying Wavelet Neural Networks (WNN). Firstly, the characteristics of traditional and artificial intelligence forecasting techniques are discussed and the rationales for choosing WNN are explained. Secondly, the components and features of the BDTI are explicated. After that, the authors delve into the determinants and influencing factors behind fluctuations of the BDTI in order to set......

  3. Training of reverse propagation neural networks applied to neutron dosimetry

    International Nuclear Information System (INIS)

    Hernandez P, C. F.; Martinez B, M. R.; Leon P, A. A.; Espinoza G, J. G.; Castaneda M, V. H.; Solis S, L. O.; Castaneda M, R.; Ortiz R, M.; Vega C, H. R.; Mendez V, R.; Gallego, E.; De Sousa L, M. A.

    2016-10-01

    methodology of artificial neural networks where the parameters of the network that produced the best results were selected. (Author)

  4. Covariance methodology applied to uncertainties in I-126 disintegration rate measurements

    International Nuclear Information System (INIS)

    Fonseca, K.A.; Koskinas, M.F.; Dias, M.S.

    1996-01-01

    The covariance methodology applied to uncertainties in 126I disintegration rate measurements is described. Two different coincidence systems were used due to the complex decay scheme of this radionuclide. The parameters involved in the determination of the disintegration rate in each experimental system have correlated components. In this case, the conventional statistical methods for determining the uncertainties (law of propagation) yield wrong values for the final uncertainty. Therefore, use of the covariance matrix methodology is necessary. The data from both systems were combined taking into account all possible correlations between the partial uncertainties. (orig.)
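
    The difference between naive propagation and the covariance methodology fits in a few lines of numpy: combining two correlated measurements with fixed weights w, the variance is w V wᵀ with the full covariance matrix V, not just the diagonal. The rates and covariance entries below are invented for illustration, not the paper's data.

```python
import numpy as np

# Two coincidence-system results for the same disintegration rate (hypothetical)
x = np.array([1.023e5, 1.031e5])  # Bq

# Covariance matrix: diagonal = each system's variance; off-diagonal = shared
# correlated components (e.g. a common source-mass uncertainty)
V = np.array([[4.0e4, 1.5e4],
              [1.5e4, 5.0e4]])

# Inverse-variance weights, normalised
w = 1 / np.diag(V)
w /= w.sum()
y = w @ x  # combined disintegration rate

# Law of propagation with the full covariance matrix: var = w V w^T
var_full = w @ V @ w
var_naive = w @ np.diag(np.diag(V)) @ w  # wrongly ignores the correlations

print(y, np.sqrt(var_full), np.sqrt(var_naive))
```

    With positive correlations the naive diagonal-only propagation understates the final uncertainty, which is exactly the error the covariance methodology avoids.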

  5. Second Law of Thermodynamics Applied to Metabolic Networks

    Science.gov (United States)

    Nigam, R.; Liang, S.

    2003-01-01

    We present a simple algorithm based on linear programming that combines Kirchhoff's flux and potential laws and applies them to metabolic networks to predict thermodynamically feasible reaction fluxes. These laws represent mass conservation and energy feasibility and are widely used in electrical circuit analysis. Formulating Kirchhoff's potential law around a reaction loop in terms of the null space of the stoichiometric matrix leads to a simple representation of the law of entropy that can be readily incorporated into traditional flux balance analysis without resorting to non-linear optimization. Our technique is new in that it can easily check the fluxes obtained from flux balance analysis for thermodynamic feasibility and, if they are infeasible, modify them so that they satisfy the law of entropy. We illustrate our method by applying it to the network for the central metabolism of Escherichia coli. Due to its simplicity, this algorithm will be useful in studying large-scale complex metabolic networks in the cells of different organisms.
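
    A hedged sketch of the idea on a toy network: flux balance (S v = 0) as a linear program, followed by a second program that, in the spirit of the entropy condition, removes flux circulating around an internal loop. The network and bounds are invented, and total-flux minimisation stands in for the paper's null-space formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (invented): uptake R1: -> A, conversion R2: A -> B,
# growth R3: B ->, and a futile loop reaction R4: B -> A.
#              R1   R2   R3   R4
S = np.array([[ 1., -1.,  0.,  1.],   # metabolite A
              [ 0.,  1., -1., -1.]])  # metabolite B
bounds = [(0, 10), (0, 10), (0, 5), (0, 10)]

# Flux balance (Kirchhoff's flux law): S v = 0; maximise growth flux v3
res = linprog(c=[0, 0, -1, 0], A_eq=S, b_eq=[0, 0], bounds=bounds)
growth = -res.fun

# With growth fixed at its optimum, minimise total flux; any flux left
# circulating around the internal R2/R4 loop would be thermodynamically
# infeasible, and this step drives it to zero
A_eq2 = np.vstack([S, [0, 0, 1, 0]])
res2 = linprog(c=[1, 1, 1, 1], A_eq=A_eq2, b_eq=[0, 0, growth], bounds=bounds)
v = res2.x
print(growth, v)  # the futile A <-> B cycle carries no flux: v4 = 0
```

    On genome-scale models the same two-stage linear structure scales well, which is the practical advantage the abstract claims over non-linear thermodynamic formulations.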

  6. Validation methodology focussing on fuel efficiency as applied in the eCoMove project

    NARCIS (Netherlands)

    Themann, P.; Iasi, L.; Larburu, M.; Trommer, S.

    2012-01-01

    This paper discusses the validation approach applied in the eCoMove project (a large scale EU 7th Framework Programme project). In this project, applications are developed that on the one hand optimise network-wide traffic management and control, and on the other hand advise drivers on the most

  7. Artificial neural network applying for justification of tractors undercarriages parameters

    Directory of Open Access Journals (Sweden)

    V. A. Kuz’Min

    2017-01-01

    Full Text Available One of the most important properties that determine undercarriage layout at the design stage is the soil compaction effect. Existing domestic standards for the impact of undercarriages on soil do not fully meet modern agricultural requirements. The authors justify the need to analyse the travel systems of traction and transportation machines and to recommend these parameters for machines at the design or modernization stage. A database of crawler agricultural tractors was modeled, covering such parameters as traction class and basic operational weight, engine power rating, average ground pressure, and the surface area of the basic track branch. The considered machines were divided into two groups by producing countries: Europe/North America and Russian Federation/CIS. The main graphical dependences for every group of machines are plotted, and the corresponding analytical dependences within the ranges with the greatest concentration of machines are generated. To simplify the procedure of obtaining the soil compaction parameters of tractors, it is expedient to use a program tool: an artificial neural network (perceptron). The solution of this task requires a multilayered perceptron, a feed-forward neural network (without feedback). The aim is to analyse the parameters of running systems taking into account their soil compaction parameters and to recommend the choice of these parameters for newly created machines. The program code of the artificial neural network was developed. On the basis of the created base of tractors, the artificial neural network was created and tested. The accumulated error was not more than 5 percent. These data indicate the accuracy of the results and the reliability of the tool. By operating the initial design-data base and using the designed artificial neural network, it is possible to define missing parameters.
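
    The perceptron idea can be sketched as a small feed-forward network with one hidden layer trained by gradient descent; the "tractor" data below are synthetic stand-ins (normalised weight and power predicting a ground-pressure-like target), not the article's database.

```python
import numpy as np

# Synthetic stand-in data: predict a pressure-like quantity from two features
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (200, 2))              # normalised weight, engine power
y = 0.3 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.02, 200)

# One hidden tanh layer: a multilayered feed-forward perceptron, no feedback
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8); b2 = 0.0

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, pred = forward(X)
loss0 = np.mean((pred - y) ** 2)             # error before training

lr = 0.05
for _ in range(2000):
    H, pred = forward(X)
    err = pred - y                           # gradient of the squared error
    W2 -= lr * H.T @ err / len(y)
    b2 -= lr * err.mean()
    dH = np.outer(err, W2) * (1 - H ** 2)    # backpropagate through tanh
    W1 -= lr * X.T @ dH / len(y)
    b1 -= lr * dH.mean(axis=0)

_, pred = forward(X)
loss1 = np.mean((pred - y) ** 2)
print(loss0, loss1)  # the training error drops
```

    Once trained on the tractor database, such a network can fill in a missing undercarriage parameter from the known ones, which is the use the authors describe.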

  8. A Data Preparation Methodology in Data Mining Applied to Mortality Population Databases.

    Science.gov (United States)

    Pérez, Joaquín; Iturbide, Emmanuel; Olivares, Víctor; Hidalgo, Miguel; Martínez, Alicia; Almanza, Nelva

    2015-11-01

    It is known that the data preparation phase is the most time-consuming in the data mining process, using up to 50% or even 70% of the total project time. Currently, data mining methodologies are general-purpose, and one of their limitations is that they do not provide a guide about which particular tasks to develop in a specific domain. This paper presents a new data preparation methodology oriented to the epidemiological domain, in which we have identified two sets of tasks: General Data Preparation and Specific Data Preparation. For both sets, the Cross-Industry Standard Process for Data Mining (CRISP-DM) is adopted as a guideline. The main contribution of our methodology is fourteen specialized tasks concerning this domain. To validate the proposed methodology, we developed a data mining system, and the entire process was applied to real mortality databases. The results were encouraging: the use of the methodology reduced some of the time-consuming tasks, and the data mining system revealed unknown and potentially useful patterns for the public health services in Mexico.

  9. From LCAs to simplified models: a generic methodology applied to wind power electricity.

    Science.gov (United States)

    Padey, Pierryves; Girard, Robin; le Boulch, Denis; Blanc, Isabelle

    2013-02-05

    This study presents a generic methodology for producing simplified models able to provide a comprehensive life cycle impact assessment of energy pathways. The methodology relies on the application of global sensitivity analysis to identify the key parameters explaining the impact variability of systems over their life cycle. Simplified models are built upon the identification of such key parameters. The methodology is applied to one energy pathway: onshore wind turbines of medium size, considering a large sample of possible configurations representative of European conditions. Among several technological, geographical, and methodological parameters, we identified the turbine load factor and the wind turbine lifetime as the most influential. Greenhouse gas (GHG) performances have been plotted as a function of these identified key parameters. Using these curves, the GHG performance of a specific wind turbine can be estimated, avoiding the undertaking of an extensive Life Cycle Assessment (LCA). This methodology should be useful for decision makers, providing them with a robust but simple support tool for assessing the environmental performance of energy systems.
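
    The structure of such a simplified model reduces to a one-line curve in the two key parameters: lifecycle emissions fixed by the LCA, divided by the electricity produced over the turbine's life. The embodied-emissions and rated-power figures below are illustrative assumptions, not values from the study.

```python
# Minimal sketch of the simplified-model idea: once lifecycle emissions are
# fixed by an LCA, GHG intensity is a curve in load factor and lifetime.
# All numbers are invented for illustration.
def ghg_intensity(load_factor, lifetime_years,
                  embodied_kg_co2=1.5e6, rated_kw=2000):
    """g CO2-eq per kWh for one onshore turbine over its whole life."""
    lifetime_kwh = rated_kw * 8760 * load_factor * lifetime_years
    return embodied_kg_co2 * 1000 / lifetime_kwh

print(round(ghg_intensity(0.24, 20), 1), "g CO2-eq/kWh")
```

    Reading a specific turbine's load factor and expected lifetime off such a curve is the decision-support shortcut the abstract describes: a site-specific GHG estimate without redoing the full LCA.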

  10. Neural networks (NN) applied to the commercial properties valuation

    Directory of Open Access Journals (Sweden)

    J. M. Núñez Tabales

    2017-03-01

    Full Text Available Several agents, such as buyers and sellers, or local or tax authorities, need to estimate the value of properties. There are different approaches to obtaining the market price of a dwelling. Many papers in the academic literature address such purposes, but these are almost always oriented to estimating hedonic prices of residential properties, such as houses or apartments. Here these methodologies are applied to estimating the market price of commercial premises, using AI techniques. A case study is developed in Córdoba, a city in the south of Spain. Neural networks are an attractive alternative to the traditional hedonic modelling approaches, as they are better adapted to non-linearities in causal relationships and they also produce smaller valuation errors. It is also possible, from the NN model, to obtain the implicit prices associated with the main attributes that explain the variability of the market price of commercial properties.
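
    The notion of implicit prices can be illustrated with a small sketch: given any fitted model, the implicit price of an attribute is the sensitivity of the predicted price to that attribute. The toy function below stands in for a trained network; its form and coefficients are invented.

```python
import numpy as np

# Conceptual sketch: once a network f(x) has been fitted to premises data,
# the "implicit price" of an attribute is the local derivative of the
# predicted price with respect to that attribute. This toy model is a
# stand-in for a trained NN, not a fitted valuation model.

def predicted_price(x):
    """Stand-in for a trained network: price in EUR from (area m2, frontage m)."""
    area, frontage = x
    # tanh units give the kind of non-linearity linear hedonic models miss
    return 60000.0 + 140000.0 * np.tanh(area / 120.0) + 25000.0 * np.tanh(frontage / 8.0)

def implicit_price(x, i, h=1e-4):
    """Central finite difference: EUR per extra unit of attribute i."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (predicted_price(xp) - predicted_price(xm)) / (2 * h)

x = (90.0, 6.0)  # a 90 m2 premises with 6 m of street frontage
eur_per_m2 = implicit_price(x, 0)
eur_per_frontage_m = implicit_price(x, 1)
print(round(float(eur_per_m2)), round(float(eur_per_frontage_m)))
```

    Unlike a linear hedonic regression, the implicit prices here vary with the point of evaluation, which is exactly why non-linear models can be informative.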

  11. GMDH and neural networks applied in monitoring and fault detection in sensors in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Elaine Inacio [Instituto Federal de Educacao, Ciencia e Tecnologia, Guarulhos, SP (Brazil); Pereira, Iraci Martinez; Silva, Antonio Teixeira e, E-mail: martinez@ipen.b, E-mail: teixeira@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    In this work a new monitoring and fault detection methodology was developed using the GMDH (Group Method of Data Handling) algorithm and artificial neural networks (ANNs), and applied to the IEA-R1 research reactor at IPEN. The monitoring and fault detection system was developed in two parts: the first dedicated to preprocessing the information, using the GMDH algorithm, and the second to processing the information using ANNs. The preprocessing stage was divided into two parts. In the first part, the GMDH algorithm was used to generate a better database estimate, called matrix z, which was used to train the ANNs. In the second part, the GMDH was used to study the best set of variables with which to train the ANNs, resulting in a better estimate of the monitored variables. The methodology was developed and tested using five different models: one theoretical model and four models using different sets of reactor variables. After an exhaustive study dedicated to sensor monitoring, fault detection in sensors was developed by simulating faults in the sensor database, adding offsets of +5%, +10%, +15% and +20% to the sensor readings. The good results obtained with the present methodology show the viability of using the GMDH algorithm to study the best input variables for the ANNs, making possible the use of these methods in the implementation of a new monitoring and fault detection methodology applied to sensors. (author)
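
    The fault-detection step described above can be sketched in isolation: compare each sensor reading with a model estimate and flag readings whose residual exceeds a threshold. The reference value and the 3% threshold are illustrative assumptions, not values from the IEA-R1 study.

```python
# Sketch of the fault-detection step only: a fixed reference value stands
# in for the GMDH/ANN prediction of the monitored variable. The 3%
# threshold is an invented illustration, not the paper's criterion.

REFERENCE = 100.0      # model estimate of the monitored variable
THRESHOLD = 0.03       # flag residuals above 3% of the estimate

def detect_fault(reading, estimate=REFERENCE, threshold=THRESHOLD):
    residual = abs(reading - estimate) / estimate
    return residual > threshold

# Faults simulated as in the paper: offsets of +5%, +10%, +15%, +20%.
faulty = [REFERENCE * (1 + f) for f in (0.05, 0.10, 0.15, 0.20)]
healthy = [99.0, 100.5, 101.2]          # within normal fluctuation

flags_faulty = [detect_fault(r) for r in faulty]
flags_healthy = [detect_fault(r) for r in healthy]
print(flags_faulty, flags_healthy)
```

    In the actual methodology the quality of the model estimate, not the threshold, is the hard part; that is what the GMDH variable selection improves.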

  12. GMDH and neural networks applied in monitoring and fault detection in sensors in nuclear power plants

    International Nuclear Information System (INIS)

    Bueno, Elaine Inacio; Pereira, Iraci Martinez; Silva, Antonio Teixeira e

    2011-01-01

    In this work a new monitoring and fault detection methodology was developed using the GMDH (Group Method of Data Handling) algorithm and artificial neural networks (ANNs), and applied to the IEA-R1 research reactor at IPEN. The monitoring and fault detection system was developed in two parts: the first dedicated to preprocessing the information, using the GMDH algorithm, and the second to processing the information using ANNs. The preprocessing stage was divided into two parts. In the first part, the GMDH algorithm was used to generate a better database estimate, called matrix z, which was used to train the ANNs. In the second part, the GMDH was used to study the best set of variables with which to train the ANNs, resulting in a better estimate of the monitored variables. The methodology was developed and tested using five different models: one theoretical model and four models using different sets of reactor variables. After an exhaustive study dedicated to sensor monitoring, fault detection in sensors was developed by simulating faults in the sensor database, adding offsets of +5%, +10%, +15% and +20% to the sensor readings. The good results obtained with the present methodology show the viability of using the GMDH algorithm to study the best input variables for the ANNs, making possible the use of these methods in the implementation of a new monitoring and fault detection methodology applied to sensors. (author)

  13. Carbohydrate metabolism teaching strategy for the Pharmacy course, applying active teaching methodology

    Directory of Open Access Journals (Sweden)

    Uderlei Donizete Silveira Covizzi

    2012-12-01

    Full Text Available The traditional teaching method has been widely questioned regarding its ability to develop skills and abilities in training healthcare professionals. In the traditional methodology, the teacher is the main transmitter of knowledge while students assume a passive spectator role. Some Brazilian institutions have broken with this model, structuring the curriculum around student-centered learning. Some medical schools have adopted Problem-Based Learning (PBL), a methodology that presents problems likely to be encountered by future physicians, for resolution in small tutorial groups. Our work proposes an active teaching-learning methodology addressing carbohydrate metabolism in the biochemistry discipline for undergraduate pharmacy students. The academic content was presented through brief and objective talks. Afterwards, learners were split into tutorial groups for the resolution of contextualized questions. During the activities, the teacher guided the discussion toward the elucidation of the issues. At the end of the module, learners evaluated the teaching methodology by means of a questionnaire, and the content covered was assessed by a standard individual test. Analysis of the questionnaire indicates that students believe they participated actively in the teaching-learning process and were encouraged to discuss and understand the theme. The answers highlight closer ties between students and tutor. According to the professor, there was greater student engagement with learning. It is concluded that an innovative methodology, in which the primary responsibility for learning rests with the students themselves, besides increasing interest in learning, facilitates learning through case discussion in groups. Contextualizing the issues narrows the gap between theory and practice.

  14. Collaboration Networks in Applied Conservation Projects across Europe.

    Science.gov (United States)

    Nita, Andreea; Rozylowicz, Laurentiu; Manolache, Steluta; Ciocănea, Cristiana Maria; Miu, Iulia Viorica; Popescu, Viorel Dan

    2016-01-01

    The main funding instrument for implementing EU policies on nature conservation and supporting environmental and climate action is the LIFE Nature programme, established by the European Commission in 1992. LIFE Nature projects (>1400 awarded) are applied conservation projects in which partnerships between institutions are critical for successful conservation outcomes, yet little is known about the structure of collaborative networks within and between EU countries. The aim of our study is to understand the nature of collaboration in LIFE Nature projects using a novel application of social network theory at two levels: (1) collaboration between countries, and (2) collaboration within countries, using six case studies: Western Europe (United Kingdom and Netherlands), Eastern Europe (Romania and Latvia) and Southern Europe (Greece and Portugal). Using data on 1261 projects financed between 1996 and 2013, we found that Italy was the most successful country not only in the number of projects awarded but also in overall influence, being by far the most influential country in the European LIFE Nature network, with the highest eigenvector centrality (0.989) and degree centrality (0.177). Another key player in the network is the Netherlands, which ensures a fast communication flow with other network members (closeness 0.318) by staying connected with the most active countries. Although Western European countries have higher centrality scores than most Eastern European countries, our results showed that overall there is a low tendency to create partnerships between different organization categories. The comparison of the six case studies also indicates significant differences in the patterns of partnership creation, providing valuable information on collaboration in EU nature conservation. This study represents a starting point in predicting the formation of future partnerships within the LIFE Nature programme, suggesting ways to improve transnational collaboration.
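
    The centrality measures reported here can be illustrated on a small invented collaboration graph (nodes are countries, edges joint projects); the graph and the resulting numbers are not the LIFE data.

```python
import numpy as np

# Degree and eigenvector centrality on a toy collaboration graph.
# Country codes and edges are invented for illustration only.

countries = ["IT", "NL", "RO", "GR"]
A = np.array([[0, 1, 1, 1],     # IT collaborates with everyone
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)

n = len(countries)
# Degree centrality: fraction of possible partners actually reached.
degree = A.sum(axis=1) / (n - 1)

# Eigenvector centrality by power iteration: a node is central when its
# partners are themselves central.
v = np.ones(n)
for _ in range(200):
    v = A @ v
    v /= np.linalg.norm(v)

most_central = countries[int(np.argmax(v))]
print(most_central, degree[0])
```

    The same two measures, computed on the real project data, underlie the 0.989 and 0.177 figures quoted for Italy.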

  15. CellNet: network biology applied to stem cell engineering.

    Science.gov (United States)

    Cahan, Patrick; Li, Hu; Morris, Samantha A; Lummertz da Rocha, Edroaldo; Daley, George Q; Collins, James J

    2014-08-14

    Somatic cell reprogramming, directed differentiation of pluripotent stem cells, and direct conversions between differentiated cell lineages represent powerful approaches to engineer cells for research and regenerative medicine. We have developed CellNet, a network biology platform that more accurately assesses the fidelity of cellular engineering than existing methodologies and generates hypotheses for improving cell derivations. Analyzing expression data from 56 published reports, we found that cells derived via directed differentiation more closely resemble their in vivo counterparts than products of direct conversion, as reflected by the establishment of target cell-type gene regulatory networks (GRNs). Furthermore, we discovered that directly converted cells fail to adequately silence expression programs of the starting population and that the establishment of unintended GRNs is common to virtually every cellular engineering paradigm. CellNet provides a platform for quantifying how closely engineered cell populations resemble their target cell type and a rational strategy to guide enhanced cellular engineering. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Signed directed social network analysis applied to group conflict

    DEFF Research Database (Denmark)

    Zheng, Quan; Skillicorn, David; Walther, Olivier

    2015-01-01

    Real-world social networks contain relationships of multiple different types, but this richness is often ignored in graph-theoretic modelling. We show how two recently developed spectral embedding techniques, for directed graphs (relationships are asymmetric) and for signed graphs (relationships are both positive and negative), can be combined. This combination is particularly appropriate for intelligence, terrorism, and law enforcement applications. We illustrate by applying the novel embedding technique to datasets describing conflict in North-West Africa, and show how unusual interactions can be detected.
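
    A minimal flavour of signed spectral embedding, assuming the standard signed Laplacian and ignoring edge direction (the paper's combined directed-and-signed embedding is more elaborate):

```python
import numpy as np

# Sketch: symmetrize a signed, directed adjacency matrix and embed the
# nodes with eigenvectors of the signed Laplacian L = D - A, where D uses
# absolute degrees. The 4-node graph is invented for illustration.

A_dir = np.array([[ 0,  1,  1,  0],    # +1: cooperation, -1: conflict
                  [ 1,  0,  0, -1],
                  [ 1,  0,  0, -1],
                  [ 0, -1, -1,  0]], dtype=float)

A = (A_dir + A_dir.T) / 2.0            # drop edge direction for this sketch
D = np.diag(np.abs(A).sum(axis=1))
L = D - A                              # signed Laplacian (positive semidefinite)

eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
axis = eigvecs[:, 0]                   # leading axis of the embedding

# Nodes 0-2 (mutually cooperative) should land on the opposite side of
# this axis from node 3 (in conflict with nodes 1 and 2).
s = np.sign(axis)
same_faction = bool(s[0] == s[1] == s[2] and s[3] == -s[0])
print(same_faction)
```

    In such an embedding, actors sitting far from their expected faction are exactly the "unusual interactions" the abstract refers to.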

  17. Covariance methodology applied to 35S disintegration rate measurements by the CIEMAT/NIST method

    International Nuclear Information System (INIS)

    Koskinas, M.F.; Nascimento, T.S.; Yamazaki, I.M.; Dias, M.S.

    2014-01-01

    The Nuclear Metrology Laboratory (LMN) at IPEN is carrying out measurements in an LSC (Liquid Scintillation Counting) system, applying the CIEMAT/NIST method. In this context, 35S is an important radionuclide for medical applications, and it is difficult to standardize by other primary methods due to its low beta-ray energy. CIEMAT/NIST is a standard technique used by most metrology laboratories to improve accuracy and speed up beta emitter standardization. The focus of the present work was to apply the covariance methodology for determining the overall uncertainty in the 35S disintegration rate. All partial uncertainties involved in the measurements were considered, taking into account all possible correlations between each pair of them. - Highlights: ► 35S disintegration rate measured in a liquid scintillation system using the CIEMAT/NIST method. ► Covariance methodology applied to the overall uncertainty in the 35S disintegration rate. ► Monte Carlo simulation was applied to determine 35S activity in the 4πβ(PC)-γ coincidence system
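
    The covariance methodology reduces to a matrix product: the combined variance of y = f(x1, ..., xn) is c^T V c, where c holds the sensitivity coefficients and V the covariance matrix of the inputs, including their correlations. The numbers below are invented, not the 35S measurement data.

```python
import numpy as np

# Propagation of uncertainty with correlated inputs: var(y) = c^T V c.
# Sensitivities, uncertainties and correlations are invented placeholders.

c = np.array([1.0, -1.0, 0.5])          # sensitivity coefficients df/dxi
u = np.array([0.002, 0.003, 0.001])     # standard uncertainties of x1..x3
corr = np.array([[1.0, 0.4, 0.0],       # correlations between input pairs
                 [0.4, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])

V = np.outer(u, u) * corr               # covariance matrix of the inputs
var_y = c @ V @ c
u_y = np.sqrt(var_y)

# For comparison: the combined uncertainty if correlations were ignored.
u_y_uncorr = np.sqrt(np.sum((c * u) ** 2))
print(round(float(u_y), 6), round(float(u_y_uncorr), 6))
```

    Here the positive correlation between x1 and x2, entering with opposite-signed coefficients, lowers the combined uncertainty relative to the uncorrelated case; ignoring correlations can bias the stated uncertainty in either direction.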

  18. Applying a life cycle decision methodology to Fernald waste management alternatives

    International Nuclear Information System (INIS)

    Yuracko, K.L.; Gresalfi, M.; Yerace, P.

    1996-01-01

    During the past five years, a number of U.S. Department of Energy (DOE) funded efforts have demonstrated the technical efficacy of converting various forms of radioactive scrap metal (RSM) into usable products. From the development of large accelerator shielding blocks to the construction of low-level waste containers, technology has been applied to this fabrication process in a safe and stakeholder-supported manner. The potential health and safety risks to both workers and the public have been addressed. The question remains: can products be fabricated from RSM in a cost-efficient and market-competitive manner? This paper presents a methodology for use within DOE to evaluate the costs and benefits of recycling and reusing some RSM, rather than disposing of it in an approved burial site. This life cycle decision methodology, developed jointly by the Oak Ridge National Laboratory (ORNL) and DOE Fernald, is the focus of the following analysis

  19. Adaption of the temporal correlation coefficient calculation for temporal networks (applied to a real-world pig trade network).

    Science.gov (United States)

    Büttner, Kathrin; Salau, Jennifer; Krieter, Joachim

    2016-01-01

    The average topological overlap of the graphs of two consecutive time steps measures the amount of change in the edge configuration between the two snapshots. This value should be zero if the edge configuration changes completely and one if the two consecutive graphs are identical. Current methods depend either on the number of nodes in the network or on the maximal number of connected nodes in the consecutive time steps. In the first case, the methodology breaks down if there are nodes with no edges. In the second case, it fails if the maximal number of active nodes is larger than the maximal number of connected nodes. In the following, an adaptation of the calculation of the temporal correlation coefficient and of the topological overlap of the graph between two consecutive time steps is presented, which shows the expected behaviour mentioned above. The newly proposed adaptation uses the maximal number of active nodes, i.e. the number of nodes with at least one edge, for the calculation of the topological overlap. The three methods were compared on illustrative example networks to reveal the differences between the proposed notations. Furthermore, all three calculation methods were applied to a real-world network of animal movements in order to detect influences of the network structure on the outcome of the different methods.
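
    One plausible reading of the adapted calculation can be sketched as follows (the paper's exact convention may differ): compute the per-node topological overlap between consecutive snapshots and normalize by the number of active nodes rather than by all N nodes.

```python
import numpy as np

# Topological overlap between consecutive snapshots, normalized by the
# number of active nodes (nodes with at least one edge in either
# snapshot). This is an illustrative reading of the adaptation.

def node_overlap(a_t, a_next):
    """Per-node topological overlap between two adjacency matrices."""
    num = (a_t * a_next).sum(axis=1).astype(float)
    den = np.sqrt(a_t.sum(axis=1) * a_next.sum(axis=1)).astype(float)
    c = np.zeros(len(num))
    mask = den > 0
    c[mask] = num[mask] / den[mask]
    return c

def temporal_correlation(a_t, a_next):
    c = node_overlap(a_t, a_next)
    active = int(((a_t.sum(axis=1) + a_next.sum(axis=1)) > 0).sum())
    return c.sum() / active            # adapted: normalize by active nodes

# 4 premises, but node 3 never trades. Identical snapshots -> 1.0.
a1 = np.array([[0, 1, 1, 0],
               [1, 0, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 0]])
coeff_same = temporal_correlation(a1, a1.copy())

# Completely rewired snapshot -> 0.0.
a2 = np.array([[0, 0, 0, 0],
               [0, 0, 1, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 0]])
coeff_diff = temporal_correlation(a1, a2)
print(coeff_same, coeff_diff)
```

    With the conventional normalization by N = 4, the identical-snapshot case would give 0.75 instead of 1.0, which is exactly the failure mode the adaptation removes.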

  20. How can social network analysis contribute to social behavior research in applied ethology?

    Science.gov (United States)

    Makagon, Maja M; McCowan, Brenda; Mench, Joy A

    2012-05-01

    Social network analysis is increasingly used by behavioral ecologists and primatologists to describe the patterns and quality of interactions among individuals. We provide an overview of this methodology, with examples illustrating how it can be used to study social behavior in applied contexts. Like most kinds of social interaction analyses, social network analysis provides information about direct relationships (e.g. dominant-subordinate relationships). However, it also generates a more global model of social organization that determines how individual patterns of social interaction relate to individual and group characteristics. A particular strength of this approach is that it provides standardized mathematical methods for calculating metrics of sociality across levels of social organization, from the population and group levels to the individual level. At the group level these metrics can be used to track changes in social network structures over time, evaluate the effect of the environment on social network structure, or compare social structures across groups, populations or species. At the individual level, the metrics allow quantification of the heterogeneity of social experience within groups and identification of individuals who may play especially important roles in maintaining social stability or information flow throughout the network.

  1. Simulation and Optimization Methodologies for Military Transportation Network Routing and Scheduling and for Military Medical Services

    National Research Council Canada - National Science Library

    Rodin, Ervin Y

    2005-01-01

    The purpose of the present research was to develop a generic model and methodology for analyzing and optimizing large-scale air transportation networks, including both their routing and their scheduling...

  2. A safety assessment methodology applied to CNS/ATM-based air traffic control system

    Energy Technology Data Exchange (ETDEWEB)

    Vismari, Lucio Flavio, E-mail: lucio.vismari@usp.b [Safety Analysis Group (GAS), School of Engineering at University of Sao Paulo (Poli-USP), Av. Prof. Luciano Gualberto, Trav.3, n.158, Predio da Engenharia de Eletricidade, Sala C2-32, CEP 05508-900, Sao Paulo (Brazil); Batista Camargo Junior, Joao, E-mail: joaocamargo@usp.b [Safety Analysis Group (GAS), School of Engineering at University of Sao Paulo (Poli-USP), Av. Prof. Luciano Gualberto, Trav.3, n.158, Predio da Engenharia de Eletricidade, Sala C2-32, CEP 05508-900, Sao Paulo (Brazil)

    2011-07-15

    In recent decades, the air traffic system has been changing to adapt itself to new social demands, mainly the safe growth of worldwide traffic capacity. Those changes are ruled by the Communication, Navigation, Surveillance/Air Traffic Management (CNS/ATM) paradigm, based on digital communication technologies (mainly satellites) as a way of improving communication, surveillance, navigation and air traffic management services. However, CNS/ATM poses new challenges and needs, mainly related to the safety assessment process. In the face of these new challenges, and considering the main characteristics of CNS/ATM, a methodology is proposed in this work that combines the 'absolute' and 'relative' safety assessment methods adopted by the International Civil Aviation Organization (ICAO) in ICAO Doc. 9689, using Fluid Stochastic Petri Nets (FSPN) as the modeling formalism, and compares the safety metrics estimated from the simulation of both the proposed (under analysis) and the legacy system models. To demonstrate its usefulness, the proposed methodology was applied to the 'Automatic Dependent Surveillance-Broadcast' (ADS-B) based air traffic control system. In conclusion, the proposed methodology proved able to assess CNS/ATM system safety properties, with the FSPN formalism providing important modeling capabilities and discrete-event simulation allowing the estimation of the desired safety metrics.

  3. A safety assessment methodology applied to CNS/ATM-based air traffic control system

    International Nuclear Information System (INIS)

    Vismari, Lucio Flavio; Batista Camargo Junior, Joao

    2011-01-01

    In recent decades, the air traffic system has been changing to adapt itself to new social demands, mainly the safe growth of worldwide traffic capacity. Those changes are ruled by the Communication, Navigation, Surveillance/Air Traffic Management (CNS/ATM) paradigm, based on digital communication technologies (mainly satellites) as a way of improving communication, surveillance, navigation and air traffic management services. However, CNS/ATM poses new challenges and needs, mainly related to the safety assessment process. In the face of these new challenges, and considering the main characteristics of CNS/ATM, a methodology is proposed in this work that combines the 'absolute' and 'relative' safety assessment methods adopted by the International Civil Aviation Organization (ICAO) in ICAO Doc. 9689, using Fluid Stochastic Petri Nets (FSPN) as the modeling formalism, and compares the safety metrics estimated from the simulation of both the proposed (under analysis) and the legacy system models. To demonstrate its usefulness, the proposed methodology was applied to the 'Automatic Dependent Surveillance-Broadcast' (ADS-B) based air traffic control system. In conclusion, the proposed methodology proved able to assess CNS/ATM system safety properties, with the FSPN formalism providing important modeling capabilities and discrete-event simulation allowing the estimation of the desired safety metrics.

  4. Actor/Actant-Network Theory as Emerging Methodology for ...

    African Journals Online (AJOL)

    4carolinebell@gmail.com

    2005-01-31

    Jan 31, 2005 ... to trace relationships, actors, actants and actor/actant-networks ... associated with a particular type of social theory (Latour, 1987) ... the Department of Environmental Affairs and Tourism, Organised Business and Organised ...

  5. An applied methodology for assessment of the sustainability of biomass district heating systems

    Science.gov (United States)

    Vallios, Ioannis; Tsoutsos, Theocharis; Papadakis, George

    2016-03-01

    In order to maximise the share of biomass in the energy supply system, designers should adopt appropriate changes to traditional systems and become more familiar with the design details of biomass heating systems. The aim of this study is to present the development of a methodology, and its implementation in software, useful for the design of biomass thermal conversion systems linked with district heating (DH) systems, taking into consideration the types of building structures and the urban settlement layout around the plant. The methodology is based on a completely parametric logic, providing an impact assessment of variations in one or more technical and/or economic parameters and thus facilitating a quick conclusion on the viability of a particular energy system. The essential energy parameters for the design of a biomass power and heat production system connected to a DH network are presented and discussed, as well as those for its environmental and economic evaluation (i.e. the selectivity and viability of the relevant investment). Emphasis has been placed upon the technical parameters of biomass logistics, the energy system's design, the economic details of the selected technology (integrated cogeneration combined cycle or direct combustion boiler), the DH network and peripheral equipment (thermal substations), and greenhouse gas emissions. The purpose of this implementation is the assessment of the financial viability of the pertinent investment, taking into account the available biomass feedstock, the economic and market conditions, and the capital and operating costs. As long as biomass resources (forest wood and cultivation products) are available close to the settlement, the disposal and transportation costs of biomass remain low, assuring the sustainability of such energy systems.
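
    The parametric logic can be illustrated with a minimal net-present-value check that recomputes viability after varying one techno-economic parameter at a time; all figures (investment, heat price, fuel cost) are invented placeholders.

```python
# Minimal sketch of a one-parameter viability study for a biomass DH
# plant. Every number here is an invented placeholder, not case data.

def npv(investment, annual_heat_mwh, heat_price, fuel_cost, om_cost,
        rate=0.06, years=20):
    """Net present value of the plant over its lifetime, EUR."""
    cash_flow = annual_heat_mwh * heat_price - fuel_cost - om_cost
    discounted = sum(cash_flow / (1 + rate) ** t for t in range(1, years + 1))
    return discounted - investment

base = dict(investment=4.0e6, annual_heat_mwh=12000.0,
            heat_price=60.0, fuel_cost=250000.0, om_cost=120000.0)

npv_base = npv(**base)
# One-parameter variation: a 20% rise in biomass fuel/transport cost.
npv_fuel_up = npv(**{**base, "fuel_cost": base["fuel_cost"] * 1.2})
print(round(npv_base), round(npv_fuel_up))
```

    In this toy setup the base case is marginally viable and the fuel-cost variation flips the sign of the NPV, which is the kind of quick conclusion the parametric methodology is meant to deliver.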

  6. Fiber-Optic Temperature and Pressure Sensors Applied to Radiofrequency Thermal Ablation in Liver Phantom: Methodology and Experimental Measurements

    Directory of Open Access Journals (Sweden)

    Daniele Tosi

    2015-01-01

    Full Text Available Radiofrequency thermal ablation (RFA) is a procedure aimed at interventional cancer care and is applied to the treatment of small- and mid-size tumors in lung, kidney, liver, and other tissues. RFA generates a selective high-temperature field in the tissue; temperature values and their persistence are directly related to the mortality rate of tumor cells. Temperature measurement at up to 3–5 points, using electrical thermocouples, is part of the present clinical practice of RFA and is the foundation of a physical model of the ablation process. Fiber-optic sensors allow extending the detection of biophysical parameters to a vast plurality of sensing points, using miniature and noninvasive technologies that do not alter the RFA pattern. This work addresses the methodology for optical measurement of temperature distribution and pressure using four different fiber-optic technologies: fiber Bragg gratings (FBGs), linearly chirped FBGs (LCFBGs), Rayleigh scattering-based distributed temperature systems (DTS), and extrinsic Fabry-Perot interferometry (EFPI). For each instrument, the methodology for ex vivo sensing, as well as experimental results, is reported, leading toward the application of fiber-optic technologies in vivo. The possibility of using a fiber-optic sensor network, in conjunction with a suitable ablation device, can enable smart ablation procedures in which ablation parameters are dynamically adjusted.
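
    For the FBG case, the basic read-out is a first-order linear relation between Bragg wavelength shift and temperature. The sketch below assumes a typical textbook sensitivity of about 10 pm/°C for bare gratings near 1550 nm, not the paper's calibration.

```python
# Sketch of the basic FBG temperature read-out. The sensitivity and the
# reference wavelength are assumed typical values, not calibration data.

K_T_PM_PER_C = 10.0            # assumed thermal sensitivity, pm per degC
LAMBDA_0_NM = 1550.000         # Bragg wavelength at the reference temperature
T_REF_C = 37.0                 # body temperature as reference

def temperature_from_shift(lambda_nm):
    """Convert a measured Bragg wavelength (nm) to temperature (degC)."""
    shift_pm = (lambda_nm - LAMBDA_0_NM) * 1000.0   # nm -> pm
    return T_REF_C + shift_pm / K_T_PM_PER_C

# During ablation the grating drifts from 1550.000 nm to 1550.500 nm:
t_peak = temperature_from_shift(1550.500)
print(t_peak)   # 500 pm shift -> +50 degC -> 87 degC
```

    An array of such gratings along one fiber is what turns this point read-out into the multi-point temperature maps the paper describes.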

  7. Methodological Approaches to Locating Outlets of the Franchise Retail Network

    OpenAIRE

    Grygorenko Tetyana M.

    2016-01-01

    Methodological approaches to selecting strategic areas for managing the future location of franchise retail network outlets are presented. The main stages in the assessment of strategic areas for the future location of franchise retail network outlets have been determined and evaluation criteria have been suggested. Since such selection requires consideration of a variety of indicators and directions of assessment, the author proposes an evaluation scale, which ...

  8. A review of methodologies applied in Australian practice to evaluate long-term coastal adaptation options

    Directory of Open Access Journals (Sweden)

    Timothy David Ramm

    2017-01-01

    Full Text Available Rising sea levels have the potential to alter coastal flooding regimes around the world, and local governments are beginning to consider how to manage uncertain coastal change. In doing so, there is increasing recognition that such change is deeply uncertain and unable to be reliably described with probabilities or a small number of scenarios. Characteristics of methodologies applied in Australian practice to evaluate long-term coastal adaptation options are reviewed and benchmarked against two state-of-the-art international methods suited to conditions of uncertainty (Robust Decision Making and Dynamic Adaptive Policy Pathways). Seven out of the ten Australian case studies assumed that uncertain parameters, such as sea level rise, could be described deterministically or stochastically when identifying risk and evaluating adaptation options across multi-decadal periods. This basis is not considered sophisticated enough for long-term decision-making, implying that Australian practice needs to increase the use of scenarios to explore a much larger uncertainty space when assessing the performance of adaptation options. Two Australian case studies mapped flexible adaptation pathways to manage uncertainty, and there remains an opportunity to incorporate quantitative methodologies to support the identification of risk thresholds. The contextual framing of risk, including the approach taken to identify risk (top-down or bottom-up) and the treatment of uncertain parameters, were found to be fundamental characteristics that influenced the methodology selected to evaluate adaptation options. The small sample of case studies available suggests that long-term coastal adaptation in Australia is in its infancy and there is a timely opportunity to guide local government towards robust methodologies for developing long-term coastal adaptation plans.

  9. Adding value in oil and gas by applying decision analysis methodologies: case history

    Energy Technology Data Exchange (ETDEWEB)

    Marot, Nicolas [Petro Andina Resources Inc., Alberta (Canada); Francese, Gaston [Tandem Decision Solutions, Buenos Aires (Argentina)

    2008-07-01

    Petro Andina Resources Ltd., together with Tandem Decision Solutions, developed a strategic long-range plan applying decision analysis methodology. The objective was to build a robust and fully integrated strategic plan that accomplishes company growth goals and sets the strategic directions for the long range. The stochastic methodology and the Integrated Decision Management (IDM{sup TM}) staged approach allowed the company to visualize the value and risk associated with the different strategies while achieving organizational alignment, clarity of action and confidence in the path forward. A decision team jointly involving PAR representatives and Tandem consultants was established to carry out this four-month project. Discovery and framing sessions allowed the team to disrupt the status quo, discuss near- and far-reaching ideas and gather the building blocks from which creative strategic alternatives were developed. A comprehensive stochastic valuation model was developed to assess the potential value of each strategy, applying simulation tools, sensitivity analysis tools and contingency planning techniques. The final insights and results were used to populate the final strategic plan presented to the company board, providing confidence to the team and assuring that the work embodies the best available ideas, data and expertise, and that the proposed strategy was ready to be elaborated into an optimized course of action. (author)

  10. A methodology for the geometric design of heat recovery steam generators applying genetic algorithms

    International Nuclear Information System (INIS)

    Durán, M. Dolores; Valdés, Manuel; Rovira, Antonio; Rincón, E.

    2013-01-01

    This paper shows how the geometric design of heat recovery steam generators (HRSG) can be achieved. The method calculates the product of the overall heat transfer coefficient (U) by the area of the heat exchange surface (A) as a function of certain thermodynamic design parameters of the HRSG. A genetic algorithm is then applied to determine the best set of geometric parameters which comply with the desired UA product and, at the same time, result in a small heat exchange area and low pressure losses in the HRSG. In order to test this method, the design was applied to the HRSG of an existing plant and the results obtained were compared with the real exchange area of the steam generator. The findings show that the methodology is sound and offers reliable results even for complex HRSG designs. -- Highlights: ► The paper shows a methodology for the geometric design of heat recovery steam generators. ► Calculates product of the overall heat transfer coefficient by heat exchange area as a function of certain HRSG thermodynamic design parameters. ► It is a complement for the thermoeconomic optimization method. ► Genetic algorithms are used for solving the optimization problem
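
    A toy genetic algorithm in the same spirit can be sketched as follows: search the tube count and tube length so the design reaches a target UA with the smallest heat-exchange area. The heat-transfer correlation, bounds, and penalty weight are invented for illustration, not the paper's HRSG model.

```python
import random

# Toy GA for a UA-constrained geometry: minimize area A(n, L) subject to
# U(n) * A >= UA_TARGET, with an invented correlation U(n) in which more
# tubes mean slower flow and hence a lower heat transfer coefficient.

random.seed(7)
UA_TARGET = 200000.0                       # required UA product, W/K
BOUNDS = ((50.0, 800.0), (2.0, 30.0))      # tube count n, tube length L (m)

def area(n, L):
    return 0.5 * n * L                     # m2, assumed 0.5 m2 per tube-metre

def u_coeff(n):
    return 60.0 / (1.0 + n / 400.0)        # W/m2K, invented correlation

def fitness(ind):                          # lower is better
    n, L = ind
    a = area(n, L)
    shortfall = max(0.0, UA_TARGET - u_coeff(n) * a)
    return a + 10.0 * shortfall            # heavy penalty if UA target missed

def clip(v, lo, hi):
    return max(lo, min(hi, v))

pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(60)]
for _ in range(150):
    pop.sort(key=fitness)
    next_pop = pop[:2]                                   # elitism
    while len(next_pop) < 60:
        p1 = min(random.sample(pop, 3), key=fitness)     # tournament
        p2 = min(random.sample(pop, 3), key=fitness)
        child = [(a + b) / 2.0 for a, b in zip(p1, p2)]  # blend crossover
        child = [clip(g + random.gauss(0.0, 0.05 * (hi - lo)), lo, hi)
                 for g, (lo, hi) in zip(child, BOUNDS)]  # mutation
        next_pop.append(child)
    pop = next_pop

best = min(pop, key=fitness)
best_area = area(*best)
best_ua = u_coeff(best[0]) * best_area
print(round(best_area), round(best_ua))
```

    With these assumptions the constrained optimum sits near 500 tubes of 30 m, about 7500 m² of surface; the penalty term steers the search to designs that meet the UA target first and shrink the area second, mirroring the role of the UA constraint in the paper's method.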

  11. Applying information network analysis to fire-prone landscapes: implications for community resilience

    Directory of Open Access Journals (Sweden)

    Derric B. Jacobs

    2017-03-01

    Full Text Available Resilient communities promote trust, have well-developed networks, and can adapt to change. For rural communities in fire-prone landscapes, current resilience strategies may prove insufficient in light of increasing wildfire risks due to climate change. It is argued that, given the complexity of climate change, adaptations are best addressed at local levels where specific social, cultural, political, and economic conditions are matched with local risks and opportunities. Despite the importance of social networks as key attributes of community resilience, research using social network analysis on coupled human and natural systems is scarce. Furthermore, the extent to which local communities in fire-prone areas understand climate change risks, accept the likelihood of potential changes, and have the capacity to develop collaborative mitigation strategies is underexamined, yet these factors are imperative to community resiliency. We apply a social network framework to examine information networks that affect perceptions of wildfire and climate change in Central Oregon. Data were collected using a mailed questionnaire. Analysis focused on the residents' information networks that are used to gain awareness of governmental activities and measures of community social capital. A two-mode network analysis was used to uncover information exchanges. Results suggest that the general public develops perceptions about climate change based on complex social and cultural systems rather than as patrons of scientific inquiry and understanding. It appears that perceptions about climate change itself may not be the limiting factor in these communities' adaptive capacity, but rather how they perceive local risks. We provide a novel methodological approach in understanding rural community adaptation and resilience in fire-prone landscapes and offer a framework for future studies.
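
    The two-mode analysis can be sketched with a toy incidence structure: residents on one side, information sources on the other, projected onto the source side to see which sources share audiences. All names and ties are invented.

```python
# Sketch of a two-mode (bipartite) information network and its one-mode
# projection onto the source side. Residents and sources are invented.

ties = {                               # resident -> sources consulted
    "r1": {"county newsletter", "neighbors"},
    "r2": {"county newsletter", "fire district"},
    "r3": {"neighbors", "fire district"},
    "r4": {"county newsletter", "neighbors"},
}

sources = sorted({s for srcs in ties.values() for s in srcs})

# One-mode projection: weight = number of residents shared by two sources.
shared = {}
for a in sources:
    for b in sources:
        if a < b:
            shared[(a, b)] = sum(1 for srcs in ties.values()
                                 if a in srcs and b in srcs)

# Degree of each source in the two-mode network (its audience size).
audience = {s: sum(1 for srcs in ties.values() if s in srcs) for s in sources}
print(audience, shared)
```

    High-weight pairs in the projection indicate sources reaching the same residents, which is the kind of overlap the study uses to characterize how risk information actually circulates.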

  12. A social network perspective on teacher collaboration in schools: Theory, methodology, and applications

    NARCIS (Netherlands)

    Moolenaar, Nienke

    2012-01-01

    An emerging trend in educational research is the use of social network theory and methodology to understand how teacher collaboration can support or constrain teaching, learning, and educational change. This article provides a critical synthesis of educational literature on school social networks

  13. A Methodology for Physical Interconnection Decisions of Next Generation Transport Networks

    DEFF Research Database (Denmark)

    Gutierrez Lopez, Jose Manuel; Riaz, M. Tahir; Madsen, Ole Brun

    2011-01-01

The physical interconnection for optical transport networks has critical relevance in the overall network performance and deployment costs. As telecommunication services and technologies evolve, the provisioning of higher capacity and reliability levels is becoming essential for the proper development of Next Generation Networks. Currently, there is a lack of specific procedures that describe the basic guidelines to design such networks better than "best possible performance for the lowest investment". Therefore, research from different points of view will allow a broader space of possibilities when designing the physical network interconnection. This paper develops and presents a methodology to deal with aspects related to the interconnection problem of optical transport networks. This methodology is presented as independent puzzle pieces, covering diverse topics going from...

  14. A Methodology for Assessing Eco-Efficiency in Logistics Networks

    NARCIS (Netherlands)

    Quariguasi Frota Neto, J.; Walther, G.; Bloemhof, J.M.; Nunen, van J.A.E.E.; Spengler, T.

    2009-01-01

    Recent literature on sustainable logistics networks points to two important questions: (i) How to spot the preferred solution(s) balancing environmental and business concerns? (ii) How to improve the understanding of the trade-offs between these two dimensions? We posit that a visual exploration of

  15. A Methodology for Assessing Eco-efficiency in Logistics Networks

    NARCIS (Netherlands)

    J. Quariguasi Frota Neto (João); G. Walther; J.M. Bloemhof-Ruwaard (Jacqueline); J.A.E.E. van Nunen (Jo); T. Spengler

    2006-01-01

    textabstractRecent literature on sustainable logistics networks points to two important questions: (i) How to spot the preferred solution(s) balancing environmental and business concerns? (ii) How to improve the understanding of the trade-offs between these two dimensions? We posit that a complete

  17. Risk Based Inspection Methodology and Software Applied to Atmospheric Storage Tanks

    Science.gov (United States)

    Topalis, P.; Korneliussen, G.; Hermanrud, J.; Steo, Y.

    2012-05-01

    A new risk-based inspection (RBI) methodology and software is presented in this paper. The objective of this work is to allow management of the inspections of atmospheric storage tanks in the most efficient way, while, at the same time, accident risks are minimized. The software has been built on the new risk framework architecture, a generic platform facilitating efficient and integrated development of software applications using risk models. The framework includes a library of risk models and the user interface is automatically produced on the basis of editable schemas. This risk-framework-based RBI tool has been applied in the context of RBI for above-ground atmospheric storage tanks (AST) but it has been designed with the objective of being generic enough to allow extension to the process plants in general. This RBI methodology is an evolution of an approach and mathematical models developed for Det Norske Veritas (DNV) and the American Petroleum Institute (API). The methodology assesses damage mechanism potential, degradation rates, probability of failure (PoF), consequence of failure (CoF) in terms of environmental damage and financial loss, risk and inspection intervals and techniques. The scope includes assessment of the tank floor for soil-side external corrosion and product-side internal corrosion and the tank shell courses for atmospheric corrosion and internal thinning. It also includes preliminary assessment for brittle fracture and cracking. The data are structured according to an asset hierarchy including Plant, Production Unit, Process Unit, Tag, Part and Inspection levels and the data are inherited / defaulted seamlessly from a higher hierarchy level to a lower level. The user interface includes synchronized hierarchy tree browsing, dynamic editor and grid-view editing and active reports with drill-in capability.
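The floor-thinning part of such an assessment can be caricatured with a common rule of thumb in inspection practice: the remaining corrosion life is the thickness allowance divided by the degradation rate, and the next inspection is due within half of it, subject to a regulatory cap. The sketch below is illustrative only, not DNV's or API's actual model, and the numbers are hypothetical:

```python
def floor_inspection_interval(t_actual, t_min, corrosion_rate, cap=20.0):
    """Years until the next internal inspection (half-remaining-life rule).

    t_actual: measured floor plate thickness (mm)
    t_min: minimum allowable thickness (mm)
    corrosion_rate: combined soil-side + product-side rate (mm/year)
    cap: regulatory upper bound on the interval (years)
    """
    remaining_life = (t_actual - t_min) / corrosion_rate
    return min(remaining_life / 2.0, cap)

# e.g. 10 mm measured, 6 mm minimum, 0.2 mm/yr -> 20 yr remaining life
```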

  18. Applied methodology for replacement pipe arcs in integral pipelines TE 'Oslomej'

    Directory of Open Access Journals (Sweden)

    Temelkoska Bratica K.

    2016-01-01

Full Text Available The integral pipelines in thermal power plants form a linear spatial bearing construction with high operating parameters and complex static and dynamic loads. Along their entire length, the integral pipelines hang on spring hangers from the boiler building, where the boiler is placed, next to the machine hall where the turbine is placed. It is therefore important to monitor their condition and to remove any defects revealed by the applied methods. This paper describes the methodology for replacing a pipe arch on one of the integral pipelines, the line for hot superheated steam. It also presents the test methods that led to this methodology for examining and evaluating the condition of the material of the pipe arch that had been in service and of the new pipe arch to be installed. The approach, the replacement technology, the anchoring of the steam line, the welding technology, etc., as well as the preparation of the final as-built design documentation, are also covered in this paper.

  19. Case Study: LCA Methodology Applied to Materials Management in a Brazilian Residential Construction Site

    Directory of Open Access Journals (Sweden)

    João de Lassio

    2016-01-01

Full Text Available The construction industry is increasingly concerned with improving the social, economic, and environmental indicators of sustainability. More than ever, the growing demand for construction materials reflects increased consumption of raw materials and energy, particularly during the phases of extraction, processing, and transportation of materials. This work aims to help decision-makers and to promote life cycle thinking in the construction industry. For this purpose, the life cycle assessment (LCA) methodology was chosen to analyze the environmental impacts of building materials used in the construction of a residence project in São Gonçalo, Rio de Janeiro, Brazil. The LCA methodology, based on ISO 14040 and ISO 14044 guidelines, is applied with available databases and the SimaPro program. As a result, this work shows that there is a substantial waste of nonrenewable energy, increasing global warming and harm to human health in this type of construction. This study also points out that, for this type of Brazilian construction, ceramic materials account for a high percentage of the total mass of a building and are thus responsible for the majority of environmental impacts.
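At its simplest, the contribution analysis behind a finding like the ceramic-materials one reduces to mass × characterization factor per material for a given impact category. The sketch below uses purely illustrative placeholder numbers, not real LCI data:

```python
def impact_shares(masses_kg, factors):
    """Share of total impact per material for one impact category
    (factors e.g. in kg CO2-eq per kg of material)."""
    impacts = {m: masses_kg[m] * factors[m] for m in masses_kg}
    total = sum(impacts.values())
    return {m: round(v / total, 3) for m, v in impacts.items()}

# illustrative placeholders only -- a real study takes factors from an LCI database
masses = {"ceramics": 120_000, "concrete": 80_000, "steel": 10_000}
factors = {"ceramics": 0.25, "concrete": 0.12, "steel": 1.9}
```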

  20. Assessment of network perturbation amplitudes by applying high-throughput data to causal biological networks

    Directory of Open Access Journals (Sweden)

    Martin Florian

    2012-05-01

Full Text Available Abstract Background High-throughput measurement technologies produce data sets that have the potential to elucidate the biological impact of disease, drug treatment, and environmental agents on humans. The scientific community faces an ongoing challenge in the analysis of these rich data sources to more accurately characterize biological processes that have been perturbed at the mechanistic level. Here, a new approach is built on previous methodologies in which high-throughput data was interpreted using prior biological knowledge of cause and effect relationships. These relationships are structured into network models that describe specific biological processes, such as inflammatory signaling or cell cycle progression. This enables quantitative assessment of network perturbation in response to a given stimulus. Results Four complementary methods were devised to quantify treatment-induced activity changes in processes described by network models. In addition, companion statistics were developed to qualify significance and specificity of the results. This approach is called Network Perturbation Amplitude (NPA) scoring because the amplitudes of treatment-induced perturbations are computed for biological network models. The NPA methods were tested on two transcriptomic data sets: normal human bronchial epithelial (NHBE) cells treated with the pro-inflammatory signaling mediator TNFα, and HCT116 colon cancer cells treated with the CDK cell cycle inhibitor R547. Each data set was scored against network models representing different aspects of inflammatory signaling and cell cycle progression, and these scores were compared with independent measures of pathway activity in NHBE cells to verify the approach. The NPA scoring method successfully quantified the amplitude of TNFα-induced perturbation for each network model when compared against NF-κB nuclear localization and cell number. In addition, the degree and specificity to which CDK

  1. Applying a learning design methodology in the flipped classroom approach – empowering teachers to reflect

    DEFF Research Database (Denmark)

    Triantafyllou, Evangelia; Kofoed, Lise; Purwins, Hendrik

    2016-01-01

    One of the recent developments in teaching that heavily relies on current technology is the “flipped classroom” approach. In a flipped classroom the traditional lecture and homework sessions are inverted. Students are provided with online material in order to gain necessary knowledge before class......, while class time is devoted to clarifications and application of this knowledge. The hypothesis is that there could be deep and creative discussions when teacher and students physically meet. This paper discusses how the learning design methodology can be applied to represent, share and guide educators...... and values of different stakeholders (i.e. institutions, educators, learners, and external agents), which influence the design and success of flipped classrooms. Moreover, it looks at the teaching cycle from a flipped instruction model perspective and adjusts it to cater for the reflection loops educators...

  2. Applying Fuzzy Artificial Neural Network OSPF to develop Smart ...

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... Fuzzy Artificial Neural Network to create Smart Routing. Protocol Algorithm. ... manufactured mental aptitude strategy. The capacity to study .... Based Energy Efficiency in Wireless Sensor Networks: A Survey",. International ...

  3. A reverse engineering algorithm for neural networks, applied to the subthalamopallidal network of basal ganglia.

    Science.gov (United States)

    Floares, Alexandru George

    2008-01-01

    Modeling neural networks with ordinary differential equations systems is a sensible approach, but also very difficult. This paper describes a new algorithm based on linear genetic programming which can be used to reverse engineer neural networks. The RODES algorithm automatically discovers the structure of the network, including neural connections, their signs and strengths, estimates its parameters, and can even be used to identify the biophysical mechanisms involved. The algorithm is tested on simulated time series data, generated using a realistic model of the subthalamopallidal network of basal ganglia. The resulting ODE system is highly accurate, and results are obtained in a matter of minutes. This is because the problem of reverse engineering a system of coupled differential equations is reduced to one of reverse engineering individual algebraic equations. The algorithm allows the incorporation of common domain knowledge to restrict the solution space. To our knowledge, this is the first time a realistic reverse engineering algorithm based on linear genetic programming has been applied to neural networks.
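The key reduction in this abstract, from a system of coupled ODEs to individual algebraic equations, can be illustrated with a far simpler stand-in than the RODES algorithm: numerically differentiate each state variable, then least-squares fit the derivative against candidate basis terms, one equation at a time. A sketch under assumed data and basis (not the authors' method):

```python
import numpy as np

def fit_rhs(t, X, basis):
    """Recover coefficients c such that dX/dt ≈ Phi(X) @ c, per state variable.

    t: (n,) time grid; X: (n, d) state trajectories;
    basis: list of callables mapping X -> (n,) candidate term values.
    """
    dX = np.gradient(X, t, axis=0)                # finite-difference derivatives
    Phi = np.column_stack([f(X) for f in basis])  # library of candidate terms
    coefs, *_ = np.linalg.lstsq(Phi, dX, rcond=None)
    return coefs

# known system x' = -x, so fitting the basis term x should recover -1
t = np.linspace(0.0, 2.0, 2001)
X = np.exp(-t)[:, None]
c = fit_rhs(t, X, [lambda X: X[:, 0]])
```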

  4. ILUC mitigation case studies Tanzania. Applying the Low Indirect Impact Biofuel (LIIB) Methodology to Tanzanian projects

    Energy Technology Data Exchange (ETDEWEB)

    Van de Staaij, J.; Spoettle, M.; Weddige, U.; Toop, G. [Ecofys, Utrecht (Netherlands)

    2012-10-15

NL Agency is supporting WWF and the Secretariat of the Roundtable on Sustainable Biofuels (RSB) with the development of a certification module for biofuels with a low risk of indirect land use change (ILUC), the Low Indirect Impact Biofuel (LIIB) methodology (www.LIIBmethodology.org). The LIIB methodology was developed to certify that biomass feedstock for biofuels has been produced with a low risk of indirect impacts. It is designed as an independent module that can be added to biofuel policies and existing certification systems for sustainable biofuel and/or feedstock production, such as the RSB Standard, RSPO or NTA8080. It presents detailed ILUC mitigation approaches for four different solution types field-tested and audited in international pilots. Within the Global Sustainable Biomass programme and the Sustainable Biomass Import programme, coordinated by NL Agency, three projects are working on sustainable jatropha in Tanzania. Ecofys has been commissioned by NL Agency to contribute to the further development of the LIIB methodology by applying it to these three jatropha projects in Tanzania. All three projects, located in the North of Tanzania, address sustainability in one way or another, but focus on the direct effects of jatropha cultivation and use. Interestingly, they nevertheless seem to apply different methods that could also minimise negative indirect impacts, including ILUC. Bioenergy feedstock production can have unintended consequences well outside the boundary of production operations. These are indirect impacts, which cannot be directly attributed to a particular operation. The most cited indirect impacts are ILUC and food/feed commodity price increases (an indirect impact on food security). ILUC can occur when existing cropland is used to cover the feedstock demand of additional biofuel production. When this displaces the previous use of the land (e.g. food production) this can lead to expansion of land use to new areas (e.g. deforestation) when

  5. Equity portfolio optimization: A DEA based methodology applied to the Zagreb Stock Exchange

    Directory of Open Access Journals (Sweden)

    Margareta Gardijan

    2015-10-01

Full Text Available Most strategies for selecting portfolios focus on utilizing solely market data and implicitly assume that stock markets communicate all relevant information to all market stakeholders, and that these markets cannot be influenced by investor activities. However convenient, this is a limited approach, especially when applied to small and illiquid markets such as the Croatian market, where such assumptions are hardly realistic. Thus, there is a demand for including other sources of data, such as financial reports. This research poses the question of whether financial ratios as criteria for stock selection are of any use to Croatian investors. Financial and market data from selected companies publicly listed on the Croatian capital market are used. A two-stage portfolio selection strategy is applied, where the first stage involves selecting stocks based on their respective Data Envelopment Analysis (DEA) efficiency scores. DEA models are becoming popular in stock portfolio selection given that the methodology includes numerous models that provide great flexibility in selecting inputs and outputs, which in turn are considered as criteria for portfolio selection. Accordingly, there is much room for improvement of the currently proposed strategies for selecting portfolios. In the second stage, two portfolio-weighting strategies are applied, using equal proportions and score-weighting. To show whether these strategies create outstanding out-of-sample portfolios over time, time-dependent DEA Window Analysis is applied using a reference time of one year, and portfolio returns are compared with the market portfolio for each period. It is found that the financial data are a significant indicator of the future performance of a stock, and that a DEA-based portfolio strategy outperforms the market return.
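The first-stage efficiency scores come from solving one linear program per stock (DMU). In the input-oriented CCR model in multiplier form, each DMU chooses non-negative input and output weights that maximize its weighted output, normalized so its own weighted input equals 1, while no DMU's weighted output may exceed its weighted input. A hedged sketch using SciPy; the choice of financial ratios as inputs/outputs is the analyst's, and is only suggested in the comments:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_scores(X, Y):
    """Input-oriented CCR efficiency per DMU.

    X: (n, m) inputs (e.g. debt ratio, P/E); Y: (n, s) outputs (e.g. ROE).
    Returns efficiencies in (0, 1]; 1 means on the efficient frontier.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for k in range(n):
        # decision variables: output weights u (s of them), then input weights v (m)
        c = np.concatenate([-Y[k], np.zeros(m)])             # maximize u . y_k
        A_ub = np.hstack([Y, -X])                            # u . y_j - v . x_j <= 0
        A_eq = np.concatenate([np.zeros(s), X[k]])[None, :]  # v . x_k = 1
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                      A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (s + m), method="highs")
        scores.append(-res.fun)
    return np.array(scores)
```

With two DMUs producing the same output from inputs 1 and 2, the first is efficient (score 1) and the second scores 0.5.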

  6. System-Level Design Methodologies for Networked Multiprocessor Systems-on-Chip

    DEFF Research Database (Denmark)

    Virk, Kashif Munir

    2008-01-01

    is the first such attempt in the published literature. The second part of the thesis deals with the issues related to the development of system-level design methodologies for networked multiprocessor systems-on-chip at various levels of design abstraction with special focus on the modeling and design...... at the system-level. The multiprocessor modeling framework is then extended to include models of networked multiprocessor systems-on-chip which is then employed to model wireless sensor networks both at the sensor node level as well as the wireless network level. In the third and the final part, the thesis...... to the transaction-level model. The thesis, as a whole makes contributions by describing a design methodology for networked multiprocessor embedded systems at three layers of abstraction from system-level through transaction-level to the cycle accurate level as well as demonstrating it practically by implementing...

  7. Instrumentation for Scientific Computing in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics.

    Science.gov (United States)

    1987-10-01

Instrumentation grant to purchase equipment for support of research in neural networks, information science, artificial intelligence, and applied mathematics. Contract AFOSR 86-0282. Principal Investigator: Stephen

  8. A methodological framework applied to the choice of the best method in replacement of nuclear systems

    International Nuclear Information System (INIS)

    Vianna Filho, Alfredo Marques

    2009-01-01

The economic equipment replacement problem is a central question in nuclear engineering. On the one hand, new equipment is more attractive given its better performance, better reliability, lower maintenance cost, etc.; new equipment, however, requires a higher initial investment. On the other hand, old equipment represents the opposite case, with lower performance, lower reliability, and especially higher maintenance costs, but in contrast lower financial and insurance costs. All these costs can be weighed with deterministic and probabilistic methods applied to the study of equipment replacement. Two distinct types of problem are examined: replacement imposed by wear and replacement imposed by failures. Deterministic methods are discussed for solving the problem of nuclear system replacement imposed by wear, and probabilistic methods for the problem of replacement imposed by failures. The aim of this paper is to present a methodological framework for choosing the most useful method to apply to the nuclear system replacement problem. (author)
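A standard deterministic method for wear-driven replacement is to minimize the equivalent annual cost (EAC) over the keep-length: discount purchase, maintenance, and salvage to present value, then annualize with the capital recovery factor. A minimal sketch, with all figures hypothetical:

```python
def eac(price, salvage, maint, i):
    """Equivalent annual cost of keeping the asset for len(maint) years.

    price: purchase cost; salvage: resale value at disposal;
    maint: maintenance cost for each year of service; i: interest rate (> 0).
    """
    n = len(maint)
    pv = price - salvage / (1 + i) ** n              # net capital cost, discounted
    pv += sum(m / (1 + i) ** (t + 1) for t, m in enumerate(maint))
    crf = i * (1 + i) ** n / ((1 + i) ** n - 1)      # capital recovery factor
    return pv * crf

def best_replacement_age(price, salvages, maints, i):
    """Keep-length (in years) with the lowest EAC."""
    ages = range(1, len(maints) + 1)
    return min(ages, key=lambda n: eac(price, salvages[n - 1], maints[:n], i))
```

With flat maintenance the optimum is to keep the asset as long as possible; steeply rising maintenance (the wear case) pushes the optimum earlier.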

  9. Attack Methodology Analysis: Emerging Trends in Computer-Based Attack Methodologies and Their Applicability to Control System Networks

    Energy Technology Data Exchange (ETDEWEB)

    Bri Rolston

    2005-06-01

Threat characterization is a key component in evaluating the threat faced by control systems. Without a thorough understanding of the threat faced by critical infrastructure networks, adequate resources cannot be allocated or directed effectively to the defense of these systems. Traditional methods of threat analysis focus on identifying the capabilities and motivations of a specific attacker, assessing the value the adversary would place on targeted systems, and deploying defenses according to the threat posed by the potential adversary. However, too many effective exploits and tools exist, easily accessible to anyone with an Internet connection, minimal technical skills, and a much-reduced motivational threshold, for the field of potential adversaries to be narrowed effectively. Understanding how hackers evaluate new IT security research and incorporate significant new ideas into their own tools provides a means of anticipating how IT systems are most likely to be attacked in the future. This research, Attack Methodology Analysis (AMA), could supply pertinent information on how to detect and stop new types of attacks. Since the exploit methodologies and attack vectors developed in the general Information Technology (IT) arena can be converted for use against control system environments, assessing areas in which cutting-edge exploit development and remediation techniques are occurring can provide significant intelligence for control system network exploitation and defense, and a means of assessing threat without identifying specific capabilities of individual opponents. Attack Methodology Analysis begins with the study of what exploit technology and attack methodologies are being developed in the Information Technology (IT) security research community, within the black and white hat communities. Once a solid understanding of the cutting-edge security research is established, emerging trends in attack methodology can be identified and the gap between

  10. Summary of discrete fracture network modelling as applied to hydrogeology of the Forsmark and Laxemar sites

    International Nuclear Information System (INIS)

    Hartley, Lee; Roberts, David

    2013-04-01

The Swedish Nuclear Fuel and Waste Management Company (SKB) is responsible for the development of a deep geological repository for spent nuclear fuel. The permitting of such a repository is informed by assessment studies to estimate the risks of the disposal method. One of the potential risks involves the transport of radionuclides in groundwater from defective canisters in the repository to the accessible environment. The Swedish programme for geological disposal of spent nuclear fuel has involved undertaking detailed surface-based site characterisation studies at two different sites, Forsmark and Laxemar-Simpevarp. A key component of the hydrogeological modelling of these two sites has been the development of Discrete Fracture Network (DFN) concepts of groundwater flow through the fractures in the crystalline rocks present. A discrete fracture network model represents some of the characteristics of fractures explicitly, such as their orientation, intensity, size, spatial distribution, shape and transmissivity. This report summarises how the discrete fracture network methodology has been applied to model groundwater flow and transport at Forsmark and Laxemar. The account has involved summarising reports previously published by SKB between 2001 and 2011. The report describes the conceptual framework and assumptions used in interpreting site data, and in particular how data has been used to calibrate the various parameters that define the discrete fracture network representation of bedrock hydrogeology against borehole geologic and hydraulic data. Steps taken to confirm whether the developed discrete fracture network models provide a description of regional-scale groundwater flow and solute transport consistent with wider hydraulic tests and hydrochemical data from Forsmark and Laxemar are discussed. It illustrates the use of derived hydrogeological DFN models in the simulations of the temperate period hydrogeology that provided input to radionuclide transport
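One concrete ingredient of a DFN model is the fracture size distribution, commonly taken as a power law above a minimum radius. A toy inverse-transform sampler; the shape parameter and minimum radius below are illustrative assumptions, not the calibrated Forsmark or Laxemar values:

```python
import random

def sample_fracture_radii(n, r_min, k_r, seed=1):
    """Draw n fracture radii from a Pareto (power-law) size distribution,
    P(R > r) = (r_min / r) ** k_r, via inverse-transform sampling."""
    rng = random.Random(seed)
    return [r_min * rng.random() ** (-1.0 / k_r) for _ in range(n)]

# illustrative parameters: 10 000 fractures, 0.5 m minimum radius, shape 2.6
radii = sample_fracture_radii(10_000, r_min=0.5, k_r=2.6)
```

For k_r > 1 the distribution's mean is r_min * k_r / (k_r - 1); the heavy tail is what makes a few large fractures dominate network connectivity.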

  12. Design Methodology of a Sensor Network Architecture Supporting Urgent Information and Its Evaluation

    Science.gov (United States)

    Kawai, Tetsuya; Wakamiya, Naoki; Murata, Masayuki

Wireless sensor networks are expected to become an important social infrastructure which helps make our lives safe, secure, and comfortable. In this paper, we propose a design methodology for an architecture for fast and reliable transmission of urgent information in wireless sensor networks. In this methodology, instead of establishing a single complicated monolithic mechanism, several simple and fully distributed control mechanisms, which function at different spatial and temporal levels, are incorporated on each node. These mechanisms work autonomously and independently, responding to the surrounding situation. We also show an example of a network architecture designed following the methodology. We evaluated the performance of the architecture by extensive simulation and practical experiments, and the results supported our claim.
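One of the simple per-node mechanisms such an architecture can layer is strict priority queueing, so urgent readings drain before routine traffic at every hop. An illustrative sketch of the idea, not the authors' actual protocol:

```python
import heapq

class UrgentFirstQueue:
    """Transmit queue in which urgent packets always dequeue before normal
    ones; ties are broken FIFO via an insertion counter."""
    URGENT, NORMAL = 0, 1

    def __init__(self):
        self._heap = []
        self._seq = 0

    def push(self, packet, urgent=False):
        prio = self.URGENT if urgent else self.NORMAL
        heapq.heappush(self._heap, (prio, self._seq, packet))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]
```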

  13. Methodology for risk assessment and reliability applied for pipeline engineering design and industrial valves operation

    Energy Technology Data Exchange (ETDEWEB)

    Silveira, Dierci [Universidade Federal Fluminense (UFF), Volta Redonda, RJ (Brazil). Escola de Engenharia Industrial e Metalurgia. Lab. de Sistemas de Producao e Petroleo e Gas], e-mail: dsilveira@metal.eeimvr.uff.br; Batista, Fabiano [CICERO, Rio das Ostras, RJ (Brazil)

    2009-07-01

Two kinds of situations may be distinguished for estimating operating reliability when maneuvering industrial valves and the probability of undesired events in pipelines and industrial plants: situations in which the risk is identified in repetitive cycles of operations, and situations in which there is a permanent hazard due to project configurations introduced by decisions during the engineering design definition stage. Estimating reliability based on the influence of design options requires the choice of a numerical index, which may include a composite of human operating parameters based on biomechanics and ergonomics data. We first consider the design conditions under which plant or pipeline operator reliability concepts can be applied when operating industrial valves, and then describe in detail the ergonomics and biomechanics risks that lend themselves to engineering design database development and human reliability modeling and assessment. This engineering design database development and reliability modeling is based on a group of engineering design and biomechanics parameters likely to lead to over-exertion forces and working postures, which are themselves associated with the functioning of a particular plant or pipeline. This ergonomics- and biomechanics-based approach to the more common industrial valve positionings in the plant layout is developed through a methodology to assess physical effort and operator reach, combining various elementary operating situations. These procedures can be combined with genetic algorithm modeling and the four elements of man-machine systems: the individual, the task, the machinery, and the environment. The proposed methodology should be viewed not as competing with traditional reliability and risk assessment but rather as complementary, since it provides parameters related to physical effort values for valve operation and for workspace design and usability. (author)

  14. NeOn Methodology for Building Ontology Networks: Specification, Scheduling and Reuse

    OpenAIRE

    Suárez-Figueroa, Mari Carmen

    2010-01-01

    A new ontology development paradigm has started; its emphasis lies on the reuse and possible subsequent reengineering of knowledge resources, on the collaborative and argumentative ontology development, and on the building of ontology networks; this new trend is the opposite of building new ontologies from scratch. To help ontology developers in this new paradigm, it is important to provide strong methodological support. This thesis presents some contributions to the methodological area of...

  15. A Proven Methodology for Developing Secure Software and Applying It to Ground Systems

    Science.gov (United States)

    Bailey, Brandon

    2016-01-01

    Part Two expands upon Part One in an attempt to translate the methodology for ground system personnel. The goal is to build upon the methodology presented in Part One by showing examples and details on how to implement the methodology. Section 1: Ground Systems Overview; Section 2: Secure Software Development; Section 3: Defense in Depth for Ground Systems; Section 4: What Now?

  16. Technologies, Methodologies and Challenges in Network Intrusion Detection and Prevention Systems

    Directory of Open Access Journals (Sweden)

    Nicoleta STANCIU

    2013-01-01

    Full Text Available This paper presents an overview of the technologies and the methodologies used in Network Intrusion Detection and Prevention Systems (NIDPS). Intrusion Detection and Prevention System (IDPS) technologies are differentiated by the types of events that IDPSs can recognize, by the types of devices that IDPSs monitor and by activity. NIDPSs monitor and analyze streams of network packets in order to detect security incidents. The main methodology used by NIDPSs is protocol analysis, which requires good knowledge of the theory of the main protocols: their definitions and how each protocol works.
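
    The protocol-analysis approach described above can be illustrated with a toy check of one protocol element against its known grammar. The rule below (an HTTP/1.x request-line pattern) and the function name are invented for illustration and are not taken from the paper; a real NIDPS validates many protocol layers and stateful exchanges.

```python
# Toy protocol analysis in the NIDPS spirit: flag traffic that violates the
# known grammar of a protocol element (illustrative rule only).
import re

# A legal HTTP/1.x request line: METHOD SP request-target SP HTTP/major.minor
REQUEST_LINE = re.compile(
    r"^(GET|HEAD|POST|PUT|DELETE|OPTIONS|TRACE|CONNECT) \S+ HTTP/\d\.\d$"
)

def analyze_request_line(line: str) -> bool:
    """Return True if the request line conforms to the protocol grammar."""
    return REQUEST_LINE.match(line) is not None

observed = [
    "GET /index.html HTTP/1.1",   # conforms
    "GETX /index.html HTTP/1.1",  # unknown method -> alert
    "GET /index.html",            # missing version -> alert
]
alerts = [l for l in observed if not analyze_request_line(l)]
```

    Anything that fails the grammar check is raised as an alert, which is exactly the "deviation from protocol definition" detection principle the abstract refers to.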

  17. ECO INVESTMENT PROJECT MANAGEMENT THROUGH TIME APPLYING ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Tamara Gvozdenović

    2007-06-01

    Full Text Available The concept of project management expresses an indispensable approach to investment projects. Time is often the most important factor in these projects. The artificial neural network is a data-processing paradigm inspired by the biological brain, and it is used in numerous different fields, among which is project management. This research is oriented to the application of artificial neural networks in managing the time of investment projects. The artificial neural networks are used to define the optimistic, the most probable and the pessimistic time in the PERT method. The program package Matlab: Neural Network Toolbox is used in data simulation. The feed-forward back-propagation network is chosen.
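
    Once the optimistic, most probable and pessimistic times are available (here supplied by ANNs), PERT combines them in the standard way. A minimal sketch; the three duration values are invented for illustration:

```python
# PERT expected duration and variance from optimistic (o), most probable (m)
# and pessimistic (p) time estimates -- the three values the ANNs provide.
def pert_expected(o: float, m: float, p: float) -> float:
    return (o + 4 * m + p) / 6.0

def pert_variance(o: float, p: float) -> float:
    return ((p - o) / 6.0) ** 2

# Illustrative activity: 4, 6 and 14 days.
te = pert_expected(4, 6, 14)   # expected duration: 7.0 days
var = pert_variance(4, 14)     # variance of the duration estimate
```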

  18. Distributed computing methodology for training neural networks in an image-guided diagnostic application.

    Science.gov (United States)

    Plagianakos, V P; Magoulas, G D; Vrahatis, M N

    2006-03-01

    Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
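
    The core idea of the abstract, partitioning the training set and summing per-partition error and gradient contributions, can be sketched as follows. The parallel virtual machine machinery is replaced here by a plain sequential loop, and a one-parameter linear model stands in for the neural network; both simplifications are ours, not the paper's.

```python
# Data-parallel evaluation of error and gradient: each "worker" handles one
# partition of the training set; the master sums the partial results.
# A linear model y = w*x fitted by squared error stands in for the network.
def partial_error_and_grad(w, partition):
    err, grad = 0.0, 0.0
    for x, y in partition:
        r = w * x - y
        err += r * r
        grad += 2 * r * x
    return err, grad

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
partitions = [data[:2], data[2:]]          # one chunk per worker

w = 0.0
for _ in range(200):                       # descend on the summed gradient
    parts = [partial_error_and_grad(w, p) for p in partitions]
    grad = sum(g for _, g in parts)
    w -= 0.01 * grad
# w converges to 2.0, the slope that generated the data
```

    The low synchronization the authors report comes from the fact that workers only exchange these summed scalars (or gradient vectors) once per iteration, not the training data itself.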

  19. Applying Lean-Six-Sigma Methodology in radiotherapy: Lessons learned by the breast daily repositioning case.

    Science.gov (United States)

    Mancosu, Pietro; Nicolini, Giorgia; Goretti, Giulia; De Rose, Fiorenza; Franceschini, Davide; Ferrari, Chiara; Reggiori, Giacomo; Tomatis, Stefano; Scorsetti, Marta

    2018-03-06

    Lean Six Sigma Methodology (LSSM) was introduced in industry to provide near-perfect services in large processes by reducing the occurrence of improbable events. LSSM was applied to redesign the 2D-2D breast repositioning process (Lean) through retrospective analysis of the database (Six Sigma). Breast patients with daily 2D-2D matching before RT were considered. The five DMAIC (define, measure, analyze, improve, and control) LSSM steps were applied. The process was retrospectively measured over 30 months (7/2014-12/2016) by querying the RT Record&Verify database. Two Lean instruments (Poka-Yoke and Visual Management) were considered for improving the process. The new procedure was checked over 6 months (1-6/2017). 14,931 consecutive shifts from 1342 patients were analyzed. Only 0.8% of patients presented median shifts >1 cm. The major observed discrepancy was the monthly percentage of fractions with almost zero shifts (AZS = 13.2% ± 6.1%). An Ishikawa fishbone diagram helped in defining the main contributing causes of the discrepancy. A harmonized procedure involving a multidisciplinary team was defined to increase confidence in the matching procedure. AZS was reduced to 4.8% ± 0.6%. Furthermore, improved distribution symmetry (skewness moved from 1.4 to 1.1) and outlier reduction, verified by a decrease in kurtosis, demonstrated better "normalization" of the procedure after the LSSM application. LSSM was implemented in a RT department, allowing the breast repositioning matching procedure to be redesigned. Copyright © 2018 Elsevier B.V. All rights reserved.
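
    The "control" statistics cited above (skewness and kurtosis of the shift distribution) are ordinary moment statistics; a self-contained sketch on invented shift data, not the study's database:

```python
# Sample skewness and excess kurtosis, the two shape statistics used to
# verify "normalization" of the repositioning shifts (data are invented).
def moments(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5           # 0 for a symmetric distribution
    excess_kurt = m4 / m2 ** 2 - 3  # 0 for a normal distribution
    return skew, excess_kurt

shifts = [-0.4, -0.2, -0.1, 0.0, 0.0, 0.1, 0.2, 0.9]  # one outlier, in cm
skew, kurt = moments(shifts)        # positive skew driven by the outlier
```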

  20. Rainfall prediction methodology with binary multilayer perceptron neural networks

    Science.gov (United States)

    Esteves, João Trevizoli; de Souza Rolim, Glauco; Ferraudo, Antonio Sergio

    2018-05-01

    Precipitation over short periods of time is a phenomenon associated with high levels of uncertainty and variability. Given its nature, traditional forecasting techniques are expensive and computationally demanding. This paper presents a soft computing technique to forecast the occurrence of rainfall over short ranges of time with artificial neural networks (ANNs), in accumulated periods from 3 to 7 days for each climatic season, avoiding the need to predict rainfall amount. The premise is to reduce the variance and raise the bias of the data, and to lower the burden on quantitative models by acting as a filter that removes subsequent occurrences of zero rainfall values, which bias such models and reduce their performance. The models were developed with time series from ten agriculturally relevant regions in Brazil; these are the places with the longest available weather time series and the most deficient in accurate climate predictions. Sixty years of daily mean air temperature and accumulated precipitation were available and were used to estimate potential evapotranspiration and water balance; these were the variables used as inputs for the ANN models. The mean accuracy of the model over all accumulated periods was 78% in summer, 71% in winter, 62% in spring and 56% in autumn. It was identified that the effect of continentality, the effect of altitude and the volume of normal precipitation have a direct impact on the accuracy of the ANNs. The models peak in performance in well-defined seasons but lose accuracy in transitional seasons and in places under the influence of macroclimatic and mesoclimatic effects, which indicates that this technique can be used to indicate the imminence of rainfall, with some limitations.
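
    The occurrence-only (binary) formulation can be illustrated with a single logistic unit in place of the paper's full MLP. The water-balance input values and rain/no-rain labels below are invented toy data, not the Brazilian station series:

```python
# Minimal binary "rain occurrence" classifier: one logistic neuron trained by
# stochastic gradient descent. The input stands in for a water-balance
# feature (toy values); 1 = rain occurred in the accumulated period.
import math

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]   # e.g. accumulated water surplus
y = [0, 0, 0, 1, 1, 1]

w, b = 0.0, 0.0
for _ in range(5000):
    for xi, yi in zip(x, y):
        p = 1.0 / (1.0 + math.exp(-(w * xi + b)))   # sigmoid output
        w -= 0.1 * (p - yi) * xi                    # cross-entropy gradient
        b -= 0.1 * (p - yi)

predictions = [1 if 1.0 / (1.0 + math.exp(-(w * xi + b))) >= 0.5 else 0
               for xi in x]
accuracy = sum(p == t for p, t in zip(predictions, y)) / len(y)
```

    Predicting only occurrence, as here, is what lets the approach act as a filter in front of a quantitative rainfall-amount model.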

  1. MAIA - Method for Architecture of Information Applied: methodological construct of information processing in complex contexts

    Directory of Open Access Journals (Sweden)

    Ismael de Moura Costa

    2017-04-01

    Full Text Available Introduction: This paper presents the evolution of MAIA, the Method for Architecture of Information Applied, its structure, the results obtained and three practical applications. Objective: To propose a methodological construct for the treatment of complex information, distinguishing information spaces and revealing the inherent configurations of those spaces. Methodology: The argument is elaborated from theoretical research of an analytical character, using distinction as a way to express concepts. Phenomenology is used as the philosophical position, which considers the correlation between Subject↔Object. The research also considers the notion of interpretation as an integrating element for concept definition. With these postulates, the steps to transform the information spaces are formulated. Results: The article shows how the method is structured to process information in its contexts, starting from a succession of evolutionary cycles, divided into moments, which, in their turn, evolve into transformation acts. Conclusions: Besides presenting possible applications as a scientific method, the article presents the method as a configuration tool for information spaces and as a generator of ontologies. Last, but not least, it presents a brief summary of the analysis made by researchers who have already evaluated the method considering the three aspects mentioned.

  2. Response surface methodology applied to the study of the microwave-assisted synthesis of quaternized chitosan.

    Science.gov (United States)

    dos Santos, Danilo Martins; Bukzem, Andrea de Lacerda; Campana-Filho, Sérgio Paulo

    2016-03-15

    A quaternized derivative of chitosan, namely N-(2-hydroxy)-propyl-3-trimethylammonium chitosan chloride (QCh), was synthesized by reacting glycidyltrimethylammonium chloride (GTMAC) and chitosan (Ch) in acid medium under microwave irradiation. A full-factorial 2³ central composite design and response surface methodology (RSM) were applied to evaluate the effects of the GTMAC/Ch molar ratio, reaction time and temperature on the reaction yield, average degree of quaternization (DQ) and intrinsic viscosity ([η]) of QCh. The GTMAC/Ch molar ratio was the most important factor affecting the response variables, and RSM results showed that highly substituted QCh (DQ = 71.1%) was produced at high yield (164%) when the reaction was carried out for 30 min at 85°C using a GTMAC/Ch molar ratio of 6/1. Results showed that microwave-assisted synthesis is much faster (≤30 min) than conventional reaction procedures (>4 h) carried out under similar conditions except for the use of microwave irradiation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Fast frequency hopping codes applied to SAC optical CDMA network

    Science.gov (United States)

    Tseng, Shin-Pin

    2015-06-01

    This study designed a fast frequency hopping (FFH) code family suitable for application in spectral-amplitude-coding (SAC) optical code-division multiple-access (CDMA) networks. The FFH code family can effectively suppress the effects of multiuser interference and had its origin in the frequency hopping code family. Additional codes were developed as secure codewords for enhancing the security of the network. In considering the system cost and flexibility, simple optical encoders/decoders using fiber Bragg gratings (FBGs) and a set of optical securers using two arrayed-waveguide grating (AWG) demultiplexers (DeMUXs) were also constructed. Based on a Gaussian approximation, expressions for evaluating the bit error rate (BER) and spectral efficiency (SE) of SAC optical CDMA networks are presented. The results indicated that the proposed SAC optical CDMA network exhibited favorable performance.

  4. Methodology for Applying Cyber Security Risk Evaluation from BN Model to PSA Model

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Jin Soo; Heo, Gyun Young [Kyung Hee University, Youngin (Korea, Republic of); Kang, Hyun Gook [KAIST, Dajeon (Korea, Republic of); Son, Han Seong [Joongbu University, Chubu (Korea, Republic of)

    2014-08-15

    There are several advantages to using digital equipment, such as cost, convenience, and availability, and replacing analog I and C equipment with digital equipment is inevitable. Nuclear facilities have already started applying digital systems to I and C. However, nuclear facilities must manage this change even though using digital equipment is difficult due to the required high level of safety, irradiation embrittlement, and cyber security. Cyber security, one of the important concerns with digital equipment, can affect the whole integrity of nuclear facilities. For instance, cyber-attacks such as the SQL Slammer worm, Stuxnet, Duqu, and Flame have targeted such facilities. The regulatory authorities have published many regulatory requirement documents, such as U.S. NRC Regulatory Guides 5.71 and 1.152, IAEA guide NSS-17, IEEE standards, and the KINS Regulatory Guide. One of the important problems in cyber security research for nuclear facilities is the difficulty of obtaining data through penetration experiments. Therefore, we built a cyber security risk evaluation model with a Bayesian network (BN) for the nuclear reactor protection system (RPS), one of the safety-critical systems that trips the reactor when an accident happens in the facility. BN can be used to overcome these problems. We propose a method to apply the BN cyber security model to the probabilistic safety assessment (PSA) model, which has been used for safety assessment of the systems, structures and components of a facility. The proposed method will be able to provide insight into safety as well as cyber risk to the facility.
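
    A minimal numeric illustration of the BN-to-PSA idea follows. The structure (attack → compromise → RPS failure on demand) and every probability are invented for illustration; the paper's actual model for the RPS is far richer.

```python
# Tiny Bayesian network: Attack -> Compromise -> RPS failure on demand.
# All probabilities are invented for illustration.
p_attack = 0.1                  # P(cyber attack attempted)
p_comp_given_attack = 0.2       # P(RPS compromised | attack)
p_fail_given_comp = 0.5         # P(RPS fails to trip | compromised)
p_fail_base = 0.001             # P(RPS fails to trip | not compromised)

p_comp = p_attack * p_comp_given_attack   # marginal compromise probability
p_fail = p_comp * p_fail_given_comp + (1 - p_comp) * p_fail_base
# p_fail can then feed the PSA model as an RPS failure-on-demand probability,
# which is the sense in which the BN output plugs into the PSA event tree.
```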

  5. Methodology for Applying Cyber Security Risk Evaluation from BN Model to PSA Model

    International Nuclear Information System (INIS)

    Shin, Jin Soo; Heo, Gyun Young; Kang, Hyun Gook; Son, Han Seong

    2014-01-01

    There are several advantages to using digital equipment, such as cost, convenience, and availability, and replacing analog I and C equipment with digital equipment is inevitable. Nuclear facilities have already started applying digital systems to I and C. However, nuclear facilities must manage this change even though using digital equipment is difficult due to the required high level of safety, irradiation embrittlement, and cyber security. Cyber security, one of the important concerns with digital equipment, can affect the whole integrity of nuclear facilities. For instance, cyber-attacks such as the SQL Slammer worm, Stuxnet, Duqu, and Flame have targeted such facilities. The regulatory authorities have published many regulatory requirement documents, such as U.S. NRC Regulatory Guides 5.71 and 1.152, IAEA guide NSS-17, IEEE standards, and the KINS Regulatory Guide. One of the important problems in cyber security research for nuclear facilities is the difficulty of obtaining data through penetration experiments. Therefore, we built a cyber security risk evaluation model with a Bayesian network (BN) for the nuclear reactor protection system (RPS), one of the safety-critical systems that trips the reactor when an accident happens in the facility. BN can be used to overcome these problems. We propose a method to apply the BN cyber security model to the probabilistic safety assessment (PSA) model, which has been used for safety assessment of the systems, structures and components of a facility. The proposed method will be able to provide insight into safety as well as cyber risk to the facility

  6. A Methodology for a Sustainable CO2 Capture and Utilization Network

    DEFF Research Database (Denmark)

    Frauzem, Rebecca; Fjellerup, Kasper; Gani, Rafiqul

    2015-01-01

    hydrogenation highlights the application. This case study illustrates the utility of the utilization network and elements of the methodology being developed. In addition, the conversion process is linked with carbon capture to evaluate the overall sustainability. Finally, the production of the other raw...

  7. Leveraging the Methodological Affordances of Facebook: Social Networking Strategies in Longitudinal Writing Research

    Science.gov (United States)

    Sheffield, Jenna Pack; Kimme Hea, Amy C.

    2016-01-01

    While composition studies researchers have examined the ways social media are impacting our lives inside and outside of the classroom, less attention has been given to the ways in which social media--specifically Social Network Sites (SNSs)--may enhance our own research methods and methodologies by helping to combat research participant attrition…

  8. Flow regime identification methodology with MCNP-X code and artificial neural network

    International Nuclear Information System (INIS)

    Salgado, Cesar M.; Instituto de Engenharia Nuclear; Schirru, Roberto; Brandao, Luis E.B.; Pereira, Claudio M.N.A.

    2009-01-01

    This paper presents a flow regime identification methodology for multiphase systems in annular, stratified and homogeneous oil-water-gas regimes. The principle is based on recognition of the pulse height distributions (PHD) from gamma-rays with supervised artificial neural network (ANN) systems. The simulated detection geometry comprises two NaI(Tl) detectors and a dual-energy gamma-ray source. The measurement of scattered radiation enables the dual modality densitometry (DMD) measurement principle to be explored. Its basic principle is to combine the measurement of scattered and transmitted radiation in order to acquire information about the different flow regimes. The PHDs obtained by the detectors were used as input to the ANN. The data sets required for training and testing the ANN were generated by the MCNP-X code from static and ideal theoretical models of multiphase systems. The ANN correctly identified the three different flow regimes for all data sets evaluated. The results presented show that PHDs examined by an ANN may be successfully applied to flow regime identification. (author)

  9. Artificial neural networks applied to forecasting time series.

    Science.gov (United States)

    Montaño Moreno, Juan J; Palmer Pol, Alfonso; Muñoz Gracia, Pilar

    2011-04-01

    This study offers a description and comparison of the main models of Artificial Neural Networks (ANN) which have proved to be useful in time series forecasting, and also a standard procedure for the practical application of ANN in this type of task. The Multilayer Perceptron (MLP), Radial Base Function (RBF), Generalized Regression Neural Network (GRNN), and Recurrent Neural Network (RNN) models are analyzed. With this aim in mind, we use a time series made up of 244 time points. A comparative study establishes that the error made by the four neural network models analyzed is less than 10%. In accordance with the interpretation criteria of this performance, it can be concluded that the neural network models show a close fit regarding their forecasting capacity. The model with the best performance is the RBF, followed by the RNN and MLP. The GRNN model is the one with the worst performance. Finally, we analyze the advantages and limitations of ANN, the possible solutions to these limitations, and provide an orientation towards future research.
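
    The "error less than 10%" comparison above rests on a relative forecasting-error measure; computing the mean absolute percentage error (MAPE) for a forecast is a few lines. The series values below are invented, not the study's 244-point series:

```python
# Mean absolute percentage error (MAPE), the kind of relative error measure
# behind a "< 10%" forecasting criterion (series values invented).
def mape(actual, forecast):
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

actual   = [100.0, 110.0, 120.0, 130.0]
forecast = [ 98.0, 112.0, 115.0, 133.0]
error = mape(actual, forecast)   # ~2.6% for these toy values
```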

  10. Applying the AcciMap methodology to investigate the tragic Sewol Ferry accident in South Korea.

    Science.gov (United States)

    Lee, Samuel; Moh, Young Bo; Tabibzadeh, Maryam; Meshkati, Najmedin

    2017-03-01

    This study applies the AcciMap methodology, originally proposed by Professor Jens Rasmussen (1997), to the analysis of the tragic Sewol Ferry accident in South Korea on April 16, 2014, which killed 304 mostly young people and is considered a national disaster in that country. This graphical representation, by incorporating the associated socio-technical factors into an integrated framework, provides a big picture illustrating the context in which an accident occurred as well as the interactions between different levels of the studied system that resulted in that event. In general, analysis of past accidents within the stated framework can define the patterns of hazards within an industrial sector. Such analysis can lead to the definition of preconditions for safe operations, which is a main focus of proactive risk management systems. In the case of the Sewol Ferry accident, much of the blame has been placed on the Sewol's captain and its crewmembers. However, according to this study, which relied on analyzing all available sources published in English and Korean, the disaster was the result of a series of lapses and disregard for safety across different levels of government and regulatory bodies, the Chonghaejin Company, and the Sewol's crewmembers. The primary layers of the AcciMap framework, which include the political environment and a non-proactive governmental body; inadequate regulations and their lax oversight and enforcement; poor safety culture; inattention to human factors issues; and a lack of and/or outdated standard operating and emergency procedures, were not limited to the maritime industry in South Korea and the Sewol Ferry accident: they could also afflict any safety-sensitive industry anywhere in the world. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Applying neural networks as software sensors for enzyme engineering.

    Science.gov (United States)

    Linko, S; Zhu, Y H; Linko, P

    1999-04-01

    The on-line control of enzyme-production processes is difficult, owing to the uncertainties typical of biological systems and to the lack of suitable on-line sensors for key process variables. For example, intelligent methods to predict the end point of fermentation could be of great economic value. Computer-assisted control based on artificial-neural-network models offers a novel solution in such situations. Well-trained feedforward-backpropagation neural networks can be used as software sensors in enzyme-process control; their performance can be affected by a number of factors.

  12. Applying a rateless code in content delivery networks

    Science.gov (United States)

    Suherman; Zarlis, Muhammad; Parulian Sitorus, Sahat; Al-Akaidi, Marwan

    2017-09-01

    Content delivery networks (CDN) allow internet providers to locate their services and to map their coverage onto networks without necessarily owning them. CDNs are part of the current internet infrastructure, supporting multi-server applications, especially social media. Various works have been proposed to improve CDN performance. Since accesses to social media servers tend to be short but frequent, adding redundancy to the transmitted packets, so that lost packets do not degrade information integrity, may improve service performance. This paper examines the implementation of a rateless code in the CDN infrastructure. The NS-2 evaluations show that the rateless code is able to reduce packet loss by up to 50%.
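
    The rateless idea can be sketched with a tiny LT-style XOR code and a peeling decoder. The index sets below are hand-picked so the demo decodes deterministically; real fountain codes draw symbol degrees pseudo-randomly from a soliton distribution, and nothing here is taken from the paper's implementation.

```python
# LT-style rateless coding sketch: each encoded symbol is the XOR of a subset
# of source blocks; the receiver "peels" degree-1 symbols until all blocks
# are recovered.
source = [0x12, 0x34, 0x56, 0x78]                   # four one-byte blocks
index_sets = [{0}, {1}, {0, 1, 2}, {2, 3}, {1, 3}]  # hand-picked for the demo

def encode(blocks, idx_sets):
    out = []
    for s in idx_sets:
        v = 0
        for i in s:
            v ^= blocks[i]
        out.append(v)
    return out

def peel_decode(symbols, idx_sets, n):
    recovered = [None] * n
    pending = [[set(s), v] for s, v in zip(idx_sets, symbols)]
    progress = True
    while progress:
        progress = False
        for sym in pending:
            s, v = sym
            for i in list(s):                # substitute known blocks
                if recovered[i] is not None:
                    v ^= recovered[i]
                    s.discard(i)
            sym[1] = v
            if len(s) == 1:                  # degree-1 symbol: recover a block
                i = next(iter(s))
                if recovered[i] is None:
                    recovered[i] = v
                    progress = True
    return recovered

decoded = peel_decode(encode(source, index_sets), index_sets, len(source))
```

    Because the sender can keep emitting fresh symbols until the receiver decodes, the code has no fixed rate, which is what makes it attractive for lossy CDN paths.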

  13. New challenges and opportunities in the eddy-covariance methodology for long-term monitoring networks

    Science.gov (United States)

    Papale, Dario; Fratini, Gerardo

    2013-04-01

    Eddy-covariance is the most direct and most commonly applied methodology for measuring exchange fluxes of mass and energy between ecosystems and the atmosphere. In recent years, the number of environmental monitoring stations deploying eddy-covariance systems has increased dramatically at the global level, exceeding 500 sites worldwide and covering most climatic and ecological regions. Several long-term environmental research infrastructures such as ICOS, NEON and AmeriFlux selected eddy-covariance as a method to monitor GHG fluxes and are currently working collaboratively towards defining common measurement standards, data processing approaches, QA/QC procedures and uncertainty estimation strategies, with the aim of increasing the defensibility of resulting fluxes and the intra- and inter-comparability of flux databases. In the meanwhile, the eddy-covariance research community keeps identifying technical and methodological flaws that, in some cases, can introduce - and may have introduced to date - significant biases in measured fluxes or increase their uncertainty. Among those, we identify three issues of presumably greater concern, namely: (1) strong underestimation of water vapour fluxes in closed-path systems, and its dependency on relative humidity; (2) flux biases induced by erroneous measurement of absolute gas concentrations; and (3) systematic errors due to underestimation of vertical wind variance in non-orthogonal anemometers. If not properly addressed, these issues can reduce the quality and reliability of the method, especially as a standard methodology in long-term monitoring networks. In this work, we review the state of the art regarding such problems and present new evidence based on field experiments as well as numerical simulations. Our analyses confirm the potential relevance of these issues but also hint at possible coping approaches, to minimize problems during setup design, data collection and post-field flux correction. Corrections are under
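
    The core eddy-covariance computation, the flux as the covariance of vertical wind and scalar fluctuations over an averaging period, F = mean(w′c′), takes only a few lines. The six samples below are invented toy values standing in for a high-frequency record:

```python
# Eddy-covariance flux: covariance of vertical wind speed w and scalar
# concentration c over an averaging period (toy samples, invented).
w = [0.1, -0.2, 0.3, -0.1, 0.2, -0.3]            # vertical wind [m/s]
c = [400.2, 399.8, 400.5, 400.0, 400.3, 399.7]   # e.g. CO2 [umol/mol]

w_mean = sum(w) / len(w)
c_mean = sum(c) / len(c)
# F = mean of the product of fluctuations w' = w - mean(w), c' = c - mean(c)
flux = sum((wi - w_mean) * (ci - c_mean) for wi, ci in zip(w, c)) / len(w)
# positive flux: net upward transport (emission); units follow the inputs
```

    Issue (3) in the abstract, underestimated vertical wind variance, biases exactly the w′ factor in this product, which is why it propagates directly into the flux.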

  14. Applying Real Options Thinking to Information Security in Networked Organizations

    NARCIS (Netherlands)

    Daneva, Maia

    2006-01-01

    An information security strategy of an organization participating in a networked business sets out the plans for designing a variety of actions that ensure confidentiality, availability, and integrity of company’s key information assets. The actions are concerned with authentication and

  15. Neural networks applied to the classification of remotely sensed data

    NARCIS (Netherlands)

    Mulder, Nanno; Spreeuwers, Lieuwe Jan

    1991-01-01

    A neural network with topology 2-8-8 is evaluated against the standard of supervised non-parametric maximum likelihood classification. The purpose of the evaluation is to compare the performance in terms of training speed and quality of classification. Classification is done on multispectral data

  16. The harmonics detection method based on neural network applied ...

    African Journals Online (AJOL)

    user

    Keywords: Artificial Neural Networks (ANN), p-q theory, (SAPF), Harmonics, Total ...

  17. NEW TECHNIQUES APPLIED IN ECONOMICS. ARTIFICIAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Constantin Ilie

    2009-05-01

    Full Text Available The present paper has the objective of informing the public regarding the use of new techniques for the modeling, simulation and forecasting of systems from different fields of activity. One of those techniques is the Artificial Neural Network, one of the artificial in

  18. Prediction of fracture toughness temperature dependence applying neural network

    Czech Academy of Sciences Publication Activity Database

    Dlouhý, Ivo; Hadraba, Hynek; Chlup, Zdeněk; Šmída, T.

    2011-01-01

    Roč. 11, č. 1 (2011), s. 9-14 ISSN 1451-3749 R&D Projects: GA ČR(CZ) GAP108/10/0466 Institutional research plan: CEZ:AV0Z20410507 Keywords : brittle to ductile transition * fracture toughness * artificial neural network * steels Subject RIV: JL - Materials Fatigue, Friction Mechanics

  19. Applying GRADE-CERQual to qualitative evidence synthesis findings-paper 3: how to assess methodological limitations.

    Science.gov (United States)

    Munthe-Kaas, Heather; Bohren, Meghan A; Glenton, Claire; Lewin, Simon; Noyes, Jane; Tunçalp, Özge; Booth, Andrew; Garside, Ruth; Colvin, Christopher J; Wainwright, Megan; Rashidian, Arash; Flottorp, Signe; Carlsen, Benedicte

    2018-01-25

    The GRADE-CERQual (Confidence in Evidence from Reviews of Qualitative research) approach has been developed by the GRADE (Grading of Recommendations Assessment, Development and Evaluation) Working Group to support the use of findings from qualitative evidence syntheses in decision-making, including guideline development and policy formulation. CERQual includes four components for assessing how much confidence to place in findings from reviews of qualitative research (also referred to as qualitative evidence syntheses): (1) methodological limitations, (2) coherence, (3) adequacy of data and (4) relevance. This paper is part of a series providing guidance on how to apply CERQual and focuses on CERQual's methodological limitations component. We developed the methodological limitations component by searching the literature for definitions, gathering feedback from relevant research communities and developing consensus through project group meetings. We tested the CERQual methodological limitations component within several qualitative evidence syntheses before agreeing on the current definition and principles for application. When applying CERQual, we define methodological limitations as the extent to which there are concerns about the design or conduct of the primary studies that contributed evidence to an individual review finding. In this paper, we describe the methodological limitations component and its rationale and offer guidance on how to assess the methodological limitations of a review finding as part of the CERQual approach. This guidance outlines the information required to assess the methodological limitations component, the steps that need to be taken to assess the methodological limitations of data contributing to a review finding and examples of methodological limitations assessments. This paper provides guidance for review authors and others on undertaking an assessment of methodological limitations in the context of the CERQual

  20. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    International Nuclear Information System (INIS)

    Tarifeño-Saldivia, Ariel; Pavez, Cristian; Soto, Leopoldo; Mayer, Roberto E

    2015-01-01

    This work introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. The methodology is based on calibration of the counter in pulse mode and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from detection of the burst of neutrons. An improvement of more than one order of magnitude in the accuracy of a paraffin-wax-moderated ³He-filled tube is obtained by using this methodology with respect to previous calibration methods. (paper)
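
    In its simplest form, the statistical idea, inferring the number of detected events in a burst from the accumulated charge and the pulse-mode-calibrated single-event charge, reduces to a ratio with a propagated uncertainty. All numbers below are invented for illustration and do not come from the paper:

```python
# Estimate the number of neutrons detected in a burst from the accumulated
# charge Q, using the mean q and spread s of the single-event charge
# measured in pulse-mode calibration (all values invented).
Q = 4.8e-9          # accumulated charge from the burst [C]
q_mean = 1.2e-12    # mean charge per detected event [C]
q_std = 0.3e-12     # standard deviation of the single-event charge [C]

n_est = Q / q_mean  # estimated number of detected events

# Q is a sum of n independent single-event charges, so Var(Q) ~ n*q_std**2
# and the relative uncertainty of n_est shrinks as 1/sqrt(n):
rel_unc = (q_std / q_mean) / n_est ** 0.5
```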

  1. Network meta-analysis-highly attractive but more methodological research is needed

    Directory of Open Access Journals (Sweden)

    Singh Sonal

    2011-06-01

    Full Text Available Abstract Network meta-analysis, in the context of a systematic review, is a meta-analysis in which multiple treatments (that is, three or more are being compared using both direct comparisons of interventions within randomized controlled trials and indirect comparisons across trials based on a common comparator. To ensure validity of findings from network meta-analyses, the systematic review must be designed rigorously and conducted carefully. Aspects of designing and conducting a systematic review for network meta-analysis include defining the review question, specifying eligibility criteria, searching for and selecting studies, assessing risk of bias and quality of evidence, conducting a network meta-analysis, interpreting and reporting findings. This commentary summarizes the methodologic challenges and research opportunities for network meta-analysis relevant to each aspect of the systematic review process based on discussions at a network meta-analysis methodology meeting we hosted in May 2010 at the Johns Hopkins Bloomberg School of Public Health. Since this commentary reflects the discussion at that meeting, it is not intended to provide an overview of the field.
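
    The indirect-comparison step at the heart of network meta-analysis can be made concrete with the Bucher adjusted indirect comparison: given A-vs-B and C-vs-B trial estimates sharing the common comparator B, A vs C is estimated by differencing. The effect sizes and standard errors below are invented:

```python
# Bucher adjusted indirect comparison: estimate A vs C from A-vs-B and
# C-vs-B results sharing the common comparator B (log odds ratios, invented).
d_AB, se_AB = -0.30, 0.10    # log OR of A vs B and its standard error
d_CB, se_CB = -0.10, 0.12    # log OR of C vs B and its standard error

d_AC = d_AB - d_CB                        # indirect log OR of A vs C
se_AC = (se_AB ** 2 + se_CB ** 2) ** 0.5  # variances add for a difference

ci_low = d_AC - 1.96 * se_AC              # 95% confidence interval
ci_high = d_AC + 1.96 * se_AC
```

    The added variance term is why indirect evidence is weaker than direct evidence of the same size, one of the methodological concerns the commentary raises.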

  2. Methodology for evaluation of alternative technologies applied to nuclear fuel reprocessing

    International Nuclear Information System (INIS)

    Selvaduray, G.S.; Goldstein, M.K.; Anderson, R.N.

    1977-07-01

    An analytic methodology has been developed to compare the performance of various nuclear fuel reprocessing techniques for advanced fuel cycle applications, including low proliferation risk systems. The need to identify and compare those processes which have the versatility to handle the variety of fuel types expected to be in use in the next century is becoming increasingly imperative. This methodology allows processes at any stage of development to be compared and the effect of changing external conditions on the process to be assessed

  3. Boarding Team Networking on the Move: Applying Unattended Relay Nodes

    Science.gov (United States)

    2014-09-01

    Operations MPU Man Portable Unit NATO North Atlantic Treaty Organization NOC Network Operations Center OSI Open Source Initiative RHIB Rigid Hull...portion of world trade, while by economic value containerized goods form the largest portion [3]. Although economical value containers are most often...back of technological and logistical obstacles to inspect all containers, it is neither feasible nor economical to inspect every container entering

  4. Propagation Analysis for Wireless Sensor Networks Applied to Viticulture

    OpenAIRE

    Correia, Felipe Pinheiro; Alencar, Marcelo Sampaio de; Lopes, Waslon Terllizzie Araújo; Assis, Mauro Soares de; Leal, Brauliro Gonçalves

    2017-01-01

    Wireless sensor networks have been proposed as a solution to obtain soil and environment information in large distributed areas. The main economic activity of the São Francisco Valley region in the Northeast of Brazil is the irrigated fruit production. The region is one of the major agricultural regions of the country. Grape plantations receive large investments and provide good financial return. However, the region still lacks electronic sensing systems to extract adequate information from p...

  5. Applied and computational harmonic analysis on graphs and networks

    Science.gov (United States)

    Irion, Jeff; Saito, Naoki

    2015-09-01

    In recent years, the advent of new sensor technologies and social network infrastructure has provided huge opportunities and challenges for analyzing data recorded on such networks. In the case of data on regular lattices, computational harmonic analysis tools such as the Fourier and wavelet transforms have well-developed theories and proven track records of success. It is therefore quite important to extend such tools from the classical setting of regular lattices to the more general setting of graphs and networks. In this article, we first review basics of graph Laplacian matrices, whose eigenpairs are often interpreted as the frequencies and the Fourier basis vectors on a given graph. We point out, however, that such an interpretation is misleading unless the underlying graph is either an unweighted path or cycle. We then discuss our recent effort of constructing multiscale basis dictionaries on a graph, including the Hierarchical Graph Laplacian Eigenbasis Dictionary and the Generalized Haar-Walsh Wavelet Packet Dictionary, which are viewed as generalizations of the classical hierarchical block DCTs and the Haar-Walsh wavelet packets, respectively, to the graph setting. Finally, we demonstrate the usefulness of our dictionaries by using them to simultaneously segment and denoise 1-D noisy signals sampled on regular lattices, a problem where classical tools have difficulty.
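
    The caveat above — that the "frequency" reading of Laplacian eigenpairs is exact only for an unweighted path or cycle — can be checked numerically. This sketch builds the Laplacian of a small path graph and verifies its eigenvalues against the known closed form (the eigenvectors coincide, up to sign, with the DCT-II basis).

    ```python
    import numpy as np

    # Laplacian L = D - A of an unweighted path graph on n vertices
    n = 8
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    L = np.diag(A.sum(axis=1)) - A

    # Eigenpairs play the role of frequencies and Fourier modes on the graph
    w, V = np.linalg.eigh(L)

    # For the path graph the eigenvalues are 4*sin^2(k*pi/(2n)), k = 0..n-1,
    # the one setting where the classical frequency interpretation is exact
    k = np.arange(n)
    assert np.allclose(w, 4 * np.sin(k * np.pi / (2 * n)) ** 2)
    ```

    On an irregular weighted graph no such closed form exists, which is what motivates the multiscale dictionaries the article constructs.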

  6. Neural network stochastic simulation applied for quantifying uncertainties

    Directory of Open Access Journals (Sweden)

    N Foudil-Bey

    2016-09-01

    Full Text Available Geostatistical simulation methods are generally used to generate several realizations of physical properties in the subsurface; these methods are based on variogram analysis and are limited to measuring the correlation between variables at only two locations. In this paper, we propose a simulation of properties based on a supervised neural network trained on the existing drilling data set. The major advantage is that this method does not require a preliminary geostatistical study and takes several points into account. As a result, geological information and diverse geophysical data can be combined easily. To do this, we used a neural network with a feed-forward multi-layer perceptron architecture, trained with the back-propagation algorithm and a conjugate gradient technique to minimize the error of the network output. The learning process can create links between different variables; this relationship can be used to interpolate the properties on the one hand, or to generate several possible distributions of physical properties on the other, by changing each time a random value of the input neurons that is kept constant during the learning period. This method was tested on real data to simulate multiple realizations of the density and the magnetic susceptibility in three dimensions at the mining camp of Val d'Or, Québec (Canada).
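
    The supervised interpolation step can be sketched as a one-hidden-layer perceptron trained on drill-hole data. Everything below is illustrative: the "drill" samples are synthetic, and plain batch gradient descent stands in for the paper's back-propagation with conjugate gradients.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical drill-hole training set: 3-D coordinates -> measured property
    X = rng.uniform(-1, 1, (200, 3))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]   # stand-in for e.g. density

    # One-hidden-layer feed-forward perceptron
    W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, 16);      b2 = 0.0

    def predict(X):
        return np.tanh(X @ W1 + b1) @ W2 + b2

    mse0 = float(np.mean((predict(X) - y) ** 2))    # error before training

    for _ in range(2000):
        H = np.tanh(X @ W1 + b1)                    # hidden activations
        err = (H @ W2 + b2) - y                     # output-layer residual
        dH = np.outer(err, W2) * (1 - H ** 2)       # back-propagated error
        W2 -= 0.1 * H.T @ err / len(X); b2 -= 0.1 * err.mean()
        W1 -= 0.1 * X.T @ dH / len(X);  b1 -= 0.1 * dH.mean(axis=0)

    mse = float(np.mean((predict(X) - y) ** 2))     # error after training
    ```

    Once trained, `predict` interpolates the property at unsampled coordinates; randomizing one clamped input neuron, as the abstract describes, would yield alternative realizations.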

  7. Optimization of the ethanolysis of Raphanus sativus (L. Var.) crude oil applying the response surface methodology.

    Science.gov (United States)

    Domingos, Anderson Kurunczi; Saad, Emir Bolzani; Wilhelm, Helena Maria; Ramos, Luiz Pereira

    2008-04-01

    Raphanus sativus (L. Var.) is a perennial plant of the Brassicaceae (or Cruciferae) family whose oil has not been investigated in detail for biodiesel production, particularly when ethanol is used as the alcoholysis agent. In this work, response surface methodology (RSM) was used to determine the optimum condition for the ethanolysis of R. sativus crude oil. Three process variables were evaluated at two levels (2³ experimental design): the ethanol:oil molar ratio (6:1 and 12:1), the catalyst concentration in relation to oil mass (0.4 and 0.8 wt% NaOH) and the alcoholysis temperature (45 and 65 °C). When the experimental results were tentatively adjusted by linear regression, only 58.15% of the total variance was explained. Therefore, a quadratic model was investigated to improve the poor predictability of the linear model. To apply the quadratic model, the 2³ experimental design had to be expanded to a circumscribed central composite design. This allowed the development of a response surface that was able to predict 97.75% of the total variance of the system. Validation was obtained by performing one ethanolysis experiment at the conditions predicted by the model (38 °C, ethanol:oil molar ratio of 11.7:1 and 0.6 wt% NaOH). The resulting ester yield (104.10 wt%, or 99.10% of the theoretical yield of 105.04 wt%) was shown to be the highest among all conditions tested in this study. The second ethanolysis stage of the best RSM product required 50% less ethanol and 90% less catalyst. The amount of ethyl esters obtained after this procedure reached 94.5% of the theoretical yield. The resulting ethyl esters were shown to comply with most of the Brazilian biodiesel specification parameters except for oxidation stability; the addition of 500 ppm of BHT to the esters, however, brought them into compliance with the specification target of 6 h.
The application of 2 wt% Magnesol after the second ethanolysis stage eliminated the need for water washing and helped generate a
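
    The expansion of a 2³ factorial design into a circumscribed central composite design, followed by a full quadratic fit, can be sketched as follows. The coefficient values are invented for illustration and the response is noiseless; only the design geometry and the least-squares fit mirror the abstract's procedure.

    ```python
    import numpy as np

    # Full quadratic response surface in three coded factors:
    # y = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj)
    def quad_features(X):
        x1, x2, x3 = X.T
        return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                x1 * x1, x2 * x2, x3 * x3,
                                x1 * x2, x1 * x3, x2 * x3])

    # Circumscribed central composite design: 8 factorial points,
    # 6 axial points at +/- alpha, and centre replicates
    alpha = 1.682
    F = np.array([[i, j, k] for i in (-1, 1) for j in (-1, 1)
                  for k in (-1, 1)], float)
    Ax = np.array([[s * alpha if d == a else 0.0 for d in range(3)]
                   for a in range(3) for s in (-1, 1)])
    X = np.vstack([F, Ax, np.zeros((4, 3))])

    true_b = np.array([99, 2, 1.5, -1, -3, -2, -1.5, 0.5, 0.2, -0.4])
    y = quad_features(X) @ true_b            # hypothetical ester yields

    b, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
    ```

    The axial points are what make the pure quadratic terms estimable; with only the 2³ corners the fit would be confounded, which is why the design had to be expanded.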

  8. Applied Knowledge Management to Mitigate Cognitive Load in Network-Enabled Mission Command

    Science.gov (United States)

    2017-11-22

    ARL-TN-0859 ● NOV 2017 US Army Research Laboratory Applied Knowledge Management to Mitigate Cognitive Load in Network-Enabled...Applied Knowledge Management to Mitigate Cognitive Load in Network-Enabled Mission Command by John K Hawley Human Research and Engineering...REPORT TYPE Technical Note 3. DATES COVERED (From - To) 1 May 2016–20 April 2017 4. TITLE AND SUBTITLE Applied Knowledge Management to Mitigate

  9. Neural networks applied to discriminate botanical origin of honeys.

    Science.gov (United States)

    Anjos, Ofélia; Iglesias, Carla; Peres, Fátima; Martínez, Javier; García, Ángela; Taboada, Javier

    2015-05-15

    The aim of this work is to develop a tool based on neural networks to predict the botanical origin of honeys from physical and chemical parameters. The managed database consists of 49 honey samples of two classes: monofloral (almond, holm oak, sweet chestnut, eucalyptus, orange, rosemary, lavender, strawberry tree, thyme, heather, sunflower) and multifloral. The moisture content, electrical conductivity, water activity, ash content, pH, free acidity, colorimetric coordinates in CIELAB space (L*, a*, b*) and total phenol content of the honey samples were evaluated. These properties were considered as input variables of the predictive model. The neural network was optimised through several tests with different numbers of neurons in the hidden layer and with different input variables. The low error rates (5%) allow us to conclude that the botanical origin of honey can be reliably and quickly determined from the colorimetric information and the electrical conductivity of honey. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Applying fuzzy analytic network process in quality function deployment model

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Afsharkazemi

    2012-08-01

    Full Text Available In this paper, we propose an empirical study of QFD implementation in which fuzzy numbers are used to handle the uncertainty associated with different components of the proposed model. We implement a fuzzy analytic network process to find the relative importance of various criteria, and using fuzzy numbers we calculate the relative importance of these factors. The proposed model uses a fuzzy matrix and the house of quality to study product development in QFD, as well as the second phase, i.e. part deployment. In most research, the primary focus is only on customer requirements (CRs) when implementing quality function deployment, and other criteria such as production costs and manufacturing costs are disregarded. The results of using the fuzzy analytic network process based QFD model in the Daroupat packaging company to develop PVDC show that the most important indexes are being waterproof, resistant pill packages, and production cost. In addition, the PVDC coating is the most important index from the company experts' point of view.

  11. ECU@Risk, a methodology for risk management applied to MSMEs

    Directory of Open Access Journals (Sweden)

    Esteban Crespo Martínez

    2017-02-01

    Full Text Available Information is the most valuable element for any organization or person in this new century and, for many companies, a competitive advantage asset (Vásquez & Gabalán, 2015). However, whether through lack of knowledge about how to protect it properly or because of the complexity of the international standards that specify procedures for achieving an adequate level of protection, many organizations, especially in the MSME sector, fail to achieve this goal. Therefore, this study proposes a methodology for information security risk management that is applicable to the business and organizational environment of the Ecuadorian MSME sector. For this purpose, we analyze several methodologies: Magerit, CRAMM (CCTA Risk Analysis and Management Method), OCTAVE-S, the Microsoft Risk Guide, COBIT 5, and COSO III. These methodologies are used internationally in information risk management, in light of the industry frameworks ISO 27001, 27002, 27005 and 31000.

  12. Least squares methodology applied to LWR-PV damage dosimetry, experience and expectations

    International Nuclear Information System (INIS)

    Wagschal, J.J.; Broadhead, B.L.; Maerker, R.E.

    1979-01-01

    The development of an advanced methodology for Light Water Reactor (LWR) Pressure Vessel (PV) damage dosimetry applications is the subject of an ongoing EPRI-sponsored research project at ORNL. This methodology includes a generalized least squares approach to the combination of data. The data include measured foil activations, evaluated cross sections and calculated fluxes. The uncertainties associated with the data, as well as with the calculational methods, are an essential component of this methodology. Activation measurements in two NBS benchmark neutron fields (²⁵²Cf, ISNF) and in a prototypic reactor field (Oak Ridge Pool Critical Assembly, PCA) are being analyzed using a generalized least squares method. The sensitivity of the results to the representation of the uncertainties (covariances) was carefully checked. Cross-element covariances were found to be of utmost importance.
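
    The generalized least squares combination of calculated and measured data follows a standard adjustment formula, sketched below. The matrices and numbers are a toy illustration (two correlated parameters, one measurement of their sum), not values from the ORNL analysis.

    ```python
    import numpy as np

    def gls_adjust(x, C, A, m, V):
        """Generalized least-squares adjustment of prior data (x, C)
        with measurements m = A x + noise of covariance V."""
        S = A @ C @ A.T + V              # covariance of predicted responses
        K = C @ A.T @ np.linalg.inv(S)   # gain weighting prior vs. measurement
        x_new = x + K @ (m - A @ x)      # adjusted parameters
        C_new = C - K @ A @ C            # reduced (adjusted) covariance
        return x_new, C_new

    # Toy illustration: two correlated parameters, one measurement of their sum
    x = np.array([1.0, 2.0])
    C = np.array([[0.04, 0.01], [0.01, 0.09]])
    A = np.array([[1.0, 1.0]])
    m = np.array([3.3])
    V = np.array([[0.01]])
    x_new, C_new = gls_adjust(x, C, A, m, V)
    ```

    The adjustment pulls the parameters toward the measurement in proportion to their relative covariances, and the posterior covariance shrinks — the mechanism by which cross-element covariances end up mattering so much.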

  13. Radiochemical methodologies applied to analytical characterization of low and intermediate level wastes from nuclear power plants

    International Nuclear Information System (INIS)

    Monteiro, Roberto Pellacani G.; Júnior, Aluísio Souza R.; Kastner, Geraldo F.; Temba, Eliane S.C.; Oliveira, Thiago C. de; Amaral, Ângela M.; Franco, Milton B.

    2017-01-01

    The aim of this work is to present radiochemical methodologies developed at CDTN/CNEN in support of a program for the isotopic inventory of radioactive wastes from Brazilian Nuclear Power Plants. In this program the radionuclides ³H, ¹⁴C, ⁵⁵Fe, ⁵⁹Ni, ⁶³Ni, ⁹⁰Sr, ⁹³Zr, ⁹⁴Nb, ⁹⁹Tc, ¹²⁹I, ²³⁵U, ²³⁸U, ²³⁸Pu, ²³⁹⁺²⁴⁰Pu, ²⁴¹Pu, ²⁴²Pu, ²⁴¹Am, ²⁴²Cm and ²⁴³⁺²⁴⁴Cm were determined in Low Level Wastes (LLW) and Intermediate Level Wastes (ILW), and a protocol of analytical methodologies based on radiochemical separation steps and spectrometric and nuclear techniques was established. (author)

  14. Review of Artificial Neural Networks (ANN) applied to corrosion monitoring

    International Nuclear Information System (INIS)

    Mabbutt, S; Picton, P; Shaw, P; Black, S

    2012-01-01

    The assessment of corrosion within an engineering system often forms an important aspect of condition monitoring but it is a parameter that is inherently difficult to measure and predict. The electrochemical nature of the corrosion process allows precise measurements to be made. Advances in instruments, techniques and software have resulted in devices that can gather data and perform various analysis routines that provide parameters to identify corrosion type and corrosion rate. Although corrosion rates are important they are only useful where general or uniform corrosion dominates. However, pitting, inter-granular corrosion and environmentally assisted cracking (stress corrosion) are examples of corrosion mechanisms that can be dangerous and virtually invisible to the naked eye. Electrochemical noise (EN) monitoring is a very useful technique for detecting these types of corrosion and it is the only non-invasive electrochemical corrosion monitoring technique commonly available. Modern instrumentation is extremely sensitive to changes in the system and new experimental configurations for gathering EN data have been proven. In this paper the identification of localised corrosion by different data analysis routines has been reviewed. In particular the application of Artificial Neural Network (ANN) analysis to corrosion data is of key interest. In most instances data needs to be used with conventional theory to obtain meaningful information and relies on expert interpretation. Recently work has been carried out using artificial neural networks to investigate various types of corrosion data in attempts to predict corrosion behaviour with some success. This work aims to extend this earlier work to identify reliable electrochemical indicators of localised corrosion onset and propagation stages.

  15. Neural network applied to elemental archaeological Marajoara ceramic compositions

    International Nuclear Information System (INIS)

    Toyota, Rosimeiri G.; Munita, Casimiro S.; Boscarioli, Clodis

    2009-01-01

    In the last decades several analytical techniques have been used in archaeological ceramics studies. However, instrumental neutron activation analysis (INAA) employing gamma-ray spectrometry seems to be the most suitable technique, because it is a simple analytical method in its purely instrumental form. The purpose of this work was to determine the concentrations of Ce, Co, Cr, Cs, Eu, Fe, Hf, K, La, Lu, Na, Nd, Rb, Sb, Sc, Sm, Ta, Tb, Th, U, Yb, and Zn in 160 original Marajoara ceramic fragments by INAA. The Marajoara ceramic culture was sophisticated and well developed; it reached its peak between the 5th and 14th centuries on Marajó Island, located in the Amazon River delta in Brazil. The purpose of the quantitative data was to identify compositionally homogeneous groups within the database. With this in mind, the data set was first converted to base-10 logarithms to compensate for the differences in magnitude between major and trace elements, and also to yield a closer-to-normal distribution for several trace elements. After that, the data were analyzed using the Mahalanobis distance, with Wilks' lambda as the critical value, to identify outliers. The similarities among the samples were studied by means of cluster analysis, principal components analysis and discriminant analysis. Additional confirmation of these groups was made using elemental concentration bivariate plots. The results showed that there were two very well defined groups in the data set. In addition, the database was studied using an artificial neural network with an unsupervised learning strategy known as self-organizing maps to classify the Marajoara ceramics. The experiments carried out showed that a self-organizing map artificial neural network is capable of discriminating ceramic fragments as well as multivariate statistical methods can, and again the results showed that the database was formed by two groups. (author)
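
    The preprocessing pipeline — log10 transform followed by Mahalanobis-distance outlier screening — can be sketched as below. The concentration data are synthetic (the real study used 22 elements measured by INAA on 160 sherds), and a plain quantile cutoff stands in for the Wilks' lambda criterion.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical element concentrations (ppm) for 60 fragments, 4 elements
    conc = rng.lognormal(mean=2.0, sigma=0.3, size=(60, 4))
    X = np.log10(conc)            # log10 evens out major- vs. trace-element scales

    mu = X.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(X, rowvar=False))
    # Squared Mahalanobis distance of every fragment from the group centroid
    d2 = np.einsum('ij,jk,ik->i', X - mu, Sinv, X - mu)

    # The paper screens outliers with Wilks' lambda as the critical value;
    # here a simple quantile cutoff stands in for that criterion
    outliers = np.where(d2 > np.percentile(d2, 97.5))[0]
    ```

    A useful identity for checking such code: with the sample covariance (ddof = 1), the squared distances always average to p(n−1)/n, here 4·59/60.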

  16. A Methodology to Develop Entrepreneurial Networks: The Tech Ecosystem of Six African Cities

    Science.gov (United States)

    2014-11-01

    Information Center. Greve, A. and Salaff, J. W. (2003), Social Networks and Entrepreneurship . Entrepreneurship Theory and Practice , 28: 1–22. doi...our methodology, the team quickly realized that it would have to focus on a fairly narrow sub-set of entrepreneurship . Based on relationships we have...Social Capital: A Theory of Structure and Action. Cambridge University Press, New York 2001. Liu, Y., Slotine, J., and Barabasi, A. (2011

  17. Spatial vulnerability assessment : methodology for the community and district level applied to floods in Buzi, Mozambique

    International Nuclear Information System (INIS)

    Kienberger, S.

    2010-01-01

    Within this thesis a conceptual model is presented which allows a vulnerability assessment to be defined according to its temporal and spatial scale and within a multi-dimensional framework; this should help in designing and developing appropriate methodologies and in adapting concepts to the required scale of implementation. Building on past experience with participatory approaches to community mapping in the District of Buzi in Mozambique, the relevance of such approaches for a community-based disaster risk reduction framework is analysed. Finally, methodologies are introduced which allow the assessment of vulnerability and the prioritisation of vulnerability factors at the community level. At the district level, homogeneous vulnerability regions are identified through the application of integrated modelling approaches which build on expert knowledge and weightings. A set of indicators is proposed which allows the modelling of vulnerability in a data-scarce environment. In developing these different methodologies for the community and district levels, it became clear that monitoring vulnerability and identifying trends are essential to the objective of continuous and improved disaster risk management. In addition to the technical and methodological challenges discussed in this thesis, commitment from different stakeholders and the availability of capacity in different domains are essential for the successful, practical implementation of the developed approaches. (author)

  18. Voltage regulation in MV networks with dispersed generations by a neural-based multiobjective methodology

    Energy Technology Data Exchange (ETDEWEB)

    Galdi, Vincenzo [Dipartimento di Ingegneria dell' Informazione e Ingegneria Elettrica, Universita degli studi di Salerno, Via Ponte Don Melillo 1, 84084 Fisciano (Italy); Vaccaro, Alfredo; Villacci, Domenico [Dipartimento di Ingegneria, Universita degli Studi del Sannio, Piazza Roma 21, 82100 Benevento (Italy)

    2008-05-15

    This paper puts forward the role of learning techniques in addressing the problem of efficient and optimal centralized voltage control in distribution networks equipped with dispersed generation systems (DGSs). The proposed methodology employs a radial basis function network (RBFN) to identify the multidimensional nonlinear mapping between a vector of observable variables describing the network operating point and the optimal set points of the voltage regulating devices. The RBFN is trained on numerical data generated by solving the voltage regulation problem for a set of network operating points with a rigorous multiobjective solution methodology. The RBFN performance is continuously monitored by a supervisor process that notifies when a more accurate solution of the voltage regulation problem is needed, either because nonoptimal network operating conditions are detected (ex post monitoring) or because of excessive distances between the actual network state and the neurons' centres (ex ante monitoring). A more rigorous problem solution, if required, can be obtained by solving the voltage regulation problem with a conventional multiobjective optimization technique. This new solution, in conjunction with the corresponding input vector, is then adopted as a new training sample to adapt the RBFN. This online training process allows the RBFN to (i) adaptively learn the more representative domain space regions of the input/output mapping without requiring prior knowledge of a complete and representative training set, and (ii) manage effectively any time-varying phenomena affecting this mapping. The results obtained by simulating the regulation policy in the case of a medium-voltage network are very promising. (author)
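
    A minimal RBFN of the kind described — Gaussian hidden neurons with a linear output layer fitted by least squares — is sketched below. The operating-point data and the "optimal set point" target are hypothetical, the centres are fixed rather than tuned, and the distance-to-nearest-centre check only gestures at the paper's ex ante monitoring.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def rbf_design(X, centres, width):
        # Gaussian activation of each hidden neuron, plus a bias column
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        return np.column_stack([np.ones(len(X)), np.exp(-d2 / (2 * width ** 2))])

    # Hypothetical operating points (feeder load, DG output) and the optimal
    # regulator set point computed offline by a multiobjective solver
    X = rng.uniform(0, 1, (300, 2))
    y = 1.0 + 0.05 * np.sin(2 * np.pi * X[:, 0]) - 0.03 * X[:, 1]

    centres = rng.uniform(0, 1, (25, 2))          # fixed centres for the sketch
    Phi = rbf_design(X, centres, 0.3)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output layer

    fit_mse = float(np.mean((Phi @ w - y) ** 2))

    # "Ex ante monitoring" flavour: distance of a new state to the nearest centre
    x_new = np.array([[0.5, 0.5]])
    gap = float(np.sqrt(((x_new - centres) ** 2).sum(-1).min()))
    set_point = float(rbf_design(x_new, centres, 0.3) @ w)
    ```

    When `gap` exceeds a threshold, the supervisor would fall back to the rigorous multiobjective solver and feed the new solution back as a training sample.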

  19. A Novel Water Supply Network Sectorization Methodology Based on a Complete Economic Analysis, Including Uncertainties

    Directory of Open Access Journals (Sweden)

    Enrique Campbell

    2016-04-01

    Full Text Available The core idea behind sectorization of Water Supply Networks (WSNs is to establish areas partially isolated from the rest of the network to improve operational control. Besides the benefits associated with sectorization, some drawbacks must be taken into consideration by water operators: the economic investment associated with both boundary valves and flowmeters and the reduction of both pressure and system resilience. The target of sectorization is to properly balance these negative and positive aspects. Sectorization methodologies addressing the economic aspects mainly consider costs of valves and flowmeters and of energy, and the benefits in terms of water saving linked to pressure reduction. However, sectorization entails other benefits, such as the reduction of domestic consumption, the reduction of burst frequency and the enhanced capacity to detect and intervene over future leakage events. We implement a development proposed by the International Water Association (IWA to estimate the aforementioned benefits. Such a development is integrated in a novel sectorization methodology based on a social network community detection algorithm, combined with a genetic algorithm optimization method and Monte Carlo simulation. The methodology is implemented over a fraction of the WSN of Managua city, capital of Nicaragua, generating a net benefit of 25,572 $/year.
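
    The Monte Carlo treatment of the economic uncertainties can be sketched in isolation from the community-detection and genetic-algorithm stages. All distributions and figures below are invented for illustration; they are not the Managua case-study values.

    ```python
    import random
    import statistics

    random.seed(42)

    def annual_net_benefit():
        # Hypothetical uncertain inputs for one candidate sectorization
        water_saving = random.gauss(30_000, 5_000)    # $/yr, pressure reduction
        burst_reduction = random.gauss(8_000, 2_000)  # $/yr, fewer repairs
        energy_cost = random.gauss(4_000, 1_000)      # $/yr, extra pumping
        capex = 120_000 * random.uniform(0.9, 1.1)    # valves + flowmeters
        annualized_capex = capex * 0.08               # crude annuity factor
        return water_saving + burst_reduction - energy_cost - annualized_capex

    samples = [annual_net_benefit() for _ in range(10_000)]
    mean_benefit = statistics.fmean(samples)
    p_loss = sum(s < 0 for s in samples) / len(samples)
    ```

    Reporting both the expected net benefit and the probability of a net loss is what makes the comparison of candidate sector layouts robust to the input uncertainties.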

  20. A methodology for the synthesis of heat exchanger networks having large numbers of uncertain parameters

    International Nuclear Information System (INIS)

    Novak Pintarič, Zorka; Kravanja, Zdravko

    2015-01-01

    This paper presents a robust computational methodology for the synthesis and design of flexible HEN (Heat Exchanger Networks) having large numbers of uncertain parameters. This methodology combines several heuristic methods which progressively lead to a flexible HEN design at a specific level of confidence. During the first step, a HEN topology is generated under nominal conditions followed by determining those points critical for flexibility. A significantly reduced multi-scenario model for flexible HEN design is formulated at the nominal point with the flexibility constraints at the critical points. The optimal design obtained is tested by stochastic Monte Carlo optimization and the flexibility index through solving one-scenario problems within a loop. This presented methodology is novel regarding the enormous reduction of scenarios in HEN design problems, and computational effort. Despite several simplifications, the capability of designing flexible HENs with large numbers of uncertain parameters, which are typical throughout industry, is not compromised. An illustrative case study is presented for flexible HEN synthesis comprising 42 uncertain parameters. - Highlights: • Methodology for HEN (Heat Exchanger Network) design under uncertainty is presented. • The main benefit is solving HENs having large numbers of uncertain parameters. • Drastically reduced multi-scenario HEN design problem is formulated through several steps. • Flexibility of HEN is guaranteed at a specific level of confidence.

  1. Bayesian networks applied to process diagnostics. Applications in energy industry

    Energy Technology Data Exchange (ETDEWEB)

    Widarsson, Bjoern (ed.); Karlsson, Christer; Dahlquist, Erik [Maelardalen Univ., Vaesteraas (Sweden); Nielsen, Thomas D.; Jensen, Finn V. [Aalborg Univ. (Denmark)

    2004-10-01

    Uncertainty in process operation occurs frequently in the heat and power industry. This makes it hard to detect the occurrence of an abnormal process state from a number of process signals (measurements), or to find the correct cause of an abnormality. Among several other methods, Bayesian Networks (BN) are a method for building a model which can handle uncertainty in both the process signals and the process itself. The purpose of this project is to investigate the possibilities of using BNs for fault detection and diagnostics in combined heat and power plants through the execution of two different applications. Participants from Aalborg University contribute knowledge of BNs, and participants from Maelardalen University have experience in modelling heat and power applications. The co-operation also includes two energy companies, Elsam A/S (Nordjyllandsverket) and Maelarenergi AB (Vaesteraas CHP plant), where the two applications were made with support from the plant personnel. The project resulted in two quite different applications. At Nordjyllandsverket, an application based (due to the lack of process knowledge) on pure operation data was built, capable of detecting an abnormal process state in a coal mill. Detection is made through a conflict analysis when process signals are entered into a model built by analysing the operation database. The application at Maelarenergi is built with a combination of process knowledge and operation data and can detect various faults caused by the fuel. The process knowledge is used to build a causal network structure, and the structure is then trained on data from the operation database. Both applications were made as offline applications, but they are ready for being run online. The performance of fault detection and diagnostics is good, but a lack of abnormal process states with known causes reduces the evaluation possibilities.
Advantages with combining expert knowledge of the process with operation data are the possibility to represent

  2. A bio-inspired methodology of identifying influential nodes in complex networks.

    Directory of Open Access Journals (Sweden)

    Cai Gao

    Full Text Available How to identify influential nodes is a key issue in complex networks. The degree centrality is simple but incapable of reflecting the global characteristics of networks. Betweenness centrality and closeness centrality do not consider the location of nodes in the network, and the semi-local centrality, LeaderRank and PageRank approaches can only be applied to unweighted networks. In this paper, a bio-inspired centrality measure model is proposed, which combines the Physarum centrality with the K-shell index obtained by K-shell decomposition analysis, to identify influential nodes in weighted networks. We then use the Susceptible-Infected (SI) model to evaluate the performance. Examples and applications are given to demonstrate the adaptivity and efficiency of the proposed method. In addition, the results are compared with existing methods.
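
    The K-shell index used above is computed by iterative degree pruning. A minimal sketch for an unweighted toy graph follows (the paper's contribution is combining this index with Physarum centrality on weighted networks, which is not reproduced here).

    ```python
    def k_shell(adj):
        """K-shell index of every node, by iterative degree pruning."""
        adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
        shell, k = {}, 0
        while adj:
            k += 1
            # repeatedly strip nodes of degree <= k until none remain
            while True:
                peel = [u for u, vs in adj.items() if len(vs) <= k]
                if not peel:
                    break
                for u in peel:
                    shell[u] = k
                    for v in adj.pop(u):
                        adj[v].discard(u)
        return shell

    # Toy graph: a triangle (nodes 0, 1, 2) with two pendant nodes attached
    g = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 4}, 3: {0}, 4: {2}}
    print(k_shell(g))  # {3: 1, 4: 1, 0: 2, 1: 2, 2: 2}
    ```

    The pendant nodes land in shell 1 and the triangle in shell 2, matching the intuition that core location, not raw degree, marks influence.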

  3. Propagation Analysis for Wireless Sensor Networks Applied to Viticulture

    Directory of Open Access Journals (Sweden)

    Felipe Pinheiro Correia

    2017-01-01

    Full Text Available Wireless sensor networks have been proposed as a solution to obtain soil and environment information in large distributed areas. The main economic activity of the São Francisco Valley region in the Northeast of Brazil is the irrigated fruit production. The region is one of the major agricultural regions of the country. Grape plantations receive large investments and provide good financial return. However, the region still lacks electronic sensing systems to extract adequate information from plantations. Considering these facts, this paper presents a study of path loss in grape plantations for a 2.4 GHz operating frequency. In order to determine the position of the sensor nodes, the research dealt with various environmental factors that influence the intensity of the received signal. It has been noticed that main plantation aisles favor the guided propagation, and the vegetation along the secondary plantation aisles compromises the propagation. Diffraction over the grape trees is the main propagation mechanism in the diagonal propagation path. Transmission carried out above the vineyard showed that reflection on the top of the trees is the main mechanism.
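
    A log-distance path-loss model is the usual way to turn such measurements into a planning tool; a sketch at the paper's 2.4 GHz operating frequency follows. The path-loss exponents are assumptions for illustration (roughly free-space-like along the guided main aisles, higher through vegetation), not the fitted values from the vineyard campaign.

    ```python
    import math

    F = 2.4e9    # operating frequency, Hz
    C = 3e8      # speed of light, m/s

    def path_loss_db(d, n, d0=1.0):
        """Log-distance model: free-space loss at reference distance d0,
        plus 10*n*log10(d/d0) with path-loss exponent n."""
        fspl_d0 = 20 * math.log10(4 * math.pi * d0 * F / C)
        return fspl_d0 + 10 * n * math.log10(d / d0)

    # Illustrative exponents for two of the propagation situations described
    main_aisle = path_loss_db(50, 2.0)   # guided propagation along main aisles
    diagonal = path_loss_db(50, 3.5)     # diffraction over the grape trees
    ```

    Fitting `n` per direction from received-signal measurements is what would let a designer place sensor nodes so that every link stays above the receiver sensitivity.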

  4. Applying deep neural networks to HEP job classification

    International Nuclear Information System (INIS)

    Wang, L; Shi, J; Yan, X

    2015-01-01

    The cluster of the IHEP computing center is a middle-sized computing system which provides 10 thousand CPU cores, 5 PB of disk storage, and 40 GB/s of IO throughput. Its 1000+ users come from a variety of HEP experiments. In such a system, job classification is an indispensable task. Although an experienced administrator can classify a HEP job by its IO pattern, it is impractical to classify millions of jobs manually. We present how to solve this problem with deep neural networks in a supervised learning way. First, we built a training data set of 320K samples with an IO-pattern collection agent and a semi-automatic process of sample labelling. Then we implemented and trained DNN models with Torch. During model training, several meta-parameters were tuned with cross-validation. Test results show that a 5-hidden-layer DNN model achieves 96% precision on the classification task. By comparison, it outperforms a linear model by 8% in precision. (paper)

  5. Methodology and boundary conditions applied to the analysis on internal flooding for Kozloduy NPP units 5 and 6

    International Nuclear Information System (INIS)

    Demireva, E.; Goranov, S.; Horstmann, R.

    2004-01-01

    Within the Modernization Program of Units 5 and 6 of Kozloduy NPP, a comprehensive analysis of internal flooding has been carried out for the reactor building outside the containment and for the turbine hall by FRAMATOME ANP and ENPRO Consult. The objective of this presentation is to provide information on the applied methodology and boundary conditions. A separate report called 'Methodology and boundary conditions' has been elaborated to provide the foundation for the study. The methodology report provides definitions and guidance on the following topics: scope of the study; safety objectives; basic assumptions and postulates (plant conditions, grace periods for manual actions, single failure postulate, etc.); sources of flooding (postulated piping leaks and ruptures, malfunctions and personnel errors); main activities of the flooding analysis; and study conclusions and suggestions of remedial measures. (authors)

  6. Modeling of Throughput in Production Lines Using Response Surface Methodology and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Federico Nuñez-Piña

    2018-01-01

    Full Text Available The problem of assigning buffers in a production line to obtain an optimum production rate is a combinatorial problem of the NP-hard type, known as the Buffer Allocation Problem. It is of great importance to designers of production systems due to the costs involved in terms of space requirements. In this work, the relationship among the number of buffer slots, the number of workstations, and the production rate is studied. Response surface methodology and an artificial neural network were used to develop predictive models to find optimal throughput values. 360 production rate values for different numbers of buffer slots and workstations were used to obtain a fourth-order mathematical model and a four-hidden-layer artificial neural network. Both models perform well in predicting the throughput, although the artificial neural network model shows a better fit (R=1.0000) than the response surface methodology (R=0.9996). Moreover, the artificial neural network produces better predictions for data not used in model construction. Finally, this study can be used as a guide to forecast the maximum or near-maximum throughput of production lines, taking into account the buffer size and the number of machines in the line.
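    A response surface model is a polynomial fitted to observed responses by least squares. As a simplified, hypothetical illustration of the idea (one factor, second order, rather than the paper's fourth-order two-factor model), a quadratic fit via the normal equations:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x**2 via the normal
    equations (X^T X) c = X^T y, solved by Gaussian elimination."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, y in zip(xs, ys):
        row = [1.0, x, x * x]  # design-matrix row [1, x, x^2]
        for i in range(3):
            b[i] += row[i] * y
            for j in range(3):
                A[i][j] += row[i] * row[j]
    # Forward elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for j in range(col, 3):
                A[r][j] -= f * A[col][j]
            b[r] -= f * b[col]
    # Back substitution.
    c = [0.0] * 3
    for i in range(2, -1, -1):
        s = b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))
        c[i] = s / A[i][i]
    return c

# Hypothetical throughput rising with buffer size but saturating:
buffers = [1, 2, 3, 4, 5, 6]
rates = [0.50 + 0.12 * n - 0.01 * n * n for n in buffers]
c0, c1, c2 = fit_quadratic(buffers, rates)
```

Since the sample data are exactly quadratic, the fit recovers the generating coefficients.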

  7. Radiochemical methodologies applied to analytical characterization of low and intermediate level wastes from nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Monteiro, Roberto Pellacani G.; Júnior, Aluísio Souza R.; Kastner, Geraldo F.; Temba, Eliane S.C.; Oliveira, Thiago C. de; Amaral, Ângela M.; Franco, Milton B., E-mail: rpgm@cdtn.br, E-mail: reisas@cdtn.br, E-mail: gfk@cdtn.br, E-mail: esct@cdtn.br, E-mail: tco@cdtn.br, E-mail: ama@cdtn.br, E-mail: francom@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2017-07-01

    The aim of this work is to present radiochemical methodologies developed at CDTN/CNEN to support a program for the isotopic inventory of radioactive wastes from Brazilian nuclear power plants. In this program the radionuclides {sup 3}H, {sup 14}C, {sup 55}Fe, {sup 59}Ni, {sup 63}Ni, {sup 90}Sr, {sup 93}Zr, {sup 94}Nb, {sup 99}Tc, {sup 129}I, {sup 235}U, {sup 238}U, {sup 238}Pu, {sup 239}+{sup 240}Pu, {sup 241}Pu, {sup 242}Pu, {sup 241}Am, {sup 242}Cm and {sup 243}+{sup 244}Cm were determined in Low Level Wastes (LLW) and Intermediate Level Wastes (ILW), and a protocol of analytical methodologies based on radiochemical separation steps and spectrometric and nuclear techniques was established. (author)

  8. Proposal for an Experimental Methodology for Evaluation of Natural Lighting Systems Applied in Buildings

    Directory of Open Access Journals (Sweden)

    Anderson Diogo Spacek

    2017-07-01

    Full Text Available This work has the objective of developing a methodology for the evaluation of indoor natural lighting systems which, with speed and practicality, provides a reliable result about the quality and performance of the proposed system under real conditions of use. The methodology is based on the construction of two real-size test environments, which will be subjected to a natural lighting system based on reflective tubes made from recycled material and to a commercial system already certified and consolidated, creating the possibility of comparison. Furthermore, the data acquired in the test environments will be examined in light of the values of solar radiation obtained from a digital meteorological station, such that it is possible to stipulate the lighting capacity of the systems at different times of the year.

  9. Analysis Planning Methodology: For Thesis, Joint Applied Project, & MBA Research Reports

    OpenAIRE

    Naegle, Brad R.

    2010-01-01

    Acquisition Research Handbook Series Purpose: This guide provides the graduate student researcher—you—with techniques and advice on creating an effective analysis plan, and it provides methods for focusing the data-collection effort based on that analysis plan. As a side benefit, this analysis planning methodology will help you to properly scope the research effort and will provide you with insight for changes in that effort. The information presented herein was supported b...

  10. Methodology supporting production control in a foundry applying modern DISAMATIC molding line

    Directory of Open Access Journals (Sweden)

    Sika Robert

    2017-01-01

    Full Text Available The paper presents a methodology for production control using statistical methods under foundry conditions, with an automatic DISAMATIC molding line. The authors were inspired by many years of experience implementing IT tools for foundries. They noticed a lack of basic IT tools dedicated to specific casting processes that would greatly facilitate their oversight and thus improve the quality of manufactured products. More and more systems are installed in the ERP or CAx area, but they integrate processes only partially, mainly in the areas of technology design and business management (finance and controlling). Monitoring of foundry processes can generate a large amount of process-related data. This is particularly noticeable in automated processes. An example is the modern DISAMATIC molding line, which integrates several casting processes, such as mold preparation, assembly, pouring and shake-out. The authors propose a methodology that supports the control of the above-mentioned foundry processes using statistical methods. Such an approach can be successfully used, for example, during periodic external audits. The methodology was implemented in the innovative DISAM-ProdC computer tool.
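    A classic statistical method for this kind of production control is the Shewhart p-chart (fraction defective per sample). The abstract does not name the specific charts used, so the following is a minimal sketch with hypothetical mold-defect counts:

```python
import math

def p_chart_limits(defectives, sample_size):
    """3-sigma control limits for a p-chart: pbar +/- 3*sqrt(pbar(1-pbar)/n)."""
    pbar = sum(defectives) / (len(defectives) * sample_size)
    sigma = math.sqrt(pbar * (1 - pbar) / sample_size)
    return pbar, max(0.0, pbar - 3 * sigma), pbar + 3 * sigma

def out_of_control(defectives, sample_size):
    """Indices of samples whose defect fraction falls outside the limits."""
    pbar, lcl, ucl = p_chart_limits(defectives, sample_size)
    return [i for i, d in enumerate(defectives)
            if not lcl <= d / sample_size <= ucl]

# Hypothetical daily counts of defective molds out of 200 inspected;
# day 6 spikes and should be flagged for investigation:
daily_defects = [6, 5, 7, 4, 6, 5, 18, 6]
flagged = out_of_control(daily_defects, 200)
```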

  11. E Pluribus Analysis: Applying a Superforecasting Methodology to the Detection of Homegrown Violence

    Science.gov (United States)

    2018-03-01

    act of violence is not supported with other predicate crimes such as money laundering, arms trafficking, possession of banned substances, or other...apparatuses, do not rely on elaborate support networks or detectable money trails, and often select targets that are difficult to anticipate and defend. In

  12. METHODOLOGY FOR FORMING MUTUALLY BENEFICIAL NETWORK INTERACTION BETWEEN SMALL CITIES AND DISTRICT CENTRES

    Directory of Open Access Journals (Sweden)

    Nikolay A. Ivanov

    2017-01-01

    Full Text Available Abstract. Objectives The aim of the study is to develop a methodology for networking between small towns and regional centres on the basis of developing areas of mutual benefit. It is important to assess the possibility of cooperation between small towns, regional centres and local self-government bodies on the example of individual territorial entities of Russia, in the context of the formation and strengthening of networks and support for territorial development. Methods Systemic and functional methodical approaches were taken. The modelling of socio-economic processes provides a visual representation of the direction of positive changes for small towns and regional centres of selected Subjects of the Russian Federation. Results Specific examples of cooperation between small towns and district centres are revealed in several areas, including education, trade and public catering, and tourist and recreational activities. The supporting role of subsystems, including management, regulatory activity, transport and logistics, is described. Schemes by which mutually beneficial network interaction is formed are characterised in terms of the specific advantages accruing to each network subject. The economic benefits of realising interaction between small cities and regional centres are discussed. The methodology is based on assessing the access of cities to commutation, on which basis contemporary regional and city networks are formed. Conclusion On the basis of the conducted study, a list of areas for mutually beneficial networking between small towns and district centres has been identified, allowing appropriate changes to be made in regional economic policies in terms of programmes aimed at the development of regions and small towns, including those suffering from economic depression.

  13. Development of Geometry Optimization Methodology with In-house CFD code, and Challenge in Applying to Fuel Assembly

    International Nuclear Information System (INIS)

    Jeong, J. H.; Lee, K. L.

    2016-01-01

    The wire spacer has important roles: to avoid collisions between adjacent rods, to mitigate vortex-induced vibration, and to enhance convective heat transfer through the secondary flow it induces. Many experimental and numerical works have been conducted to understand the thermal-hydraulics of wire-wrapped fuel bundles, and the recent enormous growth in computing capability now allows three-dimensional simulation of their thermal-hydraulics. In this study, a geometry optimization methodology based on a RANS in-house CFD (Computational Fluid Dynamics) code has been successfully developed under air conditions. In order to apply the developed methodology to a fuel assembly, a GGI (General Grid Interface) function was developed for the in-house CFD code, as in CFX. Furthermore, three-dimensional flow fields calculated with the in-house code were compared with those calculated with the general-purpose commercial CFD solver CFX. Even though both analyses were conducted on the same computational meshes, numerical error due to the GGI function occurred locally only in the CFX solver, around the rod surface and in the boundary region between the inner and outer fluid regions.

  14. Economic evaluation of health promotion interventions for older people: do applied economic studies meet the methodological challenges?

    Science.gov (United States)

    Huter, Kai; Dubas-Jakóbczyk, Katarzyna; Kocot, Ewa; Kissimova-Skarbek, Katarzyna; Rothgang, Heinz

    2018-01-01

    In the light of demographic developments, health promotion interventions for older people are gaining importance. In addition to methodological challenges arising from the economic evaluation of health promotion interventions in general, there are specific methodological problems for the particular target group of older people. Four main methodological challenges in particular are discussed in the literature. They concern the measurement and valuation of informal caregiving, accounting for productivity costs, the effects of unrelated costs in added life years, and the inclusion of 'beyond-health' benefits. This paper focuses on the question of whether and to what extent specific methodological requirements are actually met in applied health economic evaluations. Following a systematic review of pertinent health economic evaluations, the included studies are analysed on the basis of four assessment criteria derived from methodological debates on the economic evaluation of health promotion interventions in general and economic evaluations targeting older people in particular. Of the 37 studies included in the systematic review, only very few include cost and outcome categories of specific relevance to the assessment of health promotion interventions for older people. The few studies that consider these aspects use very heterogeneous methods, so there is no common methodological standard. There is a strong need for the development of guidelines to achieve better comparability and to include cost categories and outcomes that are relevant for older people. Disregarding these methodological obstacles could implicitly lead to discrimination against the elderly in terms of health promotion and disease prevention and, hence, an age-based rationing of public health care.

  15. Digital processing methodology applied to exploring of radiological images; Metodologia de processamento digital aplicada a exploracao de imagens radiologicas

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Cristiane de Queiroz

    2004-07-01

    In this work, digital image processing is applied as an automatic computational method for exploring radiological images. An automatic routine was developed, based on segmentation and post-processing techniques, for radiological images acquired from an arrangement consisting of an X-ray tube, a molybdenum target and filter of 0.4 mm and 0.03 mm, respectively, and a CCD detector. The efficiency of the developed methodology is shown through a case study in which internal injuries in mangoes are automatically detected and monitored. This methodology is a possible tool to be introduced into the post-harvest process in packing houses. A dichotomic test was applied to evaluate the efficiency of the method. The results show 87.7% correct diagnoses and 12.3% failures, with a sensitivity of 93% and a specificity of 80%. (author)
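    The reported sensitivity and specificity follow directly from confusion-matrix counts. A minimal sketch with hypothetical counts chosen to mirror the 93%/80% figures (the actual case mix is not given in the abstract):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts: 100 injured and 100 healthy fruit.
sens, spec, acc = diagnostic_metrics(tp=93, fn=7, tn=80, fp=20)
```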

  16. Complex Network Theory Applied to the Growth of Kuala Lumpur's Public Urban Rail Transit Network.

    Directory of Open Access Journals (Sweden)

    Rui Ding

    Full Text Available Recently, the number of studies involving complex network applications in transportation has increased steadily as scholars from various fields analyze traffic networks. Nonetheless, research on rail network growth is relatively rare. This research examines the evolution of the Public Urban Rail Transit Networks of Kuala Lumpur (PURTNoKL) based on complex network theory and covers both the topological structure of the rail system and future trends in network growth. In addition, network performance when facing different attack strategies is also assessed. Three topological network characteristics are considered: connections, clustering and centrality. In PURTNoKL, we found that the total number of nodes and edges exhibit a linear relationship and that the average degree stays within the interval [2.0488, 2.6774] with heavy-tailed distributions. The evolutionary process shows that the cumulative probability distribution (CPD) of degree and the average shortest path length show good fit with exponential distribution and normal distribution, respectively. Moreover, PURTNoKL exhibits clear cluster characteristics; most of the nodes have a 2-core value, and the CPDs of the centrality's closeness and betweenness follow a normal distribution function and an exponential distribution, respectively. Finally, we discuss four different types of network growth styles and the line extension process, which reveal that the rail network's growth is likely based on the nodes with the largest shortest-path lengths and that network protection should emphasize those nodes with the largest degrees and the highest betweenness values. This research may enhance the networkability of the rail system and better shape the future growth of public rail networks.
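    The degree and shortest-path statistics above can be computed with breadth-first search. A minimal sketch on a toy five-station graph (a trunk line with one branch, not the actual Kuala Lumpur network):

```python
from collections import deque

def bfs_lengths(adj, src):
    """Shortest-path hop counts from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def network_stats(adj):
    """Average degree and average shortest path length of a connected,
    unweighted, undirected network."""
    nodes = list(adj)
    avg_degree = sum(len(adj[u]) for u in nodes) / len(nodes)
    total = pairs = 0
    for u in nodes:
        d = bfs_lengths(adj, u)
        for v in nodes:
            if v != u:
                total += d[v]
                pairs += 1
    return avg_degree, total / pairs

# Toy transit graph: trunk line A-B-C-D with a branch B-E.
rail = {
    "A": ["B"], "B": ["A", "C", "E"], "C": ["B", "D"],
    "D": ["C"], "E": ["B"],
}
avg_deg, avg_spl = network_stats(rail)
```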

  17. Complex Network Theory Applied to the Growth of Kuala Lumpur's Public Urban Rail Transit Network.

    Science.gov (United States)

    Ding, Rui; Ujang, Norsidah; Hamid, Hussain Bin; Wu, Jianjun

    2015-01-01

    Recently, the number of studies involving complex network applications in transportation has increased steadily as scholars from various fields analyze traffic networks. Nonetheless, research on rail network growth is relatively rare. This research examines the evolution of the Public Urban Rail Transit Networks of Kuala Lumpur (PURTNoKL) based on complex network theory and covers both the topological structure of the rail system and future trends in network growth. In addition, network performance when facing different attack strategies is also assessed. Three topological network characteristics are considered: connections, clustering and centrality. In PURTNoKL, we found that the total number of nodes and edges exhibit a linear relationship and that the average degree stays within the interval [2.0488, 2.6774] with heavy-tailed distributions. The evolutionary process shows that the cumulative probability distribution (CPD) of degree and the average shortest path length show good fit with exponential distribution and normal distribution, respectively. Moreover, PURTNoKL exhibits clear cluster characteristics; most of the nodes have a 2-core value, and the CPDs of the centrality's closeness and betweenness follow a normal distribution function and an exponential distribution, respectively. Finally, we discuss four different types of network growth styles and the line extension process, which reveal that the rail network's growth is likely based on the nodes with the largest shortest-path lengths and that network protection should emphasize those nodes with the largest degrees and the highest betweenness values. This research may enhance the networkability of the rail system and better shape the future growth of public rail networks.

  18. A Quantitative Methodology for Vetting Dark Network Intelligence Sources for Social Network Analysis

    Science.gov (United States)

    2012-06-01

    Figure V-7 Source Stress Contributions for the Example (V-24); Figure V-8 ROC Curve for the Example...resilience is the ability of the organization "to avoid disintegration when coming under stress (Milward & Raab, 2006, p. 351)." Despite numerous...members of the network. Examples such as subordinates directed to meetings in place of their superiors, virtual participation via telecommuting

  19. Multi-criteria decision making with linguistic labels: a comparison of two methodologies applied to energy planning

    OpenAIRE

    Afsordegan, Arayeh; Sánchez Soler, Monica; Agell Jané, Núria; Cremades Oliver, Lázaro Vicente; Zahedi, Siamak

    2014-01-01

    This paper compares two multi-criteria decision making (MCDM) approaches based on linguistic label assessment. The first approach consists of a modified fuzzy TOPSIS methodology introduced by Kaya and Kahraman in 2011. The second approach, introduced by Agell et al. in 2012, is based on qualitative reasoning techniques for ranking multi-attribute alternatives in group decision-making with linguistic labels. Both approaches are applied to a case of assessment and selection of the most suita...

  20. Implementing the flipped classroom methodology to the subject "Applied computing" of the chemical engineering degree at the University of Barcelona

    Directory of Open Access Journals (Sweden)

    Montserrat Iborra

    2017-06-01

    Full Text Available This work focuses on the implementation, development, documentation, analysis and assessment of the flipped classroom methodology, by means of a just-in-time teaching strategy, in a pilot group (1 of 6) of the subject “Applied Computing” of the Chemical Engineering Undergraduate Degree at the University of Barcelona. The results show that this technique promotes self-learning, autonomy and time management, as well as an increase in the effectiveness of classroom hours.

  1. Applying Game Theory in 802.11 Wireless Networks

    Directory of Open Access Journals (Sweden)

    Tomas Cuzanauskas

    2015-07-01

    Full Text Available IEEE 802.11 is one of the most popular wireless technologies of recent years. Due to its ease of adoption and relatively low cost, the demand for IEEE 802.11 devices is increasing exponentially. IEEE 802.11 works in two bands, 2.4 GHz and 5 GHz, known as the ISM bands. The unlicensed bands are managed by authorities that set simple rules for their use; the rules include requirements such as maximum power, out-of-band emissions control, and interference mitigation. However, these rules have become outdated: as IEEE 802.11 technology emerges and evolves rapidly, they are no longer well suited to the current capabilities of IEEE 802.11 devices. In this article we present a game-theory-based algorithm for IEEE 802.11 wireless devices and show that by using game theory it is possible to achieve better usage of the unlicensed spectrum, as well as to partially forgo CSMA/CA. Finally, this approach might allow relaxing the currently applied maximum power rules for the ISM bands, enabling IEEE 802.11 to work over longer distances with better propagation characteristics.
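    The abstract does not specify the algorithm, but a standard game-theoretic formulation of spectrum sharing is a channel-selection game solved by iterated best response: each access point repeatedly switches to the channel with the fewest interfering neighbours. This is a potential game, so the dynamics converge to a Nash equilibrium. A hedged sketch under that assumption:

```python
def best_response_channels(neighbors, channels, max_rounds=100):
    """Iterated best-response dynamics for AP channel selection.
    neighbors maps each AP to the APs within interference range."""
    choice = {ap: channels[0] for ap in neighbors}  # all start on one channel
    for _ in range(max_rounds):
        changed = False
        for ap in neighbors:
            cost = {c: sum(1 for n in neighbors[ap] if choice[n] == c)
                    for c in channels}
            best = min(channels, key=lambda c: cost[c])
            if cost[best] < cost[choice[ap]]:
                choice[ap] = best
                changed = True
        if not changed:       # no AP wants to deviate: Nash equilibrium
            break
    conflicts = sum(1 for ap in neighbors for n in neighbors[ap]
                    if choice[n] == choice[ap]) // 2
    return choice, conflicts

# Three mutually interfering APs, three non-overlapping 2.4 GHz channels:
graph = {"ap1": ["ap2", "ap3"], "ap2": ["ap1", "ap3"], "ap3": ["ap1", "ap2"]}
assignment, residual = best_response_channels(graph, [1, 6, 11])
```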

  2. Development of a cost efficient methodology to perform allocation of flammable and toxic gas detectors applying CFD tools

    Energy Technology Data Exchange (ETDEWEB)

    Storch, Rafael Brod; Rocha, Gean Felipe Almeida [Det Norske Veritas (DNV), Rio de Janeiro, RJ (Brazil); Nalvarte, Gladys Augusta Zevallos [Det Norske Veritas (DNV), Novik (Norway)

    2012-07-01

    This paper presents a computational procedure for flammable and toxic gas detector allocation and quantification developed by DNV. The proposed methodology applies Computational Fluid Dynamics (CFD) simulations as well as operational and safety characteristics of the analyzed region to assess the optimal number of toxic and flammable gas detectors and their optimal locations. A probabilistic approach is also used when assessing the flammable gas detectors, applying the DNV software ThorEXPRESSLite, following NORSOK Z013 Annex G and as presented in HUSER et al. 2000 and HUSER et al. 2001. A DNV-developed program, DetLoc, runs the above procedure iteratively, leading to an automatic calculation of the location and number of gas detectors. The main advantage of this methodology is its independence from human interaction, yielding an allocation that is more precise and free of human judgment. Thus, a reproducible allocation is generated when comparing different analyses, and consistent global criteria are guaranteed across different regions in the same project. A case study applying the proposed methodology is presented. (author)
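    DetLoc's internals are not described in the abstract, but automatic detector allocation over CFD leak scenarios is often cast as a set-cover problem: pick the fewest positions whose detection sets cover all simulated scenarios. A hedged sketch using a greedy heuristic over hypothetical coverage data (not DNV's algorithm):

```python
def greedy_placement(coverage, scenarios):
    """Greedy set cover: repeatedly pick the candidate detector position
    covering the most still-undetected leak scenarios."""
    uncovered = set(scenarios)
    chosen = []
    while uncovered:
        # sorted() makes the tie-break deterministic
        pos = max(sorted(coverage), key=lambda p: len(coverage[p] & uncovered))
        gained = coverage[pos] & uncovered
        if not gained:
            raise ValueError("remaining scenarios cannot be covered")
        chosen.append(pos)
        uncovered -= gained
    return chosen

# Hypothetical CFD output: which simulated leak scenarios each candidate
# detector position would detect (positions and scenarios are illustrative):
cfd_coverage = {
    "p1": {"s1", "s2", "s3"},
    "p2": {"s3", "s4"},
    "p3": {"s4", "s5"},
    "p4": {"s5"},
}
positions = greedy_placement(cfd_coverage, ["s1", "s2", "s3", "s4", "s5"])
```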

  3. Multimedia contaminant environmental exposure assessment methodology as applied to Los Alamos, New Mexico

    International Nuclear Information System (INIS)

    Whelan, G.; Thompson, F.L.; Yabusaki, S.B.

    1983-02-01

    The MCEA (Multimedia Contaminant Environmental Exposure Assessment) methodology assesses exposures to air, water, soil, and plants from contaminants released into the environment by simulating dominant mechanisms of contaminant migration and fate. The methodology encompasses five different pathways (i.e., atmospheric, terrestrial, overland, subsurface, and surface water) and combines them into a highly flexible tool. The flexibility of the MCEA methodology is demonstrated by encompassing two of the pathways (i.e., overland and surface water) into an effective tool for simulating the migration and fate of radionuclides released into the Los Alamos, New Mexico region. The study revealed that: (a) the 239 Pu inventory in lower Los Alamos Canyon increased by approximately 1.1 times for the 50-y flood event; (b) the average contaminant 239 Pu concentrations (i.e., weighted according to the depth of the respective bed layer) in lower Los Alamos Canyon for the 50-y flood event decreased by 5.4%; (c) approx. 27% of the total 239 Pu contamination resuspended from the entire bed (based on the assumed cross sections) for the 50-y flood event originated from lower Pueblo Canyon; (d) an increase in the 239 Pu contamination of the bed followed the general deposition patterns experienced by the sediment in Pueblo-lower Los Alamos Canyon; likewise, a decrease in the 239 Pu contamination of the bed followed general sediment resuspension patterns in the canyon; (e) 55% of the 239 Pu reaching the San Ildefonso Pueblo in lower Los Alamos Canyon originated from lower Los Alamos Canyon; and (f) 56% of the 239 Pu contamination reaching the San Ildefonso Pueblo in lower Los Alamos Canyon was carried through towards the Rio Grande. 47 references, 41 figures, 29 tables

  4. Integrated management of operations in Santos Basin: methodology applied to a new philosophy of operations

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Leandro Leonardo; Lima, Claudio Benevenuto de Campos [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil); Derenzi Neto, Dario [Accenture, Rio de Janeiro, RJ (Brazil); Pinto, Vladimir Steffen [Soda IT, Rio de Janeiro, RJ (Brazil); Lima, Gilson Brito Alves [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil)

    2012-07-01

    The objective of this paper is to present the methodology used to develop the Integrated Management of Operations (GIOp) project in the Santos Basin Operational Unit (UO-BS) in the South-Southeast Exploration and Production area of PETROBRAS. The following text describes how the activities were carried out to gather improvement opportunities and to design the To-Be processes, considering the challenging environment of the Santos Basin in the coming years. At the end of more than 12 months of work, more than 50 processes and sub-processes had been redesigned, involving a multidisciplinary team in the areas of operations, maintenance, safety, health and environment, flow assurance, wells, reservoirs and planning. (author)

  5. A calculation methodology applied for fuel management in PWR type reactors using first order perturbation theory

    International Nuclear Information System (INIS)

    Rossini, M.R.

    1992-01-01

    An attempt has been made to obtain a strategy coherent with the available instruments that could be implemented with future developments. A calculation methodology was developed for fuel reload in PWR reactors, which involves cell calculation with the HAMMER-TECHNION code and neutronics calculation with the CITATION code. The management strategy adopted consists of changing fuel element positions at the beginning of each reactor cycle in order to decrease the radial peak factor. Two-dimensional, two-group first-order perturbation theory was used for the mathematical modeling. (L.C.J.A.)
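    First-order perturbation theory estimates the change in an eigenvalue caused by a change in the operator without re-solving the eigenproblem: d_lambda = (v . dA u) / (v . u), with u the direct and v the adjoint eigenvector. A minimal numerical sketch of this generic formula on a toy 2x2 operator (not the reactor model):

```python
def first_order_eigenvalue_shift(u, v, dA):
    """First-order perturbation estimate of an eigenvalue shift:
    d_lambda = (v . dA u) / (v . u)."""
    n = len(u)
    dAu = [sum(dA[i][j] * u[j] for j in range(n)) for i in range(n)]
    num = sum(v[i] * dAu[i] for i in range(n))
    den = sum(v[i] * u[i] for i in range(n))
    return num / den

# Toy two-group example: a diagonal operator with eigenpair
# (lambda = 2, u = [1, 0]); for a symmetric operator the adjoint
# eigenvector equals the direct one. The perturbation only touches
# the (0, 0) entry, so here the first-order estimate is exact.
u = v = [1.0, 0.0]
dA = [[0.1, 0.0], [0.0, 0.0]]
shift = first_order_eigenvalue_shift(u, v, dA)
```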

  6. Methodology applied by IRSN for nuclear accident cost estimations in France

    International Nuclear Information System (INIS)

    2013-01-01

    This report describes the methodology used by IRSN to estimate the cost of potential nuclear accidents in France. It concerns possible accidents involving pressurized water reactors leading to radioactive releases into the environment. These accidents have been grouped into two families, called severe accidents and major accidents, and two model scenarios have been selected to represent each family. The report discusses the general methodology of nuclear accident cost estimation. The crucial point is that all costs should be considered: if not, the cost is underestimated, which can have negative consequences for the value attributed to safety and for crisis preparation. As a result, the overall cost comprises many components: the best known is offsite radiological cost, but there are many others. The proposed estimates have thus required a diversity of methods, which are described in this report. Figures are presented at the end of the report. Among other things, they show that purely radiological costs represent only a non-dominant part of the foreseeable economic consequences.

  7. Quantum Dots Applied to Methodology on Detection of Pesticide and Veterinary Drug Residues.

    Science.gov (United States)

    Zhou, Jia-Wei; Zou, Xue-Mei; Song, Shang-Hong; Chen, Guan-Hua

    2018-02-14

    The pesticide and veterinary drug residues brought by large-scale agricultural production have become one of the key issues in the fields of food safety and environmental ecological security. It is necessary to develop rapid, sensitive, qualitative and quantitative methodologies for the detection of pesticide and veterinary drug residues. As one of the achievements of nanoscience, quantum dots (QDs) have been widely used in the detection of pesticide and veterinary drug residues. In these methodological studies, the QD signal types used include fluorescence, chemiluminescence, electrochemical luminescence, photoelectrochemistry, etc. QDs can also be assembled into sensors with different materials, such as QD-enzyme, QD-antibody, QD-aptamer, and QD-molecularly imprinted polymer sensors. Plenty of achievements in the detection of pesticide and veterinary drug residues have been obtained from different combinations of these signals and sensors. They are summarized in this paper to provide a reference for the application of QDs in the detection of pesticide and veterinary drug residues.

  8. Applying rigorous decision analysis methodology to optimization of a tertiary recovery project

    International Nuclear Information System (INIS)

    Wackowski, R.K.; Stevens, C.E.; Masoner, L.O.; Attanucci, V.; Larson, J.L.; Aslesen, K.S.

    1992-01-01

    This paper reports that the intent of this study was to rigorously examine all of the possible expansion, investment, operational, and CO 2 purchase/recompression scenarios (over 2500) to yield a strategy that would maximize the net present value of the CO 2 project at the Rangely Weber Sand Unit. Traditional methods of project management, which involve analyzing large numbers of single-case economic evaluations, were found to be too cumbersome and inaccurate for an analysis of this scope. The decision analysis methodology utilized a statistical approach which resulted in a range of economic outcomes. Advantages of the decision analysis methodology included: a more organized approach to the classification of decisions and uncertainties; a clear sensitivity method to identify the key uncertainties; an application of probabilistic analysis through the decision tree; and a comprehensive display of the range of possible outcomes for communication to decision makers. This range made it possible to consider the upside and downside potential of the options and to weigh these against the Unit's strategies. Savings in the time and manpower required to complete the study were also realized.
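    The decision-tree rollback at the core of such an analysis is simple: chance nodes take probability-weighted averages, decision nodes take the maximum over alternatives. A minimal sketch with illustrative payoffs (not the Rangely figures):

```python
def rollback(node):
    """Expected-monetary-value rollback of a decision tree.
    node is ("decision", [children]) -> max of child values,
    ("chance", [(prob, child), ...]) -> probability-weighted sum,
    or a terminal payoff (a number)."""
    if isinstance(node, (int, float)):
        return node
    kind, branches = node
    if kind == "decision":
        return max(rollback(child) for child in branches)
    if kind == "chance":
        return sum(p * rollback(child) for p, child in branches)
    raise ValueError(kind)

# Hypothetical choice between expanding injection or keeping the status
# quo (payoffs in $MM are illustrative only):
tree = ("decision", [
    ("chance", [(0.6, 120.0), (0.4, -30.0)]),  # expand: risky upside
    ("chance", [(1.0, 45.0)]),                 # status quo: certain value
])
emv = rollback(tree)
```

Expanding has expected value 0.6*120 - 0.4*30 = 60, so the rollback prefers it over the certain 45.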

  9. Project-based learning methodology in the area of microbiology applied to undergraduate medical research.

    Science.gov (United States)

    Mateo, Estibaliz; Sevillano, Elena

    2018-07-01

    In recent years, there has been a decrease in the number of medical professionals dedicated to a research career. There is evidence that students with research experience during their training acquire knowledge and skills that increase the probability of getting involved in research more successfully. In the Degree of Medicine (University of the Basque Country), the annual core subject 'Research Project' introduces students to research. The aim of this work was to implement a project-based learning methodology, with the students working on microbiology, and to analyse its results over time. Given an initial scenario, the students had to come up with a research idea related to medical microbiology and to carry out a research project, including writing a funding proposal, developing the experimental assays, and analysing and presenting their results at a congress organized by the University. Summative assessment was performed by both students and teachers. A satisfaction survey was carried out to gather the students' opinions. The overall results regarding classroom dynamics, learning outcomes and motivation after the implementation were favourable. Students reported a greater interest in research than they had before. They would choose the project-based methodology over the traditional one.

  10. Residency Training: Quality improvement projects in neurology residency and fellowship: applying DMAIC methodology.

    Science.gov (United States)

    Kassardjian, Charles D; Williamson, Michelle L; van Buskirk, Dorothy J; Ernste, Floranne C; Hunderfund, Andrea N Leep

    2015-07-14

    Teaching quality improvement (QI) is a priority for residency and fellowship training programs. However, many medical trainees have had little exposure to QI methods. The purpose of this study is to review a rigorous and simple QI methodology (define, measure, analyze, improve, and control [DMAIC]) and demonstrate its use in a fellow-driven QI project aimed at reducing the number of delayed and canceled muscle biopsies at our institution. DMAIC was utilized. The project aim was to reduce the number of delayed muscle biopsies to 10% or less within 24 months. Baseline data were collected for 12 months. These data were analyzed to identify root causes for muscle biopsy delays and cancellations. Interventions were developed to address the most common root causes. Performance was then remeasured for 9 months. Baseline data were collected on 97 of 120 muscle biopsies during 2013. Twenty biopsies (20.6%) were delayed. The most common causes were scheduling too many tests on the same day and lack of fasting. Interventions aimed at patient education and biopsy scheduling were implemented. The effect was to reduce the number of delayed biopsies to 6.6% (6/91) over the next 9 months. Familiarity with QI methodologies such as DMAIC is helpful to ensure valid results and conclusions. Utilizing DMAIC, we were able to implement simple changes and significantly reduce the number of delayed muscle biopsies at our institution. © 2015 American Academy of Neurology.

  11. Auditory Hallucinations and the Brain’s Resting-State Networks: Findings and Methodological Observations

    Science.gov (United States)

    Alderson-Day, Ben; Diederen, Kelly; Fernyhough, Charles; Ford, Judith M.; Horga, Guillermo; Margulies, Daniel S.; McCarthy-Jones, Simon; Northoff, Georg; Shine, James M.; Turner, Jessica; van de Ven, Vincent; van Lutterveld, Remko; Waters, Flavie; Jardri, Renaud

    2016-01-01

    In recent years, there has been increasing interest in the potential for alterations to the brain’s resting-state networks (RSNs) to explain various kinds of psychopathology. RSNs provide an intriguing new explanatory framework for hallucinations, which can occur in different modalities and population groups, but which remain poorly understood. This collaboration from the International Consortium on Hallucination Research (ICHR) reports on the evidence linking resting-state alterations to auditory hallucinations (AH) and provides a critical appraisal of the methodological approaches used in this area. In the report, we describe findings from resting connectivity fMRI in AH (in schizophrenia and nonclinical individuals) and compare them with findings from neurophysiological research, structural MRI, and research on visual hallucinations (VH). In AH, various studies show resting connectivity differences in left-hemisphere auditory and language regions, as well as atypical interaction of the default mode network and RSNs linked to cognitive control and salience. As the latter are also evident in studies of VH, this points to a domain-general mechanism for hallucinations alongside modality-specific changes to RSNs in different sensory regions. However, we also observed high methodological heterogeneity in the current literature, affecting the ability to make clear comparisons between studies. To address this, we provide some methodological recommendations and options for future research on the resting state and hallucinations. PMID:27280452

  12. Retail optimization in Romanian metallurgical industry by applying of fuzzy networks concept

    Directory of Open Access Journals (Sweden)

    Ioana Adrian

    2017-01-01

    Full Text Available Our article presents possibilities for applying the Fuzzy Networks concept to make the metallurgical industry in Romania more efficient. We also present and analyze concepts complementary to Fuzzy Networks, such as Expert Systems (ES), Enterprise Resource Planning (ERP), and Analytics and Intelligent Strategies (SAI). The main results of our article are based on a case study of the possibilities of applying these concepts in metallurgy through Fuzzy Networks. A case study on the application of the fuzzy concept to the Romanian metallurgical industry is also presented.

  13. The Rock Engineering System (RES) applied to landslide susceptibility zonation of the northeastern flank of Etna: methodological approach and results

    Science.gov (United States)

    Apuani, Tiziana; Corazzato, Claudia

    2015-04-01

    instability-related numerical ratings are assigned to classes. An instability index map is then produced by assigning, to each areal elementary cell (in our case a 10 m pixel), the sum of the products of each weight factor by the normalized parameter rating coming from each input zonation map. This map is then classified into landslide susceptibility classes (expressed as a percentage), making it possible to discriminate areas prone to instability. Overall, the study area is characterized by a low propensity to slope instability. Only a few areas have an instability index of more than 45% of the theoretical maximum imposed by the matrix. These are located on the few steep slopes associated with active faults, and depend strongly on seismic activity. Some other areas correspond to limited outcrops characterized by significantly reduced lithotechnical properties (low shear strength). The susceptibility map produced combines the application of the RES with parameter zonation, following a methodology that had never before been applied in active volcanic environments. The comparison of the results with the ground deformation evidence coming from monitoring networks supports the validity of the approach.
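
    The instability index computation described above can be sketched as a per-cell weighted sum: for each parameter map, multiply its weight factor by the cell's normalized rating, sum over parameters, and express the result as a percentage of the theoretical maximum. The parameter names, weights and ratings below are hypothetical, not those of the Etna study.

```python
# Minimal sketch of a per-pixel RES-style instability index: sum of
# weight x normalized rating, as a percentage of the theoretical maximum.
# Parameter names, weights and ratings are hypothetical.

def instability_index(weights, ratings):
    """weights[k]: RES weight for parameter k; ratings[k]: rating in [0, 1]."""
    assert weights.keys() == ratings.keys()
    raw = sum(weights[k] * ratings[k] for k in weights)
    max_raw = sum(weights.values())  # theoretical maximum (all ratings = 1)
    return 100.0 * raw / max_raw

weights = {"slope": 3.0, "lithology": 2.0, "seismicity": 2.0, "faults": 1.0}
ratings = {"slope": 0.8, "lithology": 0.3, "seismicity": 0.6, "faults": 0.2}
print(round(instability_index(weights, ratings), 1))
```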

  14. Hazard interactions and interaction networks (cascades) within multi-hazard methodologies

    Science.gov (United States)

    Gill, Joel C.; Malamud, Bruce D.

    2016-08-01

    This paper combines research and commentary to reinforce the importance of integrating hazard interactions and interaction networks (cascades) into multi-hazard methodologies. We present a synthesis of the differences between multi-layer single-hazard approaches and multi-hazard approaches that integrate such interactions. This synthesis suggests that ignoring interactions between important environmental and anthropogenic processes could distort management priorities, increase vulnerability to other spatially relevant hazards or underestimate disaster risk. In this paper we proceed to present an enhanced multi-hazard framework through the following steps: (i) description and definition of three groups (natural hazards, anthropogenic processes and technological hazards/disasters) as relevant components of a multi-hazard environment, (ii) outlining of three types of interaction relationship (triggering, increased probability, and catalysis/impedance), and (iii) assessment of the importance of networks of interactions (cascades) through case study examples (based on the literature, field observations and semi-structured interviews). We further propose two visualisation frameworks to represent these networks of interactions: hazard interaction matrices and hazard/process flow diagrams. Our approach reinforces the importance of integrating interactions between different aspects of the Earth system, together with human activity, into enhanced multi-hazard methodologies. Multi-hazard approaches support the holistic assessment of hazard potential and consequently disaster risk. We conclude by describing three ways by which understanding networks of interactions contributes to the theoretical and practical understanding of hazards, disaster risk reduction and Earth system management. 
Understanding interactions and interaction networks helps us to better (i) model the observed reality of disaster events, (ii) constrain potential changes in physical and social vulnerability

  15. A modified GO-FLOW methodology with common cause failure based on Discrete Time Bayesian Network

    International Nuclear Information System (INIS)

    Fan, Dongming; Wang, Zili; Liu, Linlin; Ren, Yi

    2016-01-01

    Highlights: • Identification of particular causes of failure for common cause failure analysis. • Comparison of two formalisms (GO-FLOW and Discrete Time Bayesian Network) and establishment of the correlation between them. • Mapping of the GO-FLOW model into a Bayesian network model. • Calculation of GO-FLOW models with common cause failures based on DTBN. - Abstract: The GO-FLOW methodology is a success-oriented system reliability modelling technique for multi-phase missions involving complex time-dependent, multi-state and common cause failure (CCF) features. However, the analysis algorithm cannot easily handle multiple shared signals and CCFs. In addition, the simulative algorithm is time-consuming when vast numbers of multi-state components exist in the model, and the multiple time points of phased mission problems increase the difficulty of the analysis method. In this paper, the Discrete Time Bayesian Network (DTBN) and the GO-FLOW methodology are integrated by unified mapping rules. Based on these rules, the GO-FLOW operators can be mapped into the DTBN; a complete GO-FLOW model with complex characteristics (e.g. phased mission, multi-state, and CCF) can then be converted to the isomorphic DTBN and easily analyzed by utilizing the DTBN. With mature algorithms and tools, the multi-phase mission reliability parameter can be efficiently obtained via the proposed approach without special handling of the shared signals and the various complex logic operations. Meanwhile, CCFs can also be accounted for in the computing process.

  16. The Pediatric Emergency Care Applied Research Network: a history of multicenter collaboration in the United States.

    Science.gov (United States)

    Tzimenatos, Leah; Kim, Emily; Kuppermann, Nathan

    2015-01-01

    In this article, we review the history and progress of a large multicenter research network pertaining to emergency medical services for children. We describe the history, organization, infrastructure, and research agenda of the Pediatric Emergency Care Applied Research Network and highlight some of its important accomplishments since its inception. We also describe the network's strategy to grow its research portfolio, train new investigators, and study how to translate new evidence into practice. This strategy ensures not only the sustainability of the network in the future but also the growth of research in emergency medical services for children in general.

  17. Environmental risk assessment of water quality in harbor areas: a new methodology applied to European ports.

    Science.gov (United States)

    Gómez, Aina G; Ondiviela, Bárbara; Puente, Araceli; Juanes, José A

    2015-05-15

    This work presents a standard and unified procedure for assessment of environmental risks at the contaminant source level in port aquatic systems. Using this method, port managers and local authorities will be able to hierarchically classify environmental hazards and proceed with the most suitable management actions. This procedure combines rigorously selected parameters and indicators to estimate the environmental risk of each contaminant source based on its probability, consequences and vulnerability. The spatio-temporal variability of multiple stressors (agents) and receptors (endpoints) is taken into account to provide accurate estimations for application of precisely defined measures. The developed methodology is tested on a wide range of different scenarios via application in six European ports. The validation process confirms its usefulness, versatility and adaptability as a management tool for port water quality in Europe and worldwide. Copyright © 2015 Elsevier Ltd. All rights reserved.
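
    The source-level risk estimation described above can be sketched as a scoring and ranking exercise: each contaminant source is scored by combining probability, consequence, and vulnerability indicators, and sources are then ranked hierarchically for management action. The multiplicative combination, source names and numbers below are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of source-level environmental risk ranking: combine
# probability, consequence and vulnerability per contaminant source,
# then rank. Source names, values and the multiplicative form are
# illustrative assumptions.

def risk_score(probability, consequence, vulnerability):
    """All three indicators assumed normalized to [0, 1]."""
    return probability * consequence * vulnerability

sources = {
    "shipyard_runoff":    (0.8, 0.6, 0.7),
    "fuel_bunkering":     (0.3, 0.9, 0.5),
    "stormwater_outfall": (0.6, 0.4, 0.4),
}

# Hierarchical classification: highest-risk source first.
ranking = sorted(sources, key=lambda s: risk_score(*sources[s]), reverse=True)
print(ranking)
```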

  18. The HAZOP methodology applied to the study of the quality and the productivity

    International Nuclear Information System (INIS)

    Angel G, J.C.

    1996-01-01

    This article describes an adaptation of the HAZOP method, used in risk management, to the study and solution of problems related to the quality and productivity of raw materials, processes, products and services. The methodology described is based on defining 'intentions', or objectives, for each part of the process, sub-process, product or service, with the purpose of finding 'deviations', or quality and productivity problems, through the use of 'guide words'. It proposes that each deviation be analyzed to determine its causes and consequences, so that the pertinent corrective actions can be defined. The work of interdisciplinary groups is put forward as an unavoidable requirement, as is the will of their members to do things better every day

  19. Deterministic sensitivity and uncertainty methodology for best estimate system codes applied in nuclear technology

    International Nuclear Information System (INIS)

    Petruzzi, A.; D'Auria, F.; Cacuci, D.G.

    2009-01-01

    Nuclear Power Plant (NPP) technology has been developed based on the traditional defense-in-depth philosophy supported by deterministic and overly conservative methods for safety analysis. In the 1970s [1], conservative hypotheses were introduced for safety analyses to address existing uncertainties. Since then, intensive thermal-hydraulic experimental research has resulted in a considerable increase in knowledge and consequently in the development of best-estimate codes able to provide more realistic information about the physical behaviour and to identify the most relevant safety issues, allowing the evaluation of the actual margins existing between the results of the calculations and the acceptance criteria. However, the best-estimate calculation results from complex thermal-hydraulic system codes (like Relap5, Cathare, Athlet, Trace, etc.) are affected by unavoidable approximations that are unpredictable without the use of computational tools that account for the various sources of uncertainty. Therefore the use of best-estimate (BE) codes within reactor technology, either for design or safety purposes, implies understanding and accepting the limitations and deficiencies of those codes. Taking into consideration the above framework, a comprehensive approach for utilizing quantified uncertainties arising from Integral Test Facilities (ITFs, [2]) and Separate Effect Test Facilities (SETFs, [3]) in the process of calibrating complex computer models for application to NPP transient scenarios has been developed. The methodology proposed is capable of accommodating multiple SETFs and ITFs to learn as much as possible about the uncertain parameters, allowing for the improvement of the computer model predictions based on the available experimental evidence. The proposed methodology constitutes a major step forward with respect to the generally used expert judgment and statistical methods as it permits a) to establish the uncertainties of any parameter

  20. Applying a social network analysis (SNA) approach to understanding radiologists' performance in reading mammograms

    Science.gov (United States)

    Tavakoli Taba, Seyedamir; Hossain, Liaquat; Heard, Robert; Brennan, Patrick; Lee, Warwick; Lewis, Sarah

    2017-03-01

    Rationale and objectives: Observer performance has been widely studied by examining the characteristics of individuals. From a systems perspective, however, understanding the system's output also requires studying the interactions between observers. This research describes a mixed-methods approach that applies social network analysis (SNA), together with the more traditional approach of examining personal/individual characteristics, to understanding observer performance in mammography. Materials and Methods: Using social network theories and measures to understand observer performance, we designed a social networks survey instrument for collecting personal and network data about observers involved in mammography performance studies. We present the results of a study by our group in which 31 Australian breast radiologists reviewed 60 mammographic cases (comprising 20 abnormal and 40 normal cases) and then completed an online questionnaire about their social networks and personal characteristics. A jackknife free-response receiver operating characteristic (JAFROC) method was used to measure the performance of the radiologists. JAFROC was tested against various personal and network measures to verify the theoretical model. Results: The results from this study suggest a strong association between social networks and observer performance for Australian radiologists. Network factors accounted for 48% of the variance in observer performance, compared with 15.5% for personal characteristics in this study group. Conclusion: This study suggests a strong new direction for research into improving observer performance. Future studies of observer performance should consider the influence of social networks as part of their research paradigm, with equal or greater vigour than the traditional constructs of personal characteristics.

  1. How to assess solid waste management in armed conflicts? A new methodology applied to the Gaza Strip, Palestine.

    Science.gov (United States)

    Caniato, Marco; Vaccari, Mentore

    2014-09-01

    We have developed a new methodology for assessing solid waste management in situations of armed conflict. The methodology is composed of six phases with specific activities, suggested methods and tools. The collection, haulage and disposal of waste in low- and middle-income countries is such a complicated and expensive task for municipalities, owing to the several challenges involved, that some waste is left in illegal dumps. Armed conflicts bring further constraints, such as instability, sudden increases in violence, and difficulty in supplying equipment and spare parts: planning is very difficult and several projects aimed at improving the situation have failed. The methodology was validated in the Gaza Strip, where the geopolitical situation heavily affects natural resources. We collected information in a holistic way, crosschecked it, and discussed it with local experts, practitioners, and authorities. We estimated that in 2011 only 1300 tonnes day(-1) were transported to the three disposal sites, out of a production exceeding 1700. Recycling was very limited, while the composting capacity was 3.5 tonnes day(-1), but increasing. We carefully assessed the system elements and their interactions, identified the challenges, and developed possible solutions to increase system effectiveness and robustness. The case study demonstrated that our methodology is flexible and adaptable to the context, and thus could be applied in other areas to improve the humanitarian response in similar situations. © The Author(s) 2014.

  2. Energy consumption control automation using Artificial Neural Networks and adaptive algorithms: Proposal of a new methodology and case study

    International Nuclear Information System (INIS)

    Benedetti, Miriam; Cesarotti, Vittorio; Introna, Vito; Serranti, Jacopo

    2016-01-01

    Highlights: • A methodology to enable energy consumption control automation is proposed. • The methodology is based on the use of Artificial Neural Networks. • A method to control the accuracy of the model over time is proposed. • Two methods to enable automatic retraining of the network are proposed. • Retraining methods are evaluated on their accuracy over time. - Abstract: Energy consumption control in energy-intensive companies is increasingly considered a critical activity for continuously improving energy performance. It undoubtedly requires a huge effort in data gathering and analysis, and the amount of data, together with the scarcity of human resources devoted to Energy Management activities who could maintain and update the analyses' output, is often the main barrier to its diffusion in companies. Advanced tools such as software based on machine learning techniques are therefore the key to overcoming these barriers and allowing easy but accurate control. Systems of this type are able to solve complex problems and obtain reliable results over time, but not to understand when the reliability of those results is declining (a common situation for energy-using systems, which often undergo structural changes) or to automatically adapt themselves using a limited amount of training data, so that a completely automatic application is not yet available and automatic energy consumption control using intelligent systems is still a challenge. This paper presents a whole new approach to energy consumption control, proposing a methodology based on Artificial Neural Networks (ANNs) and aimed at creating an automatic energy consumption control system. First of all, three different structures of neural networks are proposed and trained using a huge amount of data. Three different performance indicators are then used to identify the most suitable structure, which is implemented to create an energy consumption control tool. In addition, considering that
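
    The control loop described above (predict consumption, monitor model accuracy over time, retrain automatically when reliability declines) can be sketched as follows. A trivial linear model stands in for the trained ANN, and the window size, threshold and data are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of automated consumption control with retraining: a
# stand-in linear model (in place of the trained ANN) predicts
# consumption from a driver variable; when the relative prediction
# error exceeds a threshold, the model is retrained on recent data.
# Window, threshold and data are illustrative assumptions.

from collections import deque

def fit(xs, ys):
    """Least-squares slope/intercept as a stand-in for ANN training."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def monitor(stream, window=5, threshold=0.15):
    """Yield (prediction, retrained) for each (driver, actual) pair."""
    history = deque(maxlen=window)
    model = (1.0, 0.0)  # assumed initial model
    for x, y in stream:
        pred = model[0] * x + model[1]
        history.append((x, y))
        rel_err = abs(pred - y) / max(abs(y), 1e-9)
        retrained = False
        if rel_err > threshold and len(history) >= 2:
            model = fit(*zip(*history))  # retrain on the recent window
            retrained = True
        yield pred, retrained
```

In this sketch a structural change in the monitored system (consumption suddenly doubling per unit of the driver) pushes the rolling error over the threshold and triggers retraining on the recent window.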

  3. SCIENTIFIC METHODOLOGY FOR THE APPLIED SOCIAL SCIENCES: CRITICAL ANALYSES ABOUT RESEARCH METHODS, TYPOLOGIES AND CONTRIBUTIONS FROM MARX, WEBER AND DURKHEIM

    Directory of Open Access Journals (Sweden)

    Mauricio Corrêa da Silva

    2015-06-01

    Full Text Available This study aims to discuss the importance of the scientific method for conducting and disseminating research in the applied social sciences and the research typologies, as well as to highlight the contributions of Marx, Weber and Durkheim to scientific methodology. To reach this objective, we conducted a review of the literature on the term research, the scientific method, the research techniques and the scientific methodologies. The results of the investigation revealed that it is fundamental that the academic investigator use a scientific method to conduct and disseminate his/her academic work in the applied social sciences, as in the biochemical or computer sciences and in the indicated literature. Regarding contributions to scientific methodology, Marx offered the dialectic, a striking, explicative analysis of social phenomena, and the need to understand phenomena as historical and concrete totalities; Weber, the distinction between "facts" and "value judgments" to provide objectivity to the social sciences; and Durkheim, the need to conceptualize the object of study very well, to reject sense data, and to be imbued with the spirit of discovery and of being surprised by the results.

  4. Service Innovation Methodologies II : How can new product development methodologies be applied to service innovation and new service development? : Report no 2 from the TIPVIS-project

    OpenAIRE

    Nysveen, Herbjørn; Pedersen, Per E.; Aas, Tor Helge

    2007-01-01

    This report presents various methodologies used in new product development and product innovation and discusses the relevance of these methodologies for service development and service innovation. The service innovation relevance for all of the methodologies presented is evaluated along several service specific dimensions, like intangibility, inseparability, heterogeneity, perishability, information intensity, and co-creation. The methodologies discussed are mainly collect...

  5. A replication and methodological critique of the study "Evaluating drug trafficking on the Tor Network"

    DEFF Research Database (Denmark)

    Munksgaard, Rasmus; Demant, Jakob Johan; Branwen, Gwern

    2016-01-01

    The development of cryptomarkets has gained increasing attention from academics, including growing scientific literature on the distribution of illegal goods using cryptomarkets. Dolliver's 2015 article “Evaluating drug trafficking on the Tor Network: Silk Road 2, the Sequel” addresses this theme...... by evaluating drug trafficking on one of the most well-known cryptomarkets, Silk Road 2.0. The research on cryptomarkets in general—particularly in Dolliver's article—poses a number of new questions for methodologies. This commentary is structured around a replication of Dolliver's original study...

  6. Applied nursing informatics research - state-of-the-art methodologies using electronic health record data.

    Science.gov (United States)

    Park, Jung In; Pruinelli, Lisiane; Westra, Bonnie L; Delaney, Connie W

    2014-01-01

    With the pervasive implementation of electronic health records (EHRs), new opportunities arise for nursing research through the use of EHR data. Increasingly, comparative effectiveness research within and across health systems is conducted to identify the impact of nursing on improving health and health care and lowering the costs of care. Use of EHR data for this type of research requires nationally and internationally recognized nursing terminologies to normalize the data. Research methods are evolving as large data sets become available through EHRs. Little is known about the types of research and analytic methods applied to nursing research using EHR data normalized with nursing terminologies. The purpose of this paper is to report on a subset of a systematic review of peer-reviewed studies related to applied nursing informatics research involving EHR data using standardized nursing terminologies.

  7. The case for applying tissue engineering methodologies to instruct human organoid morphogenesis.

    Science.gov (United States)

    Marti-Figueroa, Carlos R; Ashton, Randolph S

    2017-05-01

    Three-dimensional organoids derived from human pluripotent stem cell (hPSC) derivatives have become widely used in vitro models for studying development and disease. Their ability to recapitulate facets of normal human development during in vitro morphogenesis produces tissue structures with unprecedented biomimicry. Current organoid derivation protocols rely primarily on spontaneous morphogenesis processes occurring within 3-D spherical cell aggregates with minimal to no exogenous control. This yields organoids containing microscale regions of biomimetic tissues, but at the macroscale (i.e. hundreds of microns to millimeters), the organoids' morphology, cytoarchitecture, and cellular composition are non-biomimetic and variable. The current lack of control over in vitro organoid morphogenesis at the microscale induces aberrations at the macroscale, which impedes realization of the technology's potential to reproducibly form anatomically correct human tissue units that could serve as optimal human in vitro models and even transplants. Here, we review tissue engineering methodologies that could be used to develop powerful approaches for instructing multiscale, 3-D human organoid morphogenesis. Such technological mergers are critically needed to harness organoid morphogenesis as a tool for engineering functional human tissues with biomimetic anatomy and physiology. Human PSC-derived 3-D organoids are revolutionizing the biomedical sciences. They enable the study of development and disease within patient-specific genetic backgrounds and unprecedented biomimetic tissue microenvironments. However, their uncontrolled, spontaneous morphogenesis at the microscale yields inconsistencies in macroscale organoid morphology, cytoarchitecture, and cellular composition that limit their standardization and application.
Integration of tissue engineering methods with organoid derivation protocols could allow us to harness their potential by instructing standardized in vitro morphogenesis

  8. A conceptual methodology to design a decision support system to leak detection programs in water networks

    International Nuclear Information System (INIS)

    Di Federico, V.; Bottarelli, M.; Di Federico, I.

    2005-01-01

    The paper outlines a conceptual methodology for developing a decision support system to assist technicians managing water networks in selecting the appropriate leak detection method(s). First, the necessary knowledge about the network is recapitulated: the location and characteristics of its physical components, but also water demand, pipe breaks, and water quality data. Second, the water balance in a typical Italian agency is discussed, suggesting methods and procedures to evaluate and/or estimate each term in the mass balance equation. Then the available methods for leak detection are described in detail, from those useful in the pre-localization phase to those commonly adopted to pinpoint pipe failures and allow rapid repair. Criteria to estimate the costs associated with each of these methods are provided. Finally, the proposed structure of the DSS is described
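
    The water balance step mentioned above can be sketched in the spirit of the standard IWA water balance: the system input volume splits into billed authorized consumption and non-revenue water, from which real losses (leakage) can be estimated once unbilled use and apparent losses are accounted for. The figures below are hypothetical.

```python
# Hedged sketch of a network water balance in the spirit of the IWA
# balance: non-revenue water = system input - billed consumption, and
# real losses (leakage) = non-revenue water - unbilled authorized use
# - apparent losses. All volumes are hypothetical, in the same units.

def water_balance(system_input, billed, unbilled, apparent_losses):
    """Return (non_revenue_water, estimated_real_losses)."""
    nrw = system_input - billed
    real_losses = nrw - unbilled - apparent_losses
    return nrw, real_losses

nrw, leaks = water_balance(10000.0, 7200.0, 300.0, 500.0)
print("non-revenue water:", nrw, "estimated leakage:", leaks)
```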

  9. Methodology to characterize a residential building stock using a bottom-up approach: a case study applied to Belgium

    Directory of Open Access Journals (Sweden)

    Samuel Gendebien

    2014-06-01

    Full Text Available In the last ten years, the development and implementation of measures to mitigate climate change have become of major importance. In Europe, the residential sector accounts for 27% of final energy consumption [1], and therefore contributes significantly to CO2 emissions. Roadmaps towards energy-efficient buildings have been proposed [2]. In such a context, the detailed characterization of residential building stocks in terms of age, type of construction, insulation level, energy vector, and evolution prospects appears to be a useful contribution to assessing the impact of the implementation of energy policies. In this work, a methodology to develop a tree structure characterizing a residential building stock is presented in the frame of a bottom-up approach that aims to model and simulate domestic energy use. The methodology is applied to the Belgian case for the current situation and up to the 2030 horizon. The potential applications of the developed tool are outlined.

  10. A methodology for automation and robotics evaluation applied to the space station telerobotic servicer

    Science.gov (United States)

    Smith, Jeffrey H.; Gyanfi, Max; Volkmer, Kent; Zimmerman, Wayne

    1988-01-01

    The efforts of a recent study aimed at identifying key issues and trade-offs associated with using a Flight Telerobotic Servicer (FTS) to aid in Space Station assembly-phase tasks are described. The use of automation and robotics (A and R) technologies for large space systems would involve substituting automation capabilities for human extravehicular or intravehicular activities (EVA, IVA). A methodology is presented that incorporates assessment of candidate assembly-phase tasks, telerobotic performance capabilities, development costs, and the effect of operational constraints (space transportation system (STS), attached payload, and proximity operations). Changes in the region of cost-effectiveness are examined under a variety of system design assumptions. A discussion of issues is presented with a focus on three roles the FTS might serve: (1) as a research-oriented testbed to learn more about space usage of telerobotics; (2) as a research-based testbed with an experimental demonstration orientation and limited assembly and servicing applications; or (3) as an operational system to augment EVA, aid the construction of the Space Station, and reduce the programmatic (schedule) risk by increasing the flexibility of mission operations.

  11. Applying CSSI Methodology to the Interpretation of the Audit Expectation Gap

    Directory of Open Access Journals (Sweden)

    Bruno José Machado de Almeida

    2016-09-01

    Full Text Available The development of the audit process is underpinned by a set of concepts that are not generally understood by the users of financial information or by the generality of investors. An audit is developed on an accounting platform whose abstract or formal object consists of a set of conventions, principles and standards that can give rise to what is called an accounting gap. An audit is also grounded in the concepts of risk and materiality which, although considered audit anchors, are not perceived or understood by the general public and can thus give rise to an expectation gap. In addition, a credibility gap can occur, given the degree of judgment implicit in any accounting model: accounting principles and standards determine what should or should not be recognized. Equally present are the gap that exists between reasonable assurance and absolute assurance and the gap with respect to auditor performance. Analysing the concept and meaning of these terms is the main objective of this paper. Based on the model of Blumer, the methodology emphasizes symbolic interactionism, suggesting that, in society, every professional space interprets in a unique way the concepts and symbols used in the communication process. Our study suggests that concepts are fundamental tools used in social practice for observing and representing the real world, and for acting and working in it.

  12. Pattern recognition and data mining software based on artificial neural networks applied to proton transfer in aqueous environments

    International Nuclear Information System (INIS)

    Tahat Amani; Marti Jordi; Khwaldeh Ali; Tahat Kaher

    2014-01-01

    In computational physics proton transfer phenomena could be viewed as pattern classification problems based on a set of input features allowing classification of the proton motion into two categories: transfer ‘occurred’ and transfer ‘not occurred’. The goal of this paper is to evaluate the use of artificial neural networks in the classification of proton transfer events, based on the feed-forward back-propagation neural network, used as a classifier to distinguish between the two transfer cases. In this paper, we use a newly developed data mining and pattern recognition tool for automating, controlling, and drawing charts of the output data of an existing Empirical Valence Bond code. The study analyzes the need for pattern recognition in aqueous proton transfer processes and how the learning approach in error back-propagation (multilayer perceptron algorithms) could be satisfactorily employed in the present case. We present a tool for pattern recognition and validate the code on a real physical case study. The results of applying the artificial neural network methodology to pattern sets based upon selected physical properties (e.g., temperature, density) show the ability of the network to learn proton transfer patterns corresponding to properties of the aqueous environments, which in turn proves to be fully compatible with previous proton transfer studies. (condensed matter: structural, mechanical, and thermal properties)
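
    The classifier described above (a feed-forward network trained by error back-propagation to separate "transfer occurred" from "transfer not occurred") can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' code: the two input features, the network size, and the training data are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input features per configuration (e.g. a donor-acceptor
# distance and a proton-sharing coordinate); label 1 = transfer occurred.
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer of sigmoid units, trained by plain error back-propagation.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # output-layer delta (squared error)
    d_h = (d_out @ W2.T) * h * (1 - h)    # back-propagated hidden delta
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
accuracy = (pred == y).mean()
```

    In practice the feature set would come from the Empirical Valence Bond trajectories rather than random draws; only the training loop structure carries over.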

  13. Applying distance-to-target weighing methodology to evaluate the environmental performance of bio-based energy, fuels, and materials

    International Nuclear Information System (INIS)

    Weiss, Martin; Patel, Martin; Heilmeier, Hermann; Bringezu, Stefan

    2007-01-01

    The enhanced use of biomass for the production of energy, fuels, and materials is one of the key strategies towards sustainable production and consumption. Various life cycle assessment (LCA) studies demonstrate the great potential of bio-based products to reduce both the consumption of non-renewable energy resources and greenhouse gas emissions. However, the production of biomass requires agricultural land and is often associated with adverse environmental effects such as eutrophication of surface and ground water. Decision making in favor of or against bio-based and conventional fossil product alternatives therefore often requires weighing of environmental impacts. In this article, we apply distance-to-target weighing methodology to aggregate LCA results obtained in four different environmental impact categories (i.e., non-renewable energy consumption, global warming potential, eutrophication potential, and acidification potential) to one environmental index. We include 45 bio- and fossil-based product pairs in our analysis, which we conduct for Germany. The resulting environmental indices for all product pairs analyzed range from -19.7 to +0.2 with negative values indicating overall environmental benefits of bio-based products. Except for three options of packaging materials made from wheat and cornstarch, all bio-based products (including energy, fuels, and materials) score better than their fossil counterparts. Comparing the median values for the three options of biomass utilization reveals that bio-energy (-1.2) and bio-materials (-1.0) offer significantly higher environmental benefits than bio-fuels (-0.3). The results of this study reflect, however, subjective value judgments due to the weighing methodology applied. Given the uncertainties and controversies associated not only with distance-to-target methodologies in particular but also with weighing approaches in general, the authors strongly recommend using weighing for decision finding only as a
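
    A distance-to-target index of the kind described above weights each normalized category result by the ratio of the current societal impact level to its target level, then sums the weighted terms into one figure (negative meaning the bio-based product scores better, matching the sign convention of the abstract). The sketch below uses invented numbers and a deliberately simplified form of the weighting; `environmental_index` and every figure in it are assumptions for illustration, not values from the study.

```python
# Distance-to-target weighting: each impact category gets a weight equal to
# (current societal level) / (target level); category results are normalized
# by a reference level and the weighted terms are summed into one index.

def environmental_index(deltas, current, target, reference):
    """deltas: bio-based minus fossil result per category (negative = benefit).
    current/target: societal levels defining the distance-to-target weight.
    reference: normalization reference for each category."""
    index = 0.0
    for cat, delta in deltas.items():
        weight = current[cat] / target[cat]       # distance-to-target weight
        index += weight * delta / reference[cat]  # normalize, weight, sum
    return index

# Illustrative figures only (energy in MJ, gwp in kg CO2-eq, etc.).
deltas = {"energy": -40.0, "gwp": -2.5, "eutroph": 0.01, "acid": 0.02}
current = {"energy": 1.0e5, "gwp": 1.0e4, "eutroph": 50.0, "acid": 80.0}
target = {"energy": 8.0e4, "gwp": 7.0e3, "eutroph": 40.0, "acid": 60.0}
reference = {"energy": 1.0e5, "gwp": 1.0e4, "eutroph": 50.0, "acid": 80.0}

idx = environmental_index(deltas, current, target, reference)
```

    With these invented inputs the energy and greenhouse-gas benefits outweigh the eutrophication and acidification penalties, so the index comes out negative, mirroring the pattern the abstract reports for most product pairs.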

  14. VISUALIZATION OF DATA AND RESULTS AS A METHODOLOGICAL BASIS OF APPLIED STATISTICS TEACHING

    Directory of Open Access Journals (Sweden)

    R. R. Nuriakhmetov

    2014-01-01

    Full Text Available Traditional methods of teaching informatics (as computer science) and statistics (as a branch of higher mathematics) in medical schools contradict the requirements of modern applied medicine and medical science. The objective of this research is to reveal the reasons for this discrepancy and ways to eliminate it. A similar discrepancy was revealed earlier by foreign researchers studying the efficiency of school statistics programmes, and the laws they identified turn out to extend to the teaching of statistics in medical schools. Pursuing this aim, tests of educational achievement developed by the author were administered to students of the medical-biological department of the Siberian State Medical University training in the specialities of "biophysics" and "biochemistry". The fundamental problem of statistical education is that the symbols used by these sciences refer to objects that students still have to construct. The ontosemiotic approach to developing the content of the course serves as a substantiation of this conclusion. The article considers approaches to resolving this contradiction, based on the experience of teaching statistics in foreign schools and on the author's own work. In particular, it concludes that the tradition of using professional statistical packages needs to be revised and special educational software introduced. For developing the content of the course, it is proposed to apply the historical approach more widely, as concretely embodied in the principle of guided reinvention.

  15. The spatial prediction of landslide susceptibility applying artificial neural network and logistic regression models: A case study of Inje, Korea

    Science.gov (United States)

    Saro, Lee; Woo, Jeon Seong; Kwan-Young, Oh; Moung-Jin, Lee

    2016-02-01

    The aim of this study is to predict landslide susceptibility through spatial analysis, applying a statistical methodology based on GIS. Logistic regression and artificial neural network models were applied and validated to analyze landslide susceptibility in Inje, Korea. Landslide occurrence areas in the study were identified from interpretations of optical remote sensing data (aerial photographs) followed by field surveys. A spatial database covering forest, geophysical, soil and topographic data was built for the study area using the Geographical Information System (GIS). These factors were analysed using artificial neural network (ANN) and logistic regression models to generate a landslide susceptibility map. The study validates the landslide susceptibility map by comparing it with landslide occurrence areas. The locations of landslide occurrence were divided randomly into a training set (50%) and a test set (50%). The training set was used to build the landslide susceptibility map with the artificial neural network and logistic regression models, and the test set was retained to validate the prediction map. The validation results revealed that the artificial neural network model (with an accuracy of 80.10%) was better at predicting landslides than the logistic regression model (with an accuracy of 77.05%). Of the weights used in the artificial neural network model, `slope' yielded the highest weight value (1.330), and `aspect' yielded the lowest value (1.000). This research applied two statistical analysis methods in a GIS and compared their results. Based on the findings, we were able to derive a more effective method for analyzing landslide susceptibility.
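
    The validation design used here (a random 50/50 split, then comparing a logistic regression model with a small neural network on the held-out half) can be illustrated with scikit-learn. This is a sketch on synthetic data, not the study's landslide database: the four "factor" columns and the label rule are invented stand-ins for slope, aspect, soil and forest inputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(1000, 4))   # stand-ins for slope, aspect, soil, forest
# Invented label rule: landslide mostly driven by the "slope" column.
y = (0.8 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.1, 1000) > 0.5).astype(int)

# Random 50/50 split into training and validation sets, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

logit = LogisticRegression().fit(X_tr, y_tr)
ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

acc_logit = logit.score(X_te, y_te)     # held-out accuracy, logistic regression
acc_ann = ann.score(X_te, y_te)         # held-out accuracy, neural network
```

    On real susceptibility data the same two `score` calls give the 77.05% vs 80.10% style comparison the abstract reports.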

  16. The spatial prediction of landslide susceptibility applying artificial neural network and logistic regression models: A case study of Inje, Korea

    Directory of Open Access Journals (Sweden)

    Saro Lee

    2016-02-01

    Full Text Available The aim of this study is to predict landslide susceptibility through spatial analysis, applying a statistical methodology based on GIS. Logistic regression and artificial neural network models were applied and validated to analyze landslide susceptibility in Inje, Korea. Landslide occurrence areas in the study were identified from interpretations of optical remote sensing data (aerial photographs) followed by field surveys. A spatial database covering forest, geophysical, soil and topographic data was built for the study area using the Geographical Information System (GIS). These factors were analysed using artificial neural network (ANN) and logistic regression models to generate a landslide susceptibility map. The study validates the landslide susceptibility map by comparing it with landslide occurrence areas. The locations of landslide occurrence were divided randomly into a training set (50%) and a test set (50%). The training set was used to build the landslide susceptibility map with the artificial neural network and logistic regression models, and the test set was retained to validate the prediction map. The validation results revealed that the artificial neural network model (with an accuracy of 80.10%) was better at predicting landslides than the logistic regression model (with an accuracy of 77.05%). Of the weights used in the artificial neural network model, ‘slope’ yielded the highest weight value (1.330), and ‘aspect’ yielded the lowest value (1.000). This research applied two statistical analysis methods in a GIS and compared their results. Based on the findings, we were able to derive a more effective method for analyzing landslide susceptibility.

  17. The Photogrammetric Survey Methodologies Applied to Low Cost 3d Virtual Exploration in Multidisciplinary Field

    Science.gov (United States)

    Palestini, C.; Basso, A.

    2017-11-01

    In recent years, an increase in international investment in hardware and software technology to support programs that adopt algorithms for photomodeling or data management from laser scanners has significantly reduced the costs of operations in support of Augmented Reality and Virtual Reality, designed to generate real-time explorable digital environments integrated with virtual stereoscopic headsets. The research analyzes transversal methodologies related to the acquisition of these technologies in order to intervene directly on the adoption of current VR tools within a specific workflow, in light of the issues related to the intensive use of such devices, outlining a quick overview of the possible "virtual migration" phenomenon and assuming a possible integration with new high-speed internet systems, capable of triggering a massive cyberspace colonization process that paradoxically would also affect everyday life and, more generally, human spatial perception. The contribution aims to analyze the application systems used for low-cost 3d photogrammetry by means of a precise pipeline, clarifying how a 3d model is generated, automatically retopologized, textured by color painting or photo-cloning techniques, and optimized for parametric insertion on virtual exploration platforms. The workflow analysis is followed by case studies related to photomodeling, digital retopology and "virtual 3d transfer" of some small archaeological artifacts and an architectural compartment corresponding to the pronaos of Aurum, a building designed in the 1940s by Michelucci. All operations were conducted on cheap or free licensed software that today offers almost the same performance as paid counterparts, progressively improving in data processing speed and management.

  18. Degradation of ticarcillin by subcritical water oxidation method: Application of response surface methodology and artificial neural network modeling.

    Science.gov (United States)

    Yabalak, Erdal

    2018-05-18

    This study was performed to investigate the mineralization of ticarcillin in an artificially prepared aqueous solution representing ticarcillin-contaminated waters, which constitute a serious problem for human health. 81.99% total organic carbon removal, 79.65% chemical oxygen demand removal, and 94.35% ticarcillin removal were achieved by using the eco-friendly, time-saving, powerful and easily applied subcritical water oxidation method in the presence of a safe-to-use oxidizing agent, hydrogen peroxide. Central composite design, which belongs to response surface methodology, was applied to design the degradation experiments, to optimize the method, and to evaluate the effects of the system variables, namely temperature, hydrogen peroxide concentration, and treatment time, on the responses. In addition, theoretical equations were proposed for each removal process. ANOVA tests were utilized to evaluate the reliability of the fitted models. F values of 245.79, 88.74, and 48.22 were found for total organic carbon removal, chemical oxygen demand removal, and ticarcillin removal, respectively. Moreover, artificial neural network modeling was applied to estimate the response in each case, and its predictive and optimization performance was statistically examined and compared to that of the central composite design.
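
    A central composite design feeds a second-order (quadratic) response-surface model: the response is regressed on intercept, linear, interaction and squared terms of the coded factors. The sketch below fits such a model by least squares on synthetic data; the three factors mirror the study's variables (temperature, H2O2 concentration, treatment time), but every coefficient and data point is invented for illustration.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
# Coded factor levels in [-1, 1]: temperature, H2O2 concentration, time.
X = rng.uniform(-1, 1, size=(60, 3))

def quad_design(X):
    """Second-order design matrix: intercept, linear, interaction, squared."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
    cols += [X[:, i] ** 2 for i in range(3)]
    return np.column_stack(cols)

# Invented "true" surface: removal (%) peaks inside the design region.
true_beta = np.array([80, 5, 3, 2, 1, 0.5, 0.2, -4, -2, -1])
y = quad_design(X) @ true_beta + rng.normal(0, 0.5, 60)

beta, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)
resid = y - quad_design(X) @ beta
r2 = 1 - resid.var() / y.var()          # goodness of fit of the model
```

    The fitted `beta` is the "theoretical equation" of the abstract; an ANOVA F value would then compare the regression mean square to the residual mean square of this same fit.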

  19. A G-function-based reliability-based design methodology applied to a cam roller system

    International Nuclear Information System (INIS)

    Wang, W.; Sui, P.; Wu, Y.T.

    1996-01-01

    Conventional reliability-based design optimization methods treat the reliability function as an ordinary function and apply existing mathematical programming techniques to solve the design problem. As a result, the conventional approach requires nested loops with respect to the g-function and is very time consuming. A new reliability-based design method is proposed in this paper that deals with the g-function directly instead of the reliability function. This approach has the potential of significantly reducing the number of calls for g-function calculations, since it requires only one full reliability analysis per design iteration. A cam roller system in a typical high-pressure fuel injection diesel engine is designed using both the proposed and the conventional approach. The proposed method is much more efficient for this application.

  20. Mass Movement Hazards in the Mediterranean; A review on applied techniques and methodologies

    Science.gov (United States)

    Ziade, R.; Abdallah, C.; Baghdadi, N.

    2012-04-01

    Growing populations and the expansion of settlements and life-lines over hazardous areas in the Mediterranean region have largely increased the impact of Mass Movements (MM) in both industrialized and developing countries. This trend is expected to continue in the next decades due to increased urbanization and development, continued deforestation and increased regional precipitation in MM-prone areas due to changing climatic patterns. Consequently, over the past few years, monitoring of MM has acquired great importance for the scientific community as well as the civilian one. This article begins with a discussion of MM classification and the different topographic, geologic, hydrologic and environmental impacting factors. Intrinsic (preconditioning) variables determine the susceptibility to MM, and extrinsic (triggering) factors can induce the probability of MM occurrence. The evolution of slope instability studies is charted from geodetic or observational techniques, through geotechnical field-based origins, to recent higher levels of data acquisition through Remote Sensing (RS) and Geographic Information System (GIS) techniques. Since MM detection and zoning is difficult in remote areas, RS and GIS have enabled regional studies to predominate over site-based ones, as they provide multi-temporal images and hence greatly facilitate MM monitoring. The unusual extent of the spectrum of MM makes it difficult to define a single methodology to establish MM hazard. Since the probability of occurrence of MM is one of the key components in making rational decisions for management of MM risk, scientists and engineers have developed physical parameters, equations and environmental process models that can be used as assessment tools for management, education, planning and legislative purposes. Assessment of MM is attained through various modeling approaches, mainly divided into three main sections: quantitative/Heuristic (1:2.000-1:10.000), semi-quantitative/Statistical (1

  1. Applying Costs, Risks and Values Evaluation (CRAVE) methodology to Engineering Support Request (ESR) prioritization

    Science.gov (United States)

    Joglekar, Prafulla N.

    1994-01-01

    Given a limited budget, the problem of prioritization among Engineering Support Requests (ESR's) with varied sizes, shapes, and colors is a difficult one. At the Kennedy Space Center (KSC), the recently developed 4-Matrix (4-M) method represents a step in the right direction, as it attempts to combine the traditional criteria of technical merit with the new concern for cost-effectiveness. However, the 4-M method was not adequately successful in the actual prioritization of ESR's for fiscal year 1995 (FY95). This research identifies a number of design issues that should help us to develop better methods. It emphasizes that, given the variety and diversity of ESR's, one should not expect a single method to help in the assessment of all ESR's. One conclusion is that a methodology such as Costs, Risks, and Values Evaluation (CRAVE) should be adopted. It is also clear that the development of methods such as 4-M requires input not only from engineers with technical expertise in ESR's but also from personnel with an adequate background in the theory and practice of cost-effectiveness analysis. At KSC, ESR prioritization is one part of the Ground Support Working Teams (GSWT) Integration Process. It was discovered that the more important barriers to the incorporation of cost-effectiveness considerations in ESR prioritization lie in this process. The culture of integration, and the corresponding structure of review by a committee of peers, is not conducive to the analysis and confrontation necessary in the assessment and prioritization of ESR's. Without assistance from appropriately trained analysts charged with the responsibility to analyze and be confrontational about each ESR, the GSWT steering committee will continue to make its decisions based on incomplete understanding, inconsistent numbers, and at times, colored facts. The current organizational separation of the prioritization and the funding processes is also identified as an important barrier to the

  2. An Appraisal of Social Network Theory and Analysis as Applied to Public Health: Challenges and Opportunities.

    Science.gov (United States)

    Valente, Thomas W; Pitts, Stephanie R

    2017-03-20

    The use of social network theory and analysis methods as applied to public health has expanded greatly in the past decade, yielding a significant academic literature that spans almost every conceivable health issue. This review identifies several important theoretical challenges that confront the field, which also provide opportunities for new research. These challenges include (a) measuring network influences, (b) identifying appropriate influence mechanisms, (c) the impact of social media and computerized communications, (d) the role of networks in evaluating public health interventions, and (e) ethics. Next steps for the field are outlined and the need for funding is emphasized. Recently developed network analysis techniques, technological innovations in communication, and changes in theoretical perspectives to include a focus on social and environmental behavioral influences have created opportunities for new theory and ever broader application of social networks to public health topics.

  3. Experimental and NMR theoretical methodology applied to geometric analysis of the bioactive clerodane trans-dehydrocrotonin

    Energy Technology Data Exchange (ETDEWEB)

    Soares, Breno Almeida; Firme, Caio Lima, E-mail: firme.caio@gmail.com, E-mail: caiofirme@quimica.ufrn.br [Universidade Federal do Rio Grande do Norte (UFRN), Natal, RN (Brazil). Instituto de Quimica; Maciel, Maria Aparecida Medeiros [Universidade Potiguar, Natal, RN (Brazil). Programa de Pos-graduacao em Biotecnologia; Kaiser, Carlos R. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Instituto de Quimica; Schilling, Eduardo; Bortoluzzi, Adailton J. [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil). Departamento de Quimica

    2014-04-15

    trans-Dehydrocrotonin (t-DCTN), a bioactive 19-nor-diterpenoid of the clerodane type isolated from Croton cajucara Benth, is one of the most investigated clerodanes in the current literature. In this work, a new approach joining X-ray diffraction data, nuclear magnetic resonance (NMR) data and theoretical calculations was applied to the thorough characterization of t-DCTN. For that, the geometry of t-DCTN was reevaluated by X-ray diffraction as well as {sup 1}H and {sup 13}C NMR data, whose geometrical parameters were compared to those obtained from the B3LYP/6-311G++(d,p) level of theory. From the evaluation of both calculated and experimental values of {sup 1}H and {sup 13}C NMR chemical shifts and spin-spin coupling constants, very good correlations were found between theoretical and experimental magnetic properties of t-DCTN. Additionally, the delocalization indexes between hydrogen atoms correlated accurately with theoretical and experimental spin-spin coupling constants. An additional topological analysis from the quantum theory of atoms in molecules (QTAIM) showed intramolecular interactions for t-DCTN. (author)

  4. Creep lifing methodologies applied to a single crystal superalloy by use of small scale test techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jeffs, S.P., E-mail: s.p.jeffs@swansea.ac.uk [Institute of Structural Materials, Swansea University, Singleton Park SA2 8PP (United Kingdom); Lancaster, R.J. [Institute of Structural Materials, Swansea University, Singleton Park SA2 8PP (United Kingdom); Garcia, T.E. [IUTA (University Institute of Industrial Technology of Asturias), University of Oviedo, Edificio Departamental Oeste 7.1.17, Campus Universitario, 33203 Gijón (Spain)

    2015-06-11

    In recent years, advances in creep data interpretation have been achieved either by modified Monkman–Grant relationships or through the more contemporary Wilshire equations, which offer the opportunity of predicting long-term behaviour extrapolated from short-term results. Long-term lifing techniques prove extremely useful in creep-dominated applications, such as in the power generation industry and in particular nuclear, where large static loads are applied; equally, a reduction in lead time for new alloy implementation within the industry is critical. The latter requirement brings about the utilisation of the small punch (SP) creep test, a widely recognised approach for obtaining useful mechanical property information from limited material volumes, as is typically the case with novel alloy development and for any in-situ mechanical testing that may be required. The ability to correlate SP creep results with uniaxial data is vital when considering the benefits of the technique. As such, an equation has been developed, known as the k{sub SP} method, which has been proven to be an effective tool across several material systems. The current work explores the application of the aforementioned empirical approaches to correlate small punch creep data obtained on a single crystal superalloy over a range of elevated temperatures. Finite element modelling through ABAQUS software based on the uniaxial creep data has also been implemented to characterise the SP deformation and help corroborate the experimental results.

  5. Creep lifing methodologies applied to a single crystal superalloy by use of small scale test techniques

    International Nuclear Information System (INIS)

    Jeffs, S.P.; Lancaster, R.J.; Garcia, T.E.

    2015-01-01

    In recent years, advances in creep data interpretation have been achieved either by modified Monkman–Grant relationships or through the more contemporary Wilshire equations, which offer the opportunity of predicting long-term behaviour extrapolated from short-term results. Long-term lifing techniques prove extremely useful in creep-dominated applications, such as in the power generation industry and in particular nuclear, where large static loads are applied; equally, a reduction in lead time for new alloy implementation within the industry is critical. The latter requirement brings about the utilisation of the small punch (SP) creep test, a widely recognised approach for obtaining useful mechanical property information from limited material volumes, as is typically the case with novel alloy development and for any in-situ mechanical testing that may be required. The ability to correlate SP creep results with uniaxial data is vital when considering the benefits of the technique. As such, an equation has been developed, known as the k_SP method, which has been proven to be an effective tool across several material systems. The current work explores the application of the aforementioned empirical approaches to correlate small punch creep data obtained on a single crystal superalloy over a range of elevated temperatures. Finite element modelling through ABAQUS software based on the uniaxial creep data has also been implemented to characterise the SP deformation and help corroborate the experimental results.

  6. Experimental and NMR theoretical methodology applied to geometric analysis of the bioactive clerodane trans-dehydrocrotonin

    International Nuclear Information System (INIS)

    Soares, Breno Almeida; Firme, Caio Lima; Maciel, Maria Aparecida Medeiros; Kaiser, Carlos R.; Schilling, Eduardo; Bortoluzzi, Adailton J.

    2014-01-01

    trans-Dehydrocrotonin (t-DCTN), a bioactive 19-nor-diterpenoid of the clerodane type isolated from Croton cajucara Benth, is one of the most investigated clerodanes in the current literature. In this work, a new approach joining X-ray diffraction data, nuclear magnetic resonance (NMR) data and theoretical calculations was applied to the thorough characterization of t-DCTN. For that, the geometry of t-DCTN was reevaluated by X-ray diffraction as well as 1H and 13C NMR data, whose geometrical parameters were compared to those obtained from the B3LYP/6-311G++(d,p) level of theory. From the evaluation of both calculated and experimental values of 1H and 13C NMR chemical shifts and spin-spin coupling constants, very good correlations were found between theoretical and experimental magnetic properties of t-DCTN. Additionally, the delocalization indexes between hydrogen atoms correlated accurately with theoretical and experimental spin-spin coupling constants. An additional topological analysis from the quantum theory of atoms in molecules (QTAIM) showed intramolecular interactions for t-DCTN. (author)

  7. A SIMULATION OF THE PENICILLIN G PRODUCTION BIOPROCESS APPLYING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    A.J.G. da Cruz

    1997-12-01

    Full Text Available The production of penicillin G by Penicillium chrysogenum IFO 8644 was simulated employing a feedforward neural network with three layers. The neural network training procedure used an algorithm combining two procedures: random search and backpropagation. The results of this approach were very promising, and it was observed that the neural network was able to accurately describe the nonlinear behavior of the process. Moreover, the results showed that this technique can be successfully applied in process control algorithms, owing to its short processing time and its flexibility in the incorporation of new data.
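
    The hybrid training idea (random search to seed the weights, back-propagation to refine them) can be sketched on a toy one-dimensional fit. Everything below is assumed for illustration: the network size, the number of random trials, and the target curve standing in for a process variable are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 50).reshape(-1, 1)      # normalized cultivation time
y = np.sin(2 * np.pi * t) * 0.4 + 0.5         # stand-in for a process variable

def forward(params, t):
    W1, b1, W2, b2 = params
    h = np.tanh(t @ W1 + b1)
    return h @ W2 + b2

def loss(params):
    return float(np.mean((forward(params, t) - y) ** 2))

def random_params():
    return [rng.normal(0, 1, (1, 6)), rng.normal(0, 1, 6),
            rng.normal(0, 1, (6, 1)), rng.normal(0, 1, 1)]

# Stage 1: random search keeps the best of many random weight settings.
best = min((random_params() for _ in range(200)), key=loss)
loss_before = loss(best)

# Stage 2: back-propagation (plain gradient descent) refines those weights.
lr = 0.1
W1, b1, W2, b2 = best
for _ in range(2000):
    h = np.tanh(t @ W1 + b1)
    out = h @ W2 + b2
    d_out = 2 * (out - y) / len(t)            # mean-squared-error gradient
    d_h = (d_out @ W2.T) * (1 - h ** 2)       # back-propagated hidden delta
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * t.T @ d_h;   b1 -= lr * d_h.sum(0)
loss_after = loss([W1, b1, W2, b2])
```

    The random-search stage guards against poor starting points; back-propagation then does the fine fitting that random search alone would reach only slowly.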

  8. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    International Nuclear Information System (INIS)

    Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.

    2017-01-01

    Here, we present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. Lastly, we also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  9. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    Energy Technology Data Exchange (ETDEWEB)

    Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.; Bagby, L.; Baller, B.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Bugel, L.; Camilleri, L.; Caratelli, D.; Carls, B.; Fernandez, R. Castillo; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Sanchez, L. Escudero; Esquivel, J.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; James, C.; de Vries, J. Jan; Jen, C. -M.; Jiang, L.; Johnson, R. A.; Jones, B. J. P.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Caicedo, D. A. Martinez; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; von Rohr, C. Rudolf; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y. -T.; Tufanli, S.; Usher, T.; Van de Water, R. G.; Viren, B.; Weber, M.; Weston, J.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2017-03-01

    We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  10. Group method of data handling and neural networks applied in monitoring and fault detection in sensors in nuclear power plants

    International Nuclear Information System (INIS)

    Bueno, Elaine Inacio

    2011-01-01

    The increasing demand for complexity, efficiency and reliability in modern industrial systems has stimulated studies on control theory applied to the development of monitoring and fault detection systems. In this work a new monitoring and fault detection methodology was developed using the GMDH (Group Method of Data Handling) algorithm and Artificial Neural Networks (ANNs), and applied to the IEA-R1 research reactor at IPEN. The monitoring and fault detection system was developed in two parts: the first preprocesses information using the GMDH algorithm; the second processes the information using ANNs. The GMDH algorithm was used in two different ways: first, to generate a better estimated database, called matrix z, which was used to train the ANNs; then, to find the best set of input variables for training the ANNs, yielding the best estimate of the monitored variables. The methodology was developed and tested using five different models: one theoretical model and four models using different sets of reactor variables. After an exhaustive study of sensor monitoring, fault detection was developed by simulating faults of 5%, 10%, 15% and 20% in the sensor database. The results obtained using the GMDH algorithm to choose the best input variables for the ANNs were better than those obtained using ANNs alone, making possible the use of these methods in the implementation of a new monitoring and fault detection methodology applied to sensors. (author)
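
    The fault detection step described above compares each sensor reading with its model estimate and flags large residuals. A minimal sketch of that idea, using the 5-20% simulated fault offsets mentioned in the abstract; the threshold is hypothetical and the GMDH/ANN estimate is stubbed out as a constant:

```python
def detect_fault(measured, estimated, threshold=0.05):
    """Flag a sensor as faulty when the relative residual exceeds a threshold."""
    residual = abs(measured - estimated) / abs(estimated)
    return residual > threshold

# Simulated faults as in the abstract: offsets of 5%, 10%, 15% and 20%
# applied to a reading whose model estimate (GMDH/ANN in the paper) is 100.0.
estimate = 100.0
faults = {pct: detect_fault(100.0 * (1 + pct / 100), estimate)
          for pct in (0, 5, 10, 15, 20)}
```

The interesting engineering is in producing a good `estimated` value; the residual test itself is this simple.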

  11. Applying long short-term memory recurrent neural networks to intrusion detection

    Directory of Open Access Journals (Sweden)

    Ralf C. Staudemeyer

    2015-07-01

    Full Text Available We claim that modelling network traffic as a time series with a supervised learning approach, using known genuine and malicious behaviour, improves intrusion detection. To substantiate this, we trained long short-term memory (LSTM) recurrent neural networks with the training data provided by the DARPA / KDD Cup ’99 challenge. To identify suitable LSTM-RNN parameters and structure we experimented with various network topologies. We found that networks with four memory blocks containing two cells each offer a good compromise between computational cost and detection performance. We applied forget gates and shortcut connections, respectively. A learning rate of 0.1 and up to 1,000 epochs showed good results. We tested the performance on all features and on extracted minimal feature sets, respectively. We evaluated different feature sets for detecting all attacks within one network, and also trained networks specialised on individual attack classes. Our results show that the LSTM classifier provides superior performance in comparison to previously published results for strong static classifiers. With 93.82% accuracy and a cost of 22.13, LSTM outperforms the winning entries of the KDD Cup ’99 challenge by far. This is because LSTM learns to look back in time and correlate consecutive connection records. For the first time, we have demonstrated the usefulness of LSTM networks for intrusion detection.
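
    The memory cells referred to above combine input, forget and output gates to carry state across a sequence of connection records. A rough stdlib-only sketch of one step of a single scalar LSTM unit; the weights are made up, not the trained DARPA/KDD model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    """One step of a single-unit LSTM cell with a forget gate.
    w holds the input/recurrent weights and biases of the four gates."""
    i = sigmoid(w["wi"] * x + w["ui"] * h + w["bi"])    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h + w["bf"])    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h + w["bg"])  # candidate cell value
    c_new = f * c + i * g          # cell state carries long-term memory
    h_new = o * math.tanh(c_new)   # hidden state exposed to the next layer
    return h_new, c_new

# Toy weights; a real intrusion detector learns them from labelled records.
w = {k: 0.5 for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for x in [1.0, 0.0, 1.0]:  # a short "time series" of feature values
    h, c = lstm_step(x, h, c, w)
```

The forget gate `f` is what lets the cell decide how much of its accumulated state to keep at each record, which is what "looking back in time" amounts to.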

  12. Why common carrier and network neutrality principles apply to the Nationwide Health Information Network (NWHIN).

    Science.gov (United States)

    Gaynor, Mark; Lenert, Leslie; Wilson, Kristin D; Bradner, Scott

    2014-01-01

    The Office of the National Coordinator will be defining the architecture of the Nationwide Health Information Network (NWHIN) together with the proposed HealtheWay public/private partnership as a development and funding strategy. There are a number of open questions--for example, what is the best way to realize the benefits of health information exchange? How valuable are regional health information organizations in comparison with a more direct approach? What is the role of the carriers in delivering this service? The NWHIN is to exist for the public good, and thus shares many traits of the common law notion of 'common carriage' or 'public calling,' the modern term for which is network neutrality. Recent policy debates in Congress and resulting potential regulation have implications for key stakeholders within healthcare that use or provide services, and for those who exchange information. To date, there has been little policy debate or discussion about the implications of a neutral NWHIN. This paper frames the discussion for future policy debate in healthcare by providing a brief education and summary of the modern version of common carriage, of the key stakeholder positions in healthcare, and of the potential implications of the network neutrality debate within healthcare.

  13. The Private Lives of Minerals: Social Network Analysis Applied to Mineralogy and Petrology

    Science.gov (United States)

    Hazen, R. M.; Morrison, S. M.; Fox, P. A.; Golden, J. J.; Downs, R. T.; Eleish, A.; Prabhu, A.; Li, C.; Liu, C.

    2016-12-01

    Comprehensive databases of mineral species (rruff.info/ima) and their geographic localities and co-existing mineral assemblages (mindat.org) reveal patterns of mineral association and distribution that mimic social networks, as commonly applied to such varied topics as social media interactions, the spread of disease, terrorism networks, and research collaborations. Applying social network analysis (SNA) to common assemblages of rock-forming igneous and regional metamorphic mineral species, we find patterns of cohesion, segregation, density, and cliques that are similar to those of human social networks. These patterns highlight classic trends in lithologic evolution and are illustrated with sociograms, in which mineral species are the "nodes" and co-existing species form "links." Filters based on chemistry, age, structural group, and other parameters highlight visually both familiar and new aspects of mineralogy and petrology. We quantify sociograms with SNA metrics, including connectivity (based on the frequency of co-occurrence of mineral pairs), homophily (the extent to which co-existing mineral species share compositional and other characteristics), network closure (based on the degree of network interconnectivity), and segmentation (as revealed by isolated "cliques" of mineral species). Exploitation of large and growing mineral data resources with SNA offers promising avenues for discovering previously hidden trends in mineral diversity-distribution systematics, as well as providing new pedagogical approaches to teaching mineralogy and petrology.
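
    The sociograms described above treat mineral species as nodes and co-occurrence as links. A small sketch of how such a network and two of the cited metrics (node degree and network density) could be derived from assemblage lists; the assemblages below are invented for illustration, not mindat.org data:

```python
from itertools import combinations

# Hypothetical co-existing mineral assemblages (one set per locality).
assemblages = [
    {"quartz", "albite", "muscovite"},
    {"quartz", "albite", "biotite"},
    {"quartz", "garnet", "biotite"},
]

# Nodes are species; a link means two species co-occur in some assemblage.
edges = set()
for locality in assemblages:
    edges.update(frozenset(pair) for pair in combinations(sorted(locality), 2))

nodes = {m for e in edges for m in e}
degree = {m: sum(m in e for e in edges) for m in nodes}
n = len(nodes)
density = 2 * len(edges) / (n * (n - 1))  # fraction of possible links realized
```

Connectivity in the SNA sense would further weight each link by how often the pair co-occurs; this sketch only records presence or absence of a link.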

  14. Methodological novelties applied to the anthropology of food: agent-based models and social networks analysis

    Directory of Open Access Journals (Sweden)

    Diego Díaz Córdova

    2016-12-01

    Full Text Available En este artículo presentamos dos modalidades metodológicas que aún no han sido muy utilizadas en la antropología alimentaria. Por un lado, nos referimos al análisis de redes sociales y, por otro, a los modelos basados en agentes. Para ilustrar los métodos, tomaremos dos casos de materiales clásicos de la antropología alimentaria. Para el primero usaremos los platos de comida de un relevamiento hecho en la Quebrada de Humahuaca (provincia de Jujuy, Argentina y, para el segundo, utilizaremos algunos elementos del concepto aplicado por Aguirre de “estrategias domésticas de consumo”. La idea subyacente es que, dado que la alimentación se reconoce como un “hecho social total” y, por lo tanto, como un fenómeno complejo, el abordaje metodológico debe seguir necesariamente esa misma característica. Mientras más métodos utilicemos (con el grado de rigor adecuado mejor estaremos preparados para comprender la dinámica alimentaria en el medio social.

  15. Exploring Peer Relationships, Friendships and Group Work Dynamics in Higher Education: Applying Social Network Analysis

    Science.gov (United States)

    Mamas, Christoforos

    2018-01-01

    This study primarily applied social network analysis (SNA) to explore the relationship between friendships, peer social interactions and group work dynamics within a higher education undergraduate programme in England. A critical case study design was adopted so as to allow for an in-depth exploration of the students' voice. In doing so, the views…

  16. Investigation of rotated PCA from the perspective of network communities applied to climate data

    Czech Academy of Sciences Publication Activity Database

    Hartman, David; Hlinka, Jaroslav; Vejmelka, Martin; Paluš, Milan

    2013-01-01

    Roč. 15, - (2013), s. 13124 ISSN 1607-7962. [European Geosciences Union General Assembly 2013. 07.04.2013-12.04.2013, Vienna] R&D Projects: GA ČR GCP103/11/J068 Institutional support: RVO:67985807 Keywords : complex networks * graph theory * climate dynamics Subject RIV: BB - Applied Statistics, Operational Research

  17. Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction

    NARCIS (Netherlands)

    Zhao, Yu; Yang, Rennong; Chevalier, Guillaume; Shah, Rajiv C.; Romijnders, Rob

    2018-01-01

    Data analytics helps basketball teams to create tactics. However, manual data collection and analytics are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a

  18. Neural networks applied to inverters control; Les reseaux de neurones appliques a la commande des convertisseurs

    Energy Technology Data Exchange (ETDEWEB)

    Jammes, B; Marpinard, J C

    1996-12-31

    Neural networks are scarcely applied to power electronics. This attempt includes two different topics: optimal control and computerized simulation. The learning has been performed through output error feedback. For implementation, a buck converter has been used as a voltage pulse generator. (D.L.) 7 refs.

  19. [Optimization of calcium alginate floating microspheres loading aspirin by artificial neural networks and response surface methodology].

    Science.gov (United States)

    Zhang, An-yang; Fan, Tian-yuan

    2010-04-18

    To investigate the preparation and optimization of calcium alginate floating microspheres loading aspirin, a model was built to predict the in vitro release of aspirin and to optimize the formulation by artificial neural networks (ANNs) and response surface methodology (RSM). The amounts of the materials in the formulation were used as inputs, while the release and floating rate of the microspheres were used as outputs. The performances of ANNs and RSM were compared: ANNs were more accurate in prediction, while there was no significant difference between ANNs and RSM in optimization. Approximately 90% of the optimized microspheres could float on artificial gastric juice for over 4 hours. 42.12% of the aspirin was released in 60 min, 60.97% in 120 min and 78.56% in 240 min. The release of the drug from the microspheres complied with the Higuchi equation. Aspirin floating microspheres with satisfactory in vitro release were prepared successfully by the methods of ANNs and RSM.
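
    The Higuchi equation mentioned above models cumulative release as proportional to the square root of time, Q(t) = k·sqrt(t). A brief sketch fitting k to the three release points reported in the abstract (a least-squares fit through the origin; the fit is ours for illustration, not the authors' analysis):

```python
import math

# Released fractions reported above for the optimized microspheres.
data = [(60, 42.12), (120, 60.97), (240, 78.56)]  # (minutes, % released)

# Higuchi model: Q(t) = k * sqrt(t). Least squares through the origin gives
# k = sum(Q_i * sqrt(t_i)) / sum(t_i).
k = sum(q * math.sqrt(t) for t, q in data) / sum(t for t, _ in data)

predicted = {t: k * math.sqrt(t) for t, _ in data}  # model vs. reported values
```

That the three points sit close to a single k is what "complied with the Higuchi equation" asserts.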

  20. Applying network theory to prioritize multispecies habitat networks that are robust to climate and land-use change.

    Science.gov (United States)

    Albert, Cécile H; Rayfield, Bronwyn; Dumitru, Maria; Gonzalez, Andrew

    2017-12-01

    Designing connected landscapes is among the most widespread strategies for achieving biodiversity conservation targets. The challenge lies in simultaneously satisfying the connectivity needs of multiple species at multiple spatial scales under uncertain climate and land-use change. To evaluate the contribution of remnant habitat fragments to the connectivity of regional habitat networks, we developed a method to integrate uncertainty in climate and land-use change projections with the latest developments in network-connectivity research and spatial, multipurpose conservation prioritization. We used land-use change simulations to explore robustness of species' habitat networks to alternative development scenarios. We applied our method to 14 vertebrate focal species of periurban Montreal, Canada. Accounting for connectivity in spatial prioritization strongly modified conservation priorities and the modified priorities were robust to uncertain climate change. Setting conservation priorities based on habitat quality and connectivity maintained a large proportion of the region's connectivity, despite anticipated habitat loss due to climate and land-use change. The application of connectivity criteria alongside habitat-quality criteria for protected-area design was efficient with respect to the amount of area that needs protection and did not necessarily amplify trade-offs among conservation criteria. Our approach and results are being applied in and around Montreal and are well suited to the design of ecological networks and green infrastructure for the conservation of biodiversity and ecosystem services in other regions, in particular regions around large cities, where connectivity is critically low. © 2017 Society for Conservation Biology.
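
    One simple way to score a fragment's contribution to network connectivity, in the spirit of the prioritization above though far simpler than the authors' method, is to count how many pieces the habitat graph breaks into when that patch is removed. A toy sketch with hypothetical patches and dispersal links:

```python
# Hypothetical habitat network: nodes are patches, edges are dispersal links.
links = {("A", "B"), ("B", "C"), ("C", "D"), ("B", "E")}
patches = {p for link in links for p in link}

def components(nodes, edges):
    """Number of connected components (simple union-find)."""
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(n) for n in nodes})

# Rank each patch by how much its loss fragments the network, one basic
# connectivity criterion that could feed a spatial prioritization.
impact = {p: components(patches - {p},
                        {e for e in links if p not in e})
          for p in patches}
```

Here patch "B" is the critical stepping stone: removing it splits the network into three pieces, while leaf patches leave it intact.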

  1. Ant colony optimization and neural networks applied to nuclear power plant monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Gean Ribeiro dos; Andrade, Delvonei Alves de; Pereira, Iraci Martinez, E-mail: gean@usp.br, E-mail: delvonei@ipen.br, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    A recurring challenge in production processes is the development of monitoring and diagnosis systems. Those systems help detect unexpected changes and interruptions, preventing losses and mitigating risks. Artificial Neural Networks (ANNs) have been extensively used in creating monitoring systems. Usually the ANNs created to solve this kind of problem take into account only parameters such as the number of inputs, outputs, and hidden layers. The resulting networks are generally fully connected, with no refinement of their topology. This work intends to use an Ant Colony Optimization (ACO) algorithm to create a tuned neural network. The ACO search algorithm will use Back Error Propagation (BP) to optimize the network topology by suggesting the best neuron connections. The resulting ANN will be applied to monitoring the IEA-R1 research reactor at IPEN. (author)
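
    The ACO search sketched above reinforces neuron connections that appear in well-performing topologies. A deterministic toy version of the pheromone update; the topology "errors" below are stand-ins for the backpropagation evaluation described in the abstract, and all names are hypothetical:

```python
# Candidate inter-layer connections; "ants" propose topologies (subsets of
# them) and low-error topologies deposit more pheromone on their links.
candidates = ["i1-h1", "i1-h2", "i2-h1", "i2-h2"]
pheromone = {c: 1.0 for c in candidates}
rho = 0.5  # evaporation rate

# Stand-in evaluations: in the paper each proposed topology is trained with
# backpropagation and scored by its error.
proposals = [
    (["i1-h1", "i2-h2"], 0.10),
    (["i1-h2"], 0.60),
    (["i1-h1", "i2-h1"], 0.15),
]

for topology, error in proposals:
    for c in pheromone:
        pheromone[c] *= (1 - rho)      # evaporation on every link
    for c in topology:
        pheromone[c] += (1 - error)    # deposit, larger for lower error

best = max(pheromone, key=pheromone.get)  # most reinforced connection
```

In a full ACO loop the ants would also *sample* topologies in proportion to pheromone; this shows only the evaporation/deposit bookkeeping that concentrates pheromone on useful connections.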

  2. Classification of brain compartments and head injury lesions by neural networks applied to MRI

    International Nuclear Information System (INIS)

    Kischell, E.R.; Kehtarnavaz, N.; Hillman, G.R.; Levin, H.; Lilly, M.; Kent, T.A.

    1995-01-01

    An automatic, neural network-based approach was applied to segment normal brain compartments and lesions on MR images. Two supervised networks, backpropagation (BPN) and counterpropagation, and two unsupervised networks, Kohonen learning vector quantizer and analog adaptive resonance theory, were trained on registered T2-weighted and proton density images. The classes of interest were background, gray matter, white matter, cerebrospinal fluid, macrocystic encephalomalacia, gliosis, and 'unknown'. A comprehensive feature vector was chosen to discriminate these classes. The BPN combined with feature conditioning, multiple discriminant analysis followed by Hotelling transform, produced the most accurate and consistent classification results. Classifications of normal brain compartments were generally in agreement with expert interpretation of the images. Macrocystic encephalomalacia and gliosis were recognized and, except around the periphery, classified in agreement with the clinician's report used to train the neural network. (orig.)

  3. Ant colony optimization and neural networks applied to nuclear power plant monitoring

    International Nuclear Information System (INIS)

    Santos, Gean Ribeiro dos; Andrade, Delvonei Alves de; Pereira, Iraci Martinez

    2015-01-01

    A recurring challenge in production processes is the development of monitoring and diagnosis systems. Those systems help detect unexpected changes and interruptions, preventing losses and mitigating risks. Artificial Neural Networks (ANNs) have been extensively used in creating monitoring systems. Usually the ANNs created to solve this kind of problem take into account only parameters such as the number of inputs, outputs, and hidden layers. The resulting networks are generally fully connected, with no refinement of their topology. This work intends to use an Ant Colony Optimization (ACO) algorithm to create a tuned neural network. The ACO search algorithm will use Back Error Propagation (BP) to optimize the network topology by suggesting the best neuron connections. The resulting ANN will be applied to monitoring the IEA-R1 research reactor at IPEN. (author)

  4. Classification of brain compartments and head injury lesions by neural networks applied to MRI

    Energy Technology Data Exchange (ETDEWEB)

    Kischell, E R [Dept. of Electrical Engineering, Texas A and M Univ., College Station, TX (United States); Kehtarnavaz, N [Dept. of Electrical Engineering, Texas A and M Univ., College Station, TX (United States); Hillman, G R [Dept. of Pharmacology, Univ. of Texas Medical Branch, Galveston, TX (United States); Levin, H [Dept. of Neurosurgery, Univ. of Texas Medical Branch, Galveston, TX (United States); Lilly, M [Dept. of Neurosurgery, Univ. of Texas Medical Branch, Galveston, TX (United States); Kent, T A [Dept. of Neurology and Psychiatry, Univ. of Texas Medical Branch, Galveston, TX (United States)

    1995-10-01

    An automatic, neural network-based approach was applied to segment normal brain compartments and lesions on MR images. Two supervised networks, backpropagation (BPN) and counterpropagation, and two unsupervised networks, Kohonen learning vector quantizer and analog adaptive resonance theory, were trained on registered T2-weighted and proton density images. The classes of interest were background, gray matter, white matter, cerebrospinal fluid, macrocystic encephalomalacia, gliosis, and 'unknown'. A comprehensive feature vector was chosen to discriminate these classes. The BPN combined with feature conditioning, multiple discriminant analysis followed by Hotelling transform, produced the most accurate and consistent classification results. Classifications of normal brain compartments were generally in agreement with expert interpretation of the images. Macrocystic encephalomalacia and gliosis were recognized and, except around the periphery, classified in agreement with the clinician's report used to train the neural network. (orig.)

  5. 'Intelligent' triggering methodology for improved detectability of wavelength modulation diode laser absorption spectrometry applied to window-equipped graphite furnaces

    International Nuclear Information System (INIS)

    Gustafsson, Joergen; Axner, Ove

    2003-01-01

    The wavelength modulation diode laser absorption spectrometry (WM-DLAS) technique experiences limited detectability when window-equipped sample compartments are used, because of multiple reflections between components in the optical system (so-called etalon effects). The problem is particularly severe when the technique is used with a window-equipped graphite furnace (GF) as atomizer, since the heating of the furnace induces drifts in the thickness of the windows and thereby also in the background signals. This paper presents a new detection methodology for WM-DLAS applied to a window-equipped GF in which the influence of the background signals from the windows is significantly reduced. The new technique, based on the finding that the WM-DLAS background signals from a window-equipped GF are reproducible over a considerable period of time, consists of a novel 'intelligent' triggering procedure in which the GF is triggered at a user-chosen 'position' in the reproducible drift cycle of the WM-DLAS background signal. The new methodology also makes use of higher-than-normal detection harmonics, i.e. 4f or 6f, since these have previously been shown to have a higher signal-to-background ratio than 2f detection when the background signals originate from thin etalons. The results show that this new combined background-drift-reducing methodology improves the limit of detection of the WM-DLAS technique used with a window-equipped GF by several orders of magnitude compared with ordinary 2f detection, resulting in a limit of detection for a window-equipped GF similar to that of an open GF

  6. MODELING AND STRUCTURING OF ENTERPRISE MANAGEMENT SYSTEM RESORT SPHERE BASED ON ELEMENTS OF NEURAL NETWORK THEORY: THE METHODOLOGICAL BASIS

    Directory of Open Access Journals (Sweden)

    Rena R. Timirualeeva

    2015-01-01

    Full Text Available The article describes a methodology for modeling and structuring enterprise management systems based on elements of neural network theory. It takes into account environmental factors at the mega-, macro- and meso-levels, the internal state of the managed system, and errors in the execution of management commands by the control system. The proposed methodology can improve the quality of management of resort-complex enterprises through a more flexible response to changes in the parameters of the internal and external environments.

  7. Application of the PISC results and methodology to assess the effectiveness of NDT techniques applied on non nuclear components

    International Nuclear Information System (INIS)

    Maciga, G.; Papponetti, M.; Crutzen, S.; Jehenson, P.

    1990-01-01

    Performance demonstration for NDT has been an active topic for several years. Interest came to the fore in the early 1980s, when several institutions proposed the use of realistic training assemblies and the formal approach of validation centers. These steps were justified, for example, by the results of the PISC exercises, which concluded that there was a need for performance demonstration, starting with capability assessment of techniques and procedures as they were routinely applied. Although the PISC programme falls under a general ''nuclear motivation'', the PISC methodology could be extended to structural components in general, such as those in conventional power plants and the chemical, aerospace and offshore industries, where integrity and safety are regarded as being of great importance. Some themes of NDT inspection of fossil power plant and offshore components that could be objects of validation studies are illustrated. (author)

  8. Application of Bayesian network methodology to the probabilistic risk assessment of nuclear waste disposal facility

    International Nuclear Information System (INIS)

    Lee, Chang Ju

    2006-02-01

    A scenario in a risk analysis can be defined as the propagating feature of a specific initiating event, which can lead to a wide range of undesirable consequences. Taking various scenarios into consideration makes the risk analysis more complex than it would be without them. Many risk analyses have been performed to estimate a risk profile under uncertain future states of hazard sources and undesirable scenarios. Unfortunately, for stochastic passive systems such as a radioactive waste disposal facility, the behaviour of future scenarios can hardly be predicted without a special reasoning process, so their risk cannot be estimated with a traditional risk analysis methodology alone. Moreover, it is believed that the sources of uncertainty about future states can be pertinently reduced by setting up dependency relationships interrelating the geological, hydrological, and ecological aspects of the site with all the scenarios. The current methodology of uncertainty analysis for waste disposal facilities should therefore be revisited in this light. In order to consider the effects predicted from an evolution of the environmental conditions of waste disposal facilities, this study proposes a quantitative assessment framework integrating the inference process of a Bayesian network into traditional probabilistic risk analysis. In this study an approximate probabilistic inference program for the specific Bayesian network was developed and verified using a bounded-variance likelihood weighting algorithm. Ultimately, specific models, including a Monte-Carlo model for uncertainty propagation of relevant parameters, were developed with a comparison of variable-specific effects due to the occurrence of diverse altered evolution scenarios (AESs). After providing supporting information to obtain a variety of quantitative expectations about the dependency relationship between domain variables and AESs, this study could connect the results of probabilistic
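
    The abstract mentions a bounded-variance likelihood weighting algorithm. Plain likelihood weighting, which that algorithm refines, samples the non-evidence nodes of a Bayesian network and weights each sample by the likelihood of the observed evidence. A minimal sketch on a hypothetical two-node network (scenario -> contamination) with made-up probabilities:

```python
import random

random.seed(0)

# Tiny two-node network with hypothetical conditional probability tables.
p_scenario = 0.3                      # P(altered evolution scenario occurs)
p_contam = {True: 0.8, False: 0.1}    # P(contamination | scenario)

def likelihood_weighting(evidence_contam, n=100000):
    """Estimate P(scenario | contamination = evidence) from weighted samples."""
    weights = {True: 0.0, False: 0.0}
    for _ in range(n):
        s = random.random() < p_scenario        # sample the non-evidence node
        w = p_contam[s] if evidence_contam else 1 - p_contam[s]
        weights[s] += w                         # weight by evidence likelihood
    return weights[True] / (weights[True] + weights[False])

posterior = likelihood_weighting(True)
# Exact Bayes answer here: 0.3*0.8 / (0.3*0.8 + 0.7*0.1) = 0.24/0.31
```

The bounded-variance variant controls the spread of these weights; the sampling-and-weighting core is the same.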

  9. A methodology for a quantitative assessment of safety culture in NPPs based on Bayesian networks

    International Nuclear Information System (INIS)

    Kim, Young Gab; Lee, Seung Min; Seong, Poong Hyun

    2017-01-01

    Highlights: • A safety culture framework and a quantitative methodology to assess safety culture are proposed. • The relation between the norm system, the safety management system and workers' awareness is established. • The safety culture probability at NPPs was updated by collecting actual organizational data. • Vulnerable areas and the relationship between safety culture and human error were confirmed. - Abstract: For a long time, safety has been recognized as a top priority in high-reliability industries such as aviation and nuclear power plants (NPPs). Establishing a safety culture requires a number of actions to enhance safety, one of which is changing the safety culture awareness of workers. The concept of safety culture in the nuclear power domain was established in the International Atomic Energy Agency (IAEA) safety series, wherein the importance of employee attitudes for maintaining organizational safety was emphasized. Safety culture assessment is a critical step in the process of enhancing safety culture. In this respect, assessment focuses on measuring the level of safety culture in an organization and improving any weaknesses. However, many continue to think that the concept of safety culture is abstract and unclear, and the results of safety culture assessments are mostly subjective and qualitative. Given this situation, this paper suggests a quantitative methodology for safety culture assessment based on a Bayesian network. The proposed safety culture framework for NPPs includes: (1) a norm system, (2) a safety management system, (3) the safety culture awareness of workers, and (4) worker behavior. The level of safety culture awareness of workers at NPPs was inferred through the proposed methodology. Then, areas of the organization that were vulnerable in terms of safety culture were derived by analyzing observational evidence. We also confirmed that the frequency of events involving human error

  10. The Methodology Applied in DPPH, ABTS and Folin-Ciocalteau Assays Has a Large Influence on the Determined Antioxidant Potential.

    Science.gov (United States)

    Abramovič, Helena; Grobin, Blaž; Poklar, Nataša; Cigić, Blaž

    2017-06-01

    Antioxidant potential (AOP) is not only the property of the matrix analyzed but also depends greatly on the methodology used. The chromogenic radicals 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS•+), 2,2-diphenyl-1-picrylhydrazyl (DPPH•) and Folin-Ciocalteu (FC) assay were applied to estimate how the method and the composition of the assay solvent influence the AOP determined for coffee, tea, beer, apple juice and dietary supplements. Large differences between the AOP values depending on the reaction medium were observed, with the highest AOP determined mostly in the FC assay. In reactions with chromogenic radicals several fold higher values of AOP were obtained in buffer pH 7.4 than in water or methanol. The type of assay and solvent composition have similar influences on the reactivity of a particular antioxidant, either pure or as part of a complex matrix. The reaction kinetics of radicals with antioxidants in samples reveals that AOP depends strongly on incubation time, yet differently for each sample analyzed and the assay applied.

  11. A methodology for treating missing data applied to daily rainfall data in the Candelaro River Basin (Italy).

    Science.gov (United States)

    Lo Presti, Rossella; Barca, Emanuele; Passarella, Giuseppe

    2010-01-01

    Environmental time series are often affected by the presence of missing data, but when dealing with the data statistically, the gaps must be filled by estimating the missing values. At present, a large number of statistical techniques are available to achieve this objective; they range from very simple methods, such as using the sample mean, to very sophisticated ones, such as multiple imputation. A new methodology for missing data estimation is proposed, which tries to merge the obvious advantages of the simplest techniques (e.g. their ease of implementation) with the strength of the newest ones. The proposed method consists of two consecutive stages: once it has been ascertained that a specific monitoring station is affected by missing data, the "most similar" monitoring stations are identified among neighbouring stations on the basis of a suitable similarity coefficient; in the second stage, a regressive method is applied to estimate the missing data. In this paper, four different regressive methods are applied and compared in order to determine which is the most reliable for filling in the gaps, using rainfall data series measured in the Candelaro River Basin in southern Italy.

  12. European Network of Excellence on NPP residual lifetime prediction methodologies (NULIFE)

    International Nuclear Information System (INIS)

    Badea, M.; Vidican, D.

    2006-01-01

    Within Europe massive investments in nuclear power have been made to meet present and future energy needs. The majority of nuclear reactors have been operating for longer than 20 years and their continuing safe operation depends crucially on effective lifetime management. Furthermore, to extend the economic return on investment and environmental benefits, it is necessary to ensure in advance the safe operation of nuclear reactors for 60 years, a period which is typically 20 years in excess of nominal design life. This depends on a clear understanding of, and predictive capability for, how safety margins may be maintained as components degrade under operational conditions. Ageing mechanisms, environment effects and complex loadings increase the likelihood of damage to safety relevant systems, structures and components. The ability to claim increased benefits from reduced conservatism via improved assessments is therefore of great value. Harmonisation and qualification are essential for industrial exploitation of approaches developed for life prediction methodology. Several European organisations and networks have been at the forefront of the development of advanced methodologies in this area. However, these efforts have largely been made at national level and their overall impact and benefit (in comparison to the situation in the USA) has been reduced by fragmentation. There is a need to restructure the networking approach in order to create a single organisational entity capable of working at European level to produce and exploit R and D in support of the safe and competitive operation of nuclear power plants. It is also critical to ensure the competitiveness of European plant life management (PLIM) services at international level, in particular with the USA and Asian countries. 
To address the above challenges, the European Network of Excellence on NPP residual lifetime prediction methodologies (NULIFE) will: - Create a Europe-wide body in order to achieve scientific and

  13. An input feature selection method applied to fuzzy neural networks for signal estimation

    International Nuclear Information System (INIS)

    Na, Man Gyun; Sim, Young Rok

    2001-01-01

    It is well known that the performance of a fuzzy neural network strongly depends on the input features selected for its training. In applications to sensor signal estimation, there are a large number of input variables related to an output. As the number of input variables increases, the training time required by a fuzzy neural network increases exponentially. Thus, it is essential to reduce the number of inputs to a fuzzy neural network and to select the optimum number of mutually independent inputs that are able to clearly define the input-output mapping. In this work, principal component analysis (PCA), genetic algorithms (GA) and probability theory are combined to select new important input features. The proposed feature selection method is applied to the signal estimation of the steam generator water level, the hot-leg flowrate, the pressurizer water level and the pressurizer pressure sensors in pressurized water reactors and compared with other input feature selection methods
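The record above combines PCA, genetic algorithms, and probability theory; as a minimal sketch of the PCA part alone, one can rank candidate input features by their variance-weighted loadings on the leading principal components (the data, component count, and scoring rule here are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def rank_inputs_by_pca(X, n_components=2):
    """Rank input features by variance-weighted loadings on the
    leading principal components (simplified stand-in for the
    paper's PCA/GA/probability combination)."""
    Xc = X - X.mean(axis=0)                  # center each input variable
    cov = np.cov(Xc, rowvar=False)           # covariance of inputs
    eigvals, eigvecs = np.linalg.eigh(cov)   # principal axes
    order = np.argsort(eigvals)[::-1][:n_components]  # largest variance first
    # score each original input by its variance-weighted absolute loadings
    scores = (np.abs(eigvecs[:, order]) * eigvals[order]).sum(axis=1)
    return np.argsort(scores)[::-1]          # feature indices, best first

# toy data: feature 0 drives most variance, feature 2 is near-constant
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
X = np.hstack([3.0 * base, base + 0.1 * rng.normal(size=(200, 1)),
               0.01 * rng.normal(size=(200, 1))])
print(rank_inputs_by_pca(X))
```

A GA would then search over subsets of the highest-ranked inputs, using estimation error as the fitness.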

  14. An Intuitive Dominant Test Algorithm of CP-nets Applied on Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Liu Zhaowei

    2014-07-01

    Full Text Available A wireless sensor network consists of spatially distributed autonomous sensors, much like a multi-agent system. Conditional Preference networks (CP-nets) are a qualitative tool for representing ceteris paribus (all other things being equal) preference statements, and they have recently become a research hotspot in artificial intelligence. However, the algorithm and complexity of the strong dominance test for binary-valued CP-nets have not been solved, and few researchers have addressed applications to other domains. In this paper, the strong dominance test and applications of CP-nets are studied in detail. Firstly, by constructing the induced graph of a CP-net and studying its properties, we conclude that the strong dominance test on binary-valued CP-nets is essentially a single-source shortest path problem, so it can be solved by an improved Dijkstra's algorithm. Secondly, we apply the above algorithm to the completeness of wireless sensor networks and design a completeness-judging algorithm based on the strong dominance test. Thirdly, we apply the algorithm to the routing problem in wireless sensor networks. In the end, we point out some interesting work for the future.
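The reduction above turns dominance testing into single-source shortest paths on the induced graph. A plain Dijkstra sketch is shown below; the toy graph, with bit-string outcomes as nodes and (assumed) improving flips as unit-weight edges, is hypothetical and not taken from the paper:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a weighted digraph.
    graph: {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# toy induced graph: outcomes as nodes, improving flips as edges
g = {"000": [("001", 1), ("010", 1)],
     "001": [("011", 1)],
     "010": [("011", 1)],
     "011": [("111", 1)]}
print(dijkstra(g, "000"))
```

A finite distance from one outcome to another then witnesses a dominance relation between them.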

  15. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    Science.gov (United States)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. 
Ternary diagrams, including contour and surface plots, were developed and utilized to aid in
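The workflow described above, a quadratic response surface with two-factor interaction terms evaluated under Monte Carlo input dispersion, can be sketched as follows; the coefficients and input uncertainties are invented for illustration, not the study's fitted values:

```python
import numpy as np

# hypothetical quadratic response surface for fuel regression rate,
# with single, squared, and two-factor interaction terms
def regression_rate(x1, x2, b=(1.0, 0.30, 0.15, -0.05, -0.02, 0.08)):
    b0, b1, b2, b11, b22, b12 = b
    return b0 + b1*x1 + b2*x2 + b11*x1**2 + b22*x2**2 + b12*x1*x2

# Monte Carlo: propagate assumed input uncertainty through the surface
rng = np.random.default_rng(42)
n = 100_000
x1 = rng.normal(0.5, 0.05, n)   # e.g. coded mixture fraction +/- uncertainty
x2 = rng.normal(0.2, 0.02, n)   # e.g. coded operating condition +/- uncertainty
r = regression_rate(x1, x2)
print(f"mean rate = {r.mean():.3f}, 95% interval = "
      f"[{np.percentile(r, 2.5):.3f}, {np.percentile(r, 97.5):.3f}]")
```

Optimization under uncertainty then amounts to searching the input space for the setting whose dispersed response distribution best satisfies the objective.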

  16. Artificial neural networks applied to quantitative elemental analysis of organic material using PIXE

    International Nuclear Information System (INIS)

    Correa, R.; Chesta, M.A.; Morales, J.R.; Dinator, M.I.; Requena, I.; Vila, I.

    2006-01-01

    An artificial neural network (ANN) has been trained with real-sample PIXE (particle X-ray induced emission) spectra of organic substances. Following the training stage ANN was applied to a subset of similar samples thus obtaining the elemental concentrations in muscle, liver and gills of Cyprinus carpio. Concentrations obtained with the ANN method are in full agreement with results from one standard analytical procedure, showing the high potentiality of ANN in PIXE quantitative analyses

  17. Artificial neural networks applied to quantitative elemental analysis of organic material using PIXE

    Energy Technology Data Exchange (ETDEWEB)

    Correa, R. [Universidad Tecnologica Metropolitana, Departamento de Fisica, Av. Jose Pedro Alessandri 1242, Nunoa, Santiago (Chile)]. E-mail: rcorrea@utem.cl; Chesta, M.A. [Universidad Nacional de Cordoba, Facultad de Matematica, Astronomia y Fisica, Medina Allende s/n Ciudad Universitaria, 5000 Cordoba (Argentina)]. E-mail: chesta@famaf.unc.edu.ar; Morales, J.R. [Universidad de Chile, Facultad de Ciencias, Departamento de Fisica, Las Palmeras 3425, Nunoa, Santiago (Chile)]. E-mail: rmorales@uchile.cl; Dinator, M.I. [Universidad de Chile, Facultad de Ciencias, Departamento de Fisica, Las Palmeras 3425, Nunoa, Santiago (Chile)]. E-mail: mdinator@uchile.cl; Requena, I. [Universidad de Granada, Departamento de Ciencias de la Computacion e Inteligencia Artificial, Daniel Saucedo Aranda s/n, 18071 Granada (Spain)]. E-mail: requena@decsai.ugr.es; Vila, I. [Universidad de Chile, Facultad de Ciencias, Departamento de Ecologia, Las Palmeras 3425, Nunoa, Santiago (Chile)]. E-mail: limnolog@uchile.cl

    2006-08-15

    An artificial neural network (ANN) has been trained with real-sample PIXE (particle X-ray induced emission) spectra of organic substances. Following the training stage ANN was applied to a subset of similar samples thus obtaining the elemental concentrations in muscle, liver and gills of Cyprinus carpio. Concentrations obtained with the ANN method are in full agreement with results from one standard analytical procedure, showing the high potentiality of ANN in PIXE quantitative analyses.

  18. Applying policy network theory to policy-making in China: the case of urban health insurance reform.

    Science.gov (United States)

    Zheng, Haitao; de Jong, Martin; Koppenjan, Joop

    2010-01-01

    In this article, we explore whether policy network theory can be applied in the People's Republic of China (PRC). We carried out a literature review of how this approach has been dealt with in the Chinese policy sciences thus far. We then present the key concepts and research approach of policy network theory in the Western literature and try these on a Chinese case to see the fit. We follow this with a description and analysis of the policy-making process regarding the health insurance reform in China from 1998 until the present. Based on this case study, we argue that this body of theory is useful for describing and explaining policy-making processes in the Chinese context. However, the generic model shows limitations in capturing the fundamentally different political and administrative systems and the crucially different cultural values, which restrict the applicability of some research methods common in Western countries. Finally, we address which political and cultural aspects turn out to be different in the PRC and how they affect the methodological and practical problems that PRC researchers will encounter when studying decision-making processes.

  19. Building research infrastructure in community health centers: a Community Health Applied Research Network (CHARN) report.

    Science.gov (United States)

    Likumahuwa, Sonja; Song, Hui; Singal, Robbie; Weir, Rosy Chang; Crane, Heidi; Muench, John; Sim, Shao-Chee; DeVoe, Jennifer E

    2013-01-01

    This article introduces the Community Health Applied Research Network (CHARN), a practice-based research network of community health centers (CHCs). Established by the Health Resources and Services Administration in 2010, CHARN is a network of 4 community research nodes, each with multiple affiliated CHCs and an academic center. The four nodes (18 individual CHCs and 4 academic partners in 9 states) are supported by a data coordinating center. Here we provide case studies detailing how CHARN is building research infrastructure and capacity in CHCs, with a particular focus on how community practice-academic partnerships were facilitated by the CHARN structure. The examples provided by the CHARN nodes include many of the building blocks of research capacity: communication capacity and "matchmaking" between providers and researchers; technology transfer; research methods tailored to community practice settings; and community institutional review board infrastructure to enable community oversight. We draw lessons learned from these case studies that we hope will serve as examples for other networks, with special relevance for community-based networks seeking to build research infrastructure in primary care settings.

  20. INFLUENCE OF APPLYING ADDITIONAL FORCING FANS FOR THE AIR DISTRIBUTION IN VENTILATION NETWORK

    Directory of Open Access Journals (Sweden)

    Nikodem SZLĄZAK

    2016-07-01

    Full Text Available Mining progress in underground mines causes the ongoing movement of working areas. Consequently, it becomes necessary to adapt the ventilation network of a mine to direct airflow into newly-opened districts. For economic reasons, opening new fields is often achieved via underground workings. The length of primary intake and return routes increases, and so does the total resistance of a complex ventilation network. The development of a subsurface structure can make it necessary to change the air distribution in a ventilation network, since increasing airflow into newly-opened districts is necessary. In mines where extraction does not entail gas-related hazards, there is a possibility of implementing a push-pull ventilation system in order to supplement airflows to newly developed mining fields. This is achieved by installing subsurface fan stations with forcing fans at the bottom of the downcast shaft. In push-pull systems with multiple main fans, it is vital to select forcing fans with characteristic curves matching those of the existing exhaust fans to prevent undesirable mutual interaction. In complex ventilation networks it is necessary to calculate the distribution of airflow (especially in networks with a large number of installed fans). In this article, the influence of applying additional forcing fans on the air distribution in the ventilation network of an underground mine is considered. The extent of overpressure caused by the additional forcing fan in branches of the ventilation network (the operating range of the additional forcing fan) is also analysed, and possibilities of increasing the airflow rate in working areas are examined.
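Airflow distribution calculations of the kind mentioned above rest on the square law for mine airways, p = R·Q². As a minimal sketch (not the paper's network model), equal pressure drop across parallel branches implies each branch flow is proportional to 1/√R; the resistances and total flow below are illustrative numbers:

```python
import math

def parallel_split(q_total, resistances):
    """Split a total airflow among parallel branches obeying the
    square law p = R * Q**2: equal pressure drop across branches
    implies Q_i proportional to 1/sqrt(R_i)."""
    inv = [1.0 / math.sqrt(r) for r in resistances]
    s = sum(inv)
    return [q_total * x / s for x in inv]

# two parallel branches, the second four times more resistant
flows = parallel_split(100.0, [0.5, 2.0])  # m^3/s; resistances illustrative
print(flows)
# sanity check: both branches see the same pressure drop R * Q**2
```

Full network solvers (e.g. Hardy Cross iteration) generalize this balance to meshed networks with multiple fans.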

  1. Development of design methodology for communication network in nuclear power plants

    International Nuclear Information System (INIS)

    Kim, Dong Hoon; Seong, Seung Hwan; Jang Gwi Sook; Koo, In Soo; Lee Soon Sung.

    1996-06-01

    This report describes the design methodology of the communication network (CN) in nuclear power plants (NPPs). The construction procedure for the NPP CN consists of 4 phases. In the study and review phase, design concepts and goals are established through technical review, collection of background information and a feasibility study. In the design phase, all design activities such as extraction of requirements, communication modelling, and overall and detailed architecture design are performed. The implementation and test phase includes the manufacturing, installation and testing of hardware and software. In the operation phase, CN construction is finalized through evaluation and correction during operation. The requirements of the CN consist of general requirements such as function, structure, reliability and standardization, and detailed requirements related to protocol, media, error, performance, etc. The CN design should also follow safety-related requirements such as isolation, redundancy and reliability, and verify these requirements. For the selection of each technical element from commercial CNs, the evaluation and selection elements are extracted from reliability, performance and operating factors, and the required level, classified as essential, primary, preference or recommendation, should be assigned to each element. This report will be used as a technical reference for the CN implementation in NPPs. (author). 3 tabs., 5 figs., 25 refs

  2. Artificial neural network methodology: Application to predict magnetic properties of nanocrystalline alloys

    International Nuclear Information System (INIS)

    Hamzaoui, R.; Cherigui, M.; Guessasma, S.; ElKedim, O.; Fenineche, N.

    2009-01-01

    This paper is dedicated to the optimization of the magnetic properties of iron-based magnetic materials with regard to milling and coating process conditions using an artificial neural network methodology. Fe-20 wt.% Ni and Fe-6.5 wt.% Si alloys were obtained using two high-energy ball milling technologies, namely a P4 vario planetary ball mill from Fritsch and a planetary ball mill from Retch. Further processing of the Fe-Si powder allowed the spraying of the feedstock material using the high-velocity oxy-fuel (HVOF) process to obtain a relatively dense coating. Input parameters were the disc (Ω) and vial (ω) speed rotations for the milling technique, and the spray distance and oxygen flow rate in the case of the coating process. Two main magnetic parameters are optimized, namely the saturation magnetization and the coercivity. Predicted results clearly depict coupled effects of the input parameters on the magnetic parameters. In particular, the increase of saturation magnetization is correlated to the increase of the product Ωω (shock power) and the product of the spray parameters. The largest coercivity values are correlated to the increase of the ratio Ω/ω (shock mode process) and the increase of the product of the spray parameters.
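A network of the kind used above maps process parameters to a magnetic property. The following is a deliberately tiny stand-in: a one-hidden-layer network trained by gradient descent on synthetic data mimicking a product-of-inputs (shock-power-like) trend; the architecture, data, and hyperparameters are all assumptions, not the paper's model:

```python
import numpy as np

# minimal one-hidden-layer network trained by full-batch gradient descent
# on a toy mapping from two coded speeds to a synthetic "magnetization"
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 2))
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)      # stand-in for a product-type trend

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                # hidden layer
    pred = h @ W2 + b2                      # linear output
    err = pred - y
    # backpropagation of the squared-error gradient
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final MSE: {mse:.5f}")
```

Once trained, such a surrogate can be swept over the input grid to locate parameter combinations that maximize the predicted property.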

  3. A Methodological Approach to Assess the Impact of Smarting Action on Electricity Transmission and Distribution Networks Related to Europe 2020 Targets

    Directory of Open Access Journals (Sweden)

    Andrea Bonfiglio

    2017-01-01

    Full Text Available The achievement of the so-called 2020 targets requested by the European Union (EU) has determined a significant growth of proposals of solutions and of technical projects aiming at reducing the CO2 emissions and increasing the energy efficiency, as well as the penetration of Renewable Energy Sources (RES) in the electric network. As many of them ask for funding from the EU itself, there is the necessity to define a methodology to rank them and decide which projects should be sponsored to obtain the maximum effect on the EU 2020 targets. The present paper aims at (i) defining a set of Key Performance Indicators (KPIs) to compare different proposals, (ii) proposing an analytical methodology to evaluate the defined KPIs and (iii) evaluating the maximum impact that the considered action is capable of producing. The proposed methodology is applied to a set of possible interventions performed on a benchmark transmission network test case, in order to show that the defined indicators can be either calculated or measured and that they are useful to rank different “smarting actions”.

  4. Applying the methodology of Design of Experiments to stability studies: a Partial Least Squares approach for evaluation of drug stability.

    Science.gov (United States)

    Jordan, Nika; Zakrajšek, Jure; Bohanec, Simona; Roškar, Robert; Grabnar, Iztok

    2018-05-01

    The aim of the present research is to show that the methodology of Design of Experiments can be applied to stability data evaluation, as they can be seen as multi-factor and multi-level experimental designs. Linear regression analysis is usually an approach for analyzing stability data, but multivariate statistical methods could also be used to assess drug stability during the development phase. Data from a stability study for a pharmaceutical product with hydrochlorothiazide (HCTZ) as an unstable drug substance was used as a case example in this paper. The design space of the stability study was modeled using Umetrics MODDE 10.1 software. We showed that a Partial Least Squares model could be used for a multi-dimensional presentation of all data generated in a stability study and for determination of the relationship among factors that influence drug stability. It might also be used for stability predictions and potentially for the optimization of the extent of stability testing needed to determine shelf life and storage conditions, which would be time and cost-effective for the pharmaceutical industry.
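The study above fits a Partial Least Squares model to stability data with commercial software (MODDE). As a rough illustration of what PLS does, here is a minimal single-response PLS (PLS1) via the NIPALS algorithm on synthetic data; the variable names and data are invented, and this is not the MODDE implementation:

```python
import numpy as np

def pls1_nipals(X, y, n_comp=2):
    """Minimal PLS1 via NIPALS: extract latent components that
    maximize covariance with y, then regress y on the scores.
    Returns the fitted (centered) response."""
    X = X - X.mean(axis=0); y = y - y.mean()
    T = []
    Xr, yr = X.copy(), y.astype(float).copy()
    for _ in range(n_comp):
        w = Xr.T @ yr                     # weight = covariance direction
        w /= np.linalg.norm(w)
        t = Xr @ w                        # component scores
        p = Xr.T @ t / (t @ t)            # X loadings
        Xr = Xr - np.outer(t, p)          # deflate X
        yr = yr - t * (t @ yr) / (t @ t)  # deflate y
        T.append(t)
    T = np.array(T).T
    coef = np.linalg.lstsq(T, y, rcond=None)[0]
    return T @ coef

# toy stability data: degradation driven by two of four coded factors
rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))              # e.g. temp, humidity, time, batch
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=50)
fit = pls1_nipals(X, y)
print(f"R^2 = {1 - np.var(y - y.mean() - fit) / np.var(y):.3f}")
```

The latent-component view is what lets PLS summarize a multi-factor, multi-level stability design in a few dimensions.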

  5. Social Network Analysis as a Methodological Approach to Explore Health Systems: A Case Study Exploring Support among Senior Managers/Executives in a Hospital Network.

    Science.gov (United States)

    De Brún, Aoife; McAuliffe, Eilish

    2018-03-13

    Health systems research recognizes the complexity of healthcare, and the interacting and interdependent nature of components of a health system. To better understand such systems, innovative methods are required to depict and analyze their structures. This paper describes social network analysis as a methodology to depict, diagnose, and evaluate health systems and networks therein. Social network analysis is a set of techniques to map, measure, and analyze social relationships between people, teams, and organizations. Through use of a case study exploring support relationships among senior managers in a newly established hospital group, this paper illustrates some of the commonly used network- and node-level metrics in social network analysis, and demonstrates the value of these maps and metrics to understand systems. Network analysis offers a valuable approach to health systems and services researchers as it offers a means to depict activity relevant to network questions of interest, to identify opinion leaders, influencers, clusters in the network, and those individuals serving as bridgers across clusters. The strengths and limitations inherent in the method are discussed, and the applications of social network analysis in health services research are explored.

  6. Social Network Analysis as a Methodological Approach to Explore Health Systems: A Case Study Exploring Support among Senior Managers/Executives in a Hospital Network

    Directory of Open Access Journals (Sweden)

    Aoife De Brún

    2018-03-01

    Full Text Available Health systems research recognizes the complexity of healthcare, and the interacting and interdependent nature of components of a health system. To better understand such systems, innovative methods are required to depict and analyze their structures. This paper describes social network analysis as a methodology to depict, diagnose, and evaluate health systems and networks therein. Social network analysis is a set of techniques to map, measure, and analyze social relationships between people, teams, and organizations. Through use of a case study exploring support relationships among senior managers in a newly established hospital group, this paper illustrates some of the commonly used network- and node-level metrics in social network analysis, and demonstrates the value of these maps and metrics to understand systems. Network analysis offers a valuable approach to health systems and services researchers as it offers a means to depict activity relevant to network questions of interest, to identify opinion leaders, influencers, clusters in the network, and those individuals serving as bridgers across clusters. The strengths and limitations inherent in the method are discussed, and the applications of social network analysis in health services research are explored.
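The node-level metrics named in the two records above (degree; bridgers between clusters) can be computed with a few lines of stdlib Python. The support network and manager labels below are hypothetical, and "bridger" is approximated here as an articulation point, a node whose removal disconnects the graph:

```python
from collections import defaultdict, deque

# toy support network among senior managers (hypothetical roles);
# an undirected edge means a support relationship
edges = [("CEO", "COO"), ("CEO", "CFO"), ("COO", "DON"),
         ("COO", "CMO"), ("CFO", "DON"), ("CMO", "SiteLead"),
         ("SiteLead", "WardMgr"), ("SiteLead", "OpsMgr")]

adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b); adj[b].add(a)

# node-level metric: degree (number of support ties)
degree = {n: len(nbrs) for n, nbrs in adj.items()}
print(max(degree, key=degree.get))       # a most-connected manager

def components(adj, skip=None):
    """Count connected components by BFS, optionally ignoring one node."""
    seen, count = {skip}, 0
    for start in adj:
        if start in seen:
            continue
        count += 1
        q = deque([start]); seen.add(start)
        while q:
            for v in adj[q.popleft()]:
                if v not in seen:
                    seen.add(v); q.append(v)
    return count

# "bridgers": nodes whose removal splits the network into more pieces
bridgers = [n for n in adj if components(adj, skip=n) > components(adj)]
print(sorted(bridgers))
```

Dedicated tools (e.g. betweenness centrality) refine this picture, but even these two metrics surface the opinion-leader and bridging roles the records describe.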

  7. Mathematical Modelling and Optimization of Cutting Force, Tool Wear and Surface Roughness by Using Artificial Neural Network and Response Surface Methodology in Milling of Ti-6242S

    Directory of Open Access Journals (Sweden)

    Erol Kilickap

    2017-10-01

    Full Text Available In this paper, an experimental study was conducted to determine the effect of different cutting parameters such as cutting speed, feed rate, and depth of cut on cutting force, surface roughness, and tool wear in the milling of Ti-6242S alloy using cemented carbide (WC) end mills with a 10 mm diameter. Data obtained from the experiments were modeled with both Artificial Neural Network (ANN) and Response Surface Methodology (RSM). The ANN was trained, and its weights fitted, using the Levenberg-Marquardt (LM) algorithm. On the other hand, the mathematical models in RSM were created by applying a Box-Behnken design. Values obtained from the ANN and the RSM were found to be very close to the data obtained from the experimental studies. The lowest cutting force and surface roughness were obtained at high cutting speeds and a low feed rate and depth of cut. The minimum tool wear was obtained at a low cutting speed, feed rate, and depth of cut.
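The RSM side of the study above fits a full quadratic model to a Box-Behnken design. A sketch with three coded factors follows; the response coefficients are synthetic stand-ins, not the paper's fitted values for Ti-6242S:

```python
import numpy as np

def quadratic_design_matrix(X):
    """Full quadratic model: intercept, single, squared,
    and two-factor interaction terms for 3 coded factors."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

# Box-Behnken design for 3 factors: edge midpoints of the cube + center runs
bb = np.array([[a, b, 0] for a in (-1, 1) for b in (-1, 1)] +
              [[a, 0, b] for a in (-1, 1) for b in (-1, 1)] +
              [[0, a, b] for a in (-1, 1) for b in (-1, 1)] +
              [[0, 0, 0]] * 3, dtype=float)

true = np.array([50, 5, -8, 3, 2, 1, 0.5, -1.5, 0.8, 0.2])  # assumed coefficients
y = quadratic_design_matrix(bb) @ true                       # noiseless response

coef, *_ = np.linalg.lstsq(quadratic_design_matrix(bb), y, rcond=None)
print(np.round(coef, 3))   # least squares recovers the generating coefficients
```

With noiseless data the 15-run design identifies all 10 coefficients exactly; with real measurements the same fit yields the RSM prediction equations.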

  8. Author-paper affiliation network architecture influences the methodological quality of systematic reviews and meta-analyses of psoriasis.

    Directory of Open Access Journals (Sweden)

    Juan Luis Sanz-Cabanillas

    Full Text Available Moderate-to-severe psoriasis is associated with significant comorbidity, an impaired quality of life, and increased medical costs, including those associated with treatments. Systematic reviews (SRs) and meta-analyses (MAs) of randomized clinical trials are considered two of the best approaches to the summarization of high-quality evidence. However, methodological bias can reduce the validity of conclusions from these types of studies and subsequently impair the quality of decision making. As co-authorship is among the most well-documented forms of research collaboration, the present study aimed to explore whether authors' collaboration methods might influence the methodological quality of SRs and MAs of psoriasis. Methodological quality was assessed by two raters who extracted information from full articles. After calculating total and per-item Assessment of Multiple Systematic Reviews (AMSTAR) scores, reviews were classified as low (0-4), medium (5-8), or high (9-11) quality. Article metadata and journal-related bibliometric indices were also obtained. A total of 741 authors from 520 different institutions and 32 countries published 220 reviews that were classified as high (17.2%), moderate (55%), or low (27.7%) methodological quality. The high methodological quality subnetwork was larger but had a lower connection density than the low and moderate methodological quality subnetworks; specifically, the former contained relatively fewer nodes (authors and reviews), reviews by authors, and collaborators per author. Furthermore, the high methodological quality subnetwork was highly compartmentalized, with several modules representing few poorly interconnected communities. In conclusion, structural differences in the author-paper affiliation network may influence the methodological quality of SRs and MAs on psoriasis. As the author-paper affiliation network structure affects study quality in this research field, authors who maintain an appropriate balance

  9. Proposal of a methodology to be applied for the characterization of external exposure risk of employees in nuclear medicine services

    International Nuclear Information System (INIS)

    Simoes, Rafael Figueiredo Pohlmann

    2010-01-01

    A nuclear medicine procedure requires the administration of radioactive material by injection, ingestion or inhalation. After incorporation, the patient becomes a mobile source of radiation and, after the examination, can irradiate everyone on the way out of the Nuclear Medicine Service (NMS). A group of workers along this path is considered a critical group, but there is no firm basis for this classification, because no measurements are available. Thus, claims by workers for occupationally exposed individual (OEI) rights are common. Employers are always in a complex situation: if they decide to undertake individual external monitoring of the critical working groups, the Court considers all of them OEIs and the employers are taxed; on the other hand, if they do not provide monitoring, it is impossible to prove that these workers were not exposed to effective doses higher than the annual individual limit for the public, and they lose the lawsuits as well. This work proposes a methodology to evaluate, using TLD environmental monitors, the air kerma rate at critical staff points in an NMS. This method provides relevant information about critical groups' exposure. From these results, the clinic or hospital may prove technically, without individual monitoring of employees, the classification of areas, and can estimate the maximum flow of patients in the free areas which guarantees exposures below the public individual dose limit. This methodology has been applied successfully to a private clinic in Rio de Janeiro which operates an NMS. The only critical group that received exposure statistically different from the clinic background radiation was the one in the antechamber of the NMS. This site should be characterized as a supervised area, and the group of workers in this environment as OEIs, as the estimated extrapolated annual effective dose at this position was 1.2 ± 0.7 mSv/year, above the annual public limit (1.0 mSv/year). 
Normalizing by the number of patients, it can

  10. Matrix product algorithm for stochastic dynamics on networks applied to nonequilibrium Glauber dynamics

    Science.gov (United States)

    Barthel, Thomas; De Bacco, Caterina; Franz, Silvio

    2018-01-01

    We introduce and apply an efficient method for the precise simulation of stochastic dynamical processes on locally treelike graphs. Networks with cycles are treated in the framework of the cavity method. Such models correspond, for example, to spin-glass systems, Boolean networks, neural networks, or other technological, biological, and social networks. Building upon ideas from quantum many-body theory, our approach is based on a matrix product approximation of the so-called edge messages—conditional probabilities of vertex variable trajectories. Computation costs and accuracy can be tuned by controlling the matrix dimensions of the matrix product edge messages (MPEM) in truncations. In contrast to Monte Carlo simulations, the algorithm has a better error scaling and works for both single instances as well as the thermodynamic limit. We employ it to examine prototypical nonequilibrium Glauber dynamics in the kinetic Ising model. Because of the absence of cancellation effects, observables with small expectation values can be evaluated accurately, allowing for the study of decay processes and temporal correlations.
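The matrix-product edge-message algorithm itself is beyond a short snippet, but the Monte Carlo baseline it is contrasted with is easy to sketch. Below is heat-bath Glauber dynamics on a 1D Ising ring (an assumed toy geometry, not the paper's locally treelike graphs); in equilibrium the nearest-neighbor correlation approaches tanh(β):

```python
import numpy as np

rng = np.random.default_rng(7)
N, beta = 200, 0.8
s = rng.choice([-1, 1], size=N)

def sweep(s):
    """One Monte Carlo sweep of single-spin heat-bath (Glauber) updates."""
    for _ in range(N):
        i = rng.integers(N)
        h = s[(i - 1) % N] + s[(i + 1) % N]            # local field
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))   # Glauber flip rule
        s[i] = 1 if rng.random() < p_up else -1

for _ in range(200):          # equilibrate
    sweep(s)
samples = []
for _ in range(400):          # measure
    sweep(s)
    samples.append(np.mean(s * np.roll(s, 1)))
corr = float(np.mean(samples))
print(f"<s_i s_(i+1)> = {corr:.3f}  vs tanh(beta) = {np.tanh(beta):.3f}")
```

The sampling noise visible in such estimates is exactly what the record's matrix-product approach avoids for small expectation values.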

  11. A methodology to determine margins by EPID measurements of patient setup variation and motion as applied to immobilization devices

    International Nuclear Information System (INIS)

    Prisciandaro, Joann I.; Frechette, Christina M.; Herman, Michael G.; Brown, Paul D.; Garces, Yolanda I.; Foote, Robert L.

    2004-01-01

    Assessment of clinic- and site-specific margins is essential for the effective use of three-dimensional and intensity modulated radiation therapy. An electronic portal imaging device (EPID) based methodology is introduced which allows individual and population-based CTV-to-PTV margins to be determined and compared with traditional margins prescribed during treatment. This method was applied to a patient cohort receiving external beam head and neck radiotherapy under an IRB approved protocol. Although the full study involved the use of an EPID-based method to assess the impact of (1) simulation technique, (2) immobilization, and (3) surgical intervention on inter- and intrafraction variations of individual and population-based CTV-to-PTV margins, the focus of this paper is on the technique. As an illustration, the methodology is utilized to examine the influence of two immobilization devices, the UON™ thermoplastic mask and the Type-S™ head/neck shoulder immobilization system, on margins. Daily through-port images were acquired for selected fields for each patient with an EPID. To analyze these images, simulation films or digitally reconstructed radiographs (DRRs) were imported into the EPID software. Up to five anatomical landmarks were identified and outlined by the clinician, and up to three of these structures were matched for each reference image. Once the individual-based errors were quantified, the patient results were grouped into populations by matched anatomical structures and immobilization device. The variation within each subgroup was quantified by calculating the systematic and random errors (Σ and σ). Individual patient margins were approximated as 1.65 times the individual-based random error and ranged from 1.1 to 6.3 mm (A-P) and 1.1 to 12.3 mm (S-I) for fields matched on skull and cervical structures, and 1.7 to 10.2 mm (L-R) and 2.0 to 13.8 mm (S-I) for supraclavicular fields. Population-based margins ranging from 5.1 to 6.6 mm (A
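The error decomposition above follows the usual convention: the population systematic error Σ is the spread of per-patient mean displacements, and the random error σ is the root-mean-square of per-patient standard deviations, with individual margins taken as 1.65 times the individual random error, as in the record. A minimal sketch on synthetic single-axis setup data (the numbers are invented):

```python
import numpy as np

# per-fraction setup displacements (mm) for a few patients, one axis;
# values are synthetic -- the method, not the data, is the point
shifts = {
    "pt1": [1.2, 0.8, 1.5, 0.9, 1.1],
    "pt2": [-0.5, 0.2, -0.1, 0.4, -0.3],
    "pt3": [2.1, 2.5, 1.8, 2.2, 2.6],
}

means = np.array([np.mean(v) for v in shifts.values()])   # per-patient mean error
sds = np.array([np.std(v, ddof=1) for v in shifts.values()])

Sigma = np.std(means, ddof=1)        # population systematic error
sigma = np.sqrt(np.mean(sds ** 2))   # population random error (RMS of SDs)

for pt, sd in zip(shifts, sds):      # individual margin, as in the record
    print(f"{pt}: individual margin = {1.65 * sd:.2f} mm")
print(f"Sigma = {Sigma:.2f} mm, sigma = {sigma:.2f} mm")
```

Population-based margin recipes (e.g. combinations of Σ and σ) can then be applied to the same two statistics.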

  12. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R. [UAZ, Av. Ramon Lopez Velarde No. 801, 98000 Zacatecas (Mexico)

    2006-07-01

    An Artificial Neural Network has been designed, trained and tested to unfold neutron spectra and simultaneously calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used in the artificial neural network design, training and testing. The robust design of artificial neural networks methodology was used to design the network; this methodology ensures that the quality of the neural networks is taken into account from the design stage. Unlike previous works, here, for the first time, a group of neural networks was designed and trained to unfold 187 neutron spectra and at the same time calculate 13 equivalent doses, starting from the count rates coming from the Bonner spheres system, by using a systematic and experimental strategy. (Author)

  13. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    International Nuclear Information System (INIS)

    Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R.

    2006-01-01

    An Artificial Neural Network has been designed, trained and tested to unfold neutron spectra and simultaneously to calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used to design, train and test the artificial neural network. The robust design of artificial neural networks methodology was used to design the network, which ensures that the quality of the neural networks is taken into account from the design stage. Unlike previous works, here, for the first time, a group of neural networks was designed and trained to unfold 187 neutron spectra and at the same time to calculate 13 equivalent doses, starting from the count rates coming from the Bonner spheres system, by using a systematic and experimental strategy. (Author)

  14. Applying network theory to animal movements to identify properties of landscape space use.

    Science.gov (United States)

    Bastille-Rousseau, Guillaume; Douglas-Hamilton, Iain; Blake, Stephen; Northrup, Joseph M; Wittemyer, George

    2018-04-01

    Network (graph) theory is a popular analytical framework to characterize the structure and dynamics among discrete objects and is particularly effective at identifying critical hubs and patterns of connectivity. The identification of such attributes is a fundamental objective of animal movement research, yet network theory has rarely been applied directly to animal relocation data. We develop an approach that allows the analysis of movement data using network theory by defining occupied pixels as nodes and connections among these pixels as edges. We first quantify node-level (local) metrics and graph-level (system) metrics on simulated movement trajectories to assess the ability of these metrics to recover known properties of movement paths. We then apply our framework to empirical data from African elephants (Loxodonta africana), giant Galapagos tortoises (Chelonoidis spp.), and mule deer (Odocoileus hemionus). Our results indicate that certain node-level metrics, namely degree, weight, and betweenness, perform well in capturing local patterns of space use, such as the definition of core areas and paths used for inter-patch movement. These metrics were generally applicable across data sets, indicating their robustness to assumptions structuring analysis or strategies of movement. Other metrics capture local patterns effectively, but were sensitive to specified graph properties, indicating case-specific applications. Our analysis indicates that graph-level metrics are unlikely to outperform other approaches for the categorization of general movement strategies (central place foraging, migration, nomadism). By identifying critical nodes, our approach provides a robust quantitative framework to identify local properties of space use that can be used to evaluate the effect of the loss of specific nodes on range-wide connectivity. Our network approach is intuitive, and can be implemented across imperfectly sampled or large-scale data sets efficiently, providing a
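    The pixel-graph construction described above can be sketched with a toy relocation track; the coordinates and pixel size are invented, and only degree and strength (weighted degree) are computed here, not betweenness:

```python
from collections import defaultdict

# Toy relocation track (x, y positions); real data would be GPS fixes
track = [(0.2, 0.1), (1.4, 0.3), (1.6, 1.2), (1.2, 0.4),
         (0.3, 0.2), (1.5, 0.6), (1.7, 1.4), (1.3, 0.5)]

def pixel(p, size=1.0):
    """Discretize a position into an occupied pixel (the graph node)."""
    return (int(p[0] // size), int(p[1] // size))

# Edges connect consecutively occupied pixels; weights count transitions
weights = defaultdict(int)
for a, b in zip(track, track[1:]):
    u, v = pixel(a), pixel(b)
    if u != v:
        weights[frozenset((u, v))] += 1

degree = defaultdict(int)    # number of distinct neighbouring pixels
strength = defaultdict(int)  # total transition volume through the pixel
for edge, w in weights.items():
    for node in edge:
        degree[node] += 1
        strength[node] += w

hub = max(strength, key=strength.get)  # the most heavily used pixel
```

On real trajectories the same structure could be handed to a graph library to add betweenness and other node-level metrics.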

  15. BAT methodology applied to the construction of new CCNN; Metodologia BAT aplicada a la construccion de nuevas CCNN

    Energy Technology Data Exchange (ETDEWEB)

    Vilches Rodriguez, E.; Campos Feito, O.; Gonzalez Delgado, J.

    2012-07-01

    The BAT methodology should be used in all phases of the project, from preliminary studies and design to decommissioning, gaining special importance in radioactive waste management and environmental impact studies. Adequate knowledge of this methodology will streamline the decision-making process and facilitate the relationship with regulators and stakeholders.

  16. Applying a learning design methodology in the flipped classroom approach – empowering teachers to reflect and design for learning

    Directory of Open Access Journals (Sweden)

    Evangelia Triantafyllou

    2016-05-01

    Full Text Available One of the recent developments in teaching that heavily relies on current technology is the “flipped classroom” approach. In a flipped classroom the traditional lecture and homework sessions are inverted. Students are provided with online material in order to gain necessary knowledge before class, while class time is devoted to clarifications and application of this knowledge. The hypothesis is that there could be deep and creative discussions when teacher and students physically meet. This paper discusses how the learning design methodology can be applied to represent, share and guide educators through flipped classroom designs. In order to discuss the opportunities arising from this approach, the different components of the Learning Design – Conceptual Map (LD-CM) are presented and examined in the context of the flipped classroom. It is shown that viewing the flipped classroom through the lens of learning design can promote the use of theories and methods to evaluate its effect on the achievement of learning objectives, and that it may draw attention to the employment of methods to gather learner responses. Moreover, a learning design approach can enforce the detailed description of activities, tools and resources used in specific flipped classroom models, and it can make educators more aware of the decisions that have to be taken and the people who have to be involved when designing a flipped classroom. By using the LD-CM, this paper also draws attention to the importance of the characteristics and values of different stakeholders (i.e. institutions, educators, learners, and external agents), which influence the design and success of flipped classrooms. Moreover, it looks at the teaching cycle from a flipped instruction model perspective and adjusts it to cater for the reflection loops educators are involved in when designing, implementing and re-designing a flipped classroom. Finally, it highlights the effect of learning design on the guidance

  17. RiskSOAP: Introducing and applying a methodology of risk self-awareness in road tunnel safety.

    Science.gov (United States)

    Chatzimichailidou, Maria Mikela; Dokas, Ioannis M

    2016-05-01

    Complex socio-technical systems, such as road tunnels, can be designed and developed with more or fewer elements that can either positively or negatively affect the capability of their agents to recognise imminent threats or vulnerabilities that possibly lead to accidents. This capability is called risk Situation Awareness (SA) provision. Motivated by the introduction of better tools for designing and developing systems that are self-aware of their vulnerabilities and react to prevent accidents and losses, this paper introduces the Risk Situation Awareness Provision (RiskSOAP) methodology to the field of road tunnel safety, as a means to measure this capability in such systems. The main objective is to test the soundness and the applicability of RiskSOAP to infrastructure which is advanced in terms of technology, human integration, and the minimum number of safety requirements imposed by international bodies. RiskSOAP is applied to a specific road tunnel in Greece and the accompanying indicator is calculated twice, once for the tunnel design as defined by updated European safety standards and once for the 'as-is' tunnel composition, which complies with the necessary safety requirements but calls for enhancing safety according to what the EU and PIARC further suggest. The derived values indicate the extent to which each tunnel version is capable of comprehending its threats and vulnerabilities based on its elements. The former tunnel version seems to be more enhanced in terms of both its risk awareness capability and its safety. Another interesting finding is that, despite the advanced tunnel safety specifications, there is still room for enriching the safe design and maintenance of the road tunnel. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Modeling and optimization of ethanol fermentation using Saccharomyces cerevisiae: Response surface methodology and artificial neural network

    Directory of Open Access Journals (Sweden)

    Esfahanian Mehri

    2013-01-01

    Full Text Available In this study, the capabilities of response surface methodology (RSM) and artificial neural networks (ANN) for modeling and optimization of ethanol production from glucose using Saccharomyces cerevisiae in a batch fermentation process were investigated. The effect of three independent variables in a defined range of pH (4.2-5.8), temperature (20-40 ºC) and glucose concentration (20-60 g/l) on the cell growth and ethanol production was evaluated. Results showed that the prediction accuracy of ANN was apparently similar to that of RSM. At the optimum condition of temperature (32 °C), pH (5.2) and glucose concentration (50 g/l) suggested by the statistical methods, the maximum cell dry weight and ethanol concentration obtained from RSM were 12.06 and 16.2 g/l, whereas the experimental values were 12.09 and 16.53 g/l, respectively. The present study showed that using ANN as the fitness function, the maximum cell dry weight and ethanol concentration were 12.05 and 16.16 g/l, respectively. Also, the coefficients of determination for biomass and ethanol concentration obtained from RSM were 0.9965 and 0.9853, and from ANN were 0.9975 and 0.9936, respectively. The process parameter optimization was successfully conducted using RSM and ANN; however, prediction by ANN was slightly more precise than by RSM. Based on experimental data, a maximum ethanol yield of 0.5 g ethanol/g substrate (97% of theoretical yield) was obtained.
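    The RSM side of such a comparison can be sketched as a full second-order model fitted by least squares and maximized on a grid. The response function and design points below are hypothetical, chosen only to mimic an optimum near 32 °C and pH 5.2 as reported above (glucose is omitted to keep the sketch short):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ground truth: ethanol conc. peaks at T = 32 C, pH = 5.2
def ethanol(T, pH):
    return 16.2 - 0.03 * (T - 32) ** 2 - 4.0 * (pH - 5.2) ** 2

# "Experiments" on a small design grid, with measurement noise
T = np.array([20, 20, 30, 30, 40, 40, 25, 35, 30], dtype=float)
pH = np.array([4.2, 5.8, 4.2, 5.8, 4.2, 5.8, 5.0, 5.0, 5.0])
y = ethanol(T, pH) + rng.normal(0, 0.05, T.size)

# Full second-order RSM model: 1, T, pH, T^2, pH^2, T*pH
A = np.column_stack([np.ones_like(T), T, pH, T**2, pH**2, T*pH])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Locate the fitted optimum on a fine grid
Tg, pHg = np.meshgrid(np.linspace(20, 40, 201), np.linspace(4.2, 5.8, 161))
pred = (coef[0] + coef[1]*Tg + coef[2]*pHg + coef[3]*Tg**2
        + coef[4]*pHg**2 + coef[5]*Tg*pHg)
i = np.unravel_index(pred.argmax(), pred.shape)
T_opt, pH_opt = Tg[i], pHg[i]   # should land near (32, 5.2)
```

A real central composite design would use coded factor levels and replicate centre points; the grid search stands in for solving the stationary-point equations.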

  19. APPLYING ARTIFICIAL NEURAL NETWORK OPTIMIZED BY FIREWORKS ALGORITHM FOR STOCK PRICE ESTIMATION

    Directory of Open Access Journals (Sweden)

    Khuat Thanh Tung

    2016-04-01

    Full Text Available Stock prediction is to determine the future value of a company stock dealt on an exchange. It plays a crucial role in raising the profit gained by firms and investors. Over the past few years, many methods have been developed, with much effort focused on machine learning frameworks that achieve promising results. In this paper, an approach based on an Artificial Neural Network (ANN) optimized by the Fireworks algorithm, with data preprocessing by the Haar wavelet, is applied to estimate stock prices. The system was trained and tested with real data of various companies collected from Yahoo Finance. The obtained results are encouraging.
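    The Haar wavelet preprocessing step mentioned above can be sketched as a single decomposition level: the detail coefficients carry the high-frequency noise, and zeroing them yields a smoothed series for the network. The price series is invented, and this shows only the wavelet step, not the Fireworks-optimized ANN:

```python
import numpy as np

def haar_step(x):
    """One level of the Haar DWT: pairwise averages (trend) and differences (detail)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_inverse(approx, detail):
    """Invert one Haar level; exact when the detail is kept."""
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

prices = np.array([10.0, 10.4, 10.1, 10.9, 11.2, 11.0, 11.7, 12.1])
a, d = haar_step(prices)
reconstructed = haar_inverse(a, d)            # exact reconstruction
smoothed = haar_inverse(a, np.zeros_like(d))  # drop detail to denoise
```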

  20. Analysis methodology for flow-level evaluation of a hybrid mobile-sensor network

    NARCIS (Netherlands)

    Dimitrova, D.C.; Heijenk, Geert; Braun, T.

    2012-01-01

    Our society uses a large diversity of co-existing wired and wireless networks in order to satisfy its communication needs. A cooperation between these networks can benefit performance, service availability and deployment ease, and leads to the emergence of hybrid networks. This position paper

  1. Assessment of historical leak model methodology as applied to the REDOX high-level waste tank SX-108

    International Nuclear Information System (INIS)

    JONES, T.E.

    1999-01-01

    Using the Historical Leak Model approach, the estimated leak rate (and therefore, the projected leak volume) for Tank 241-SX-108 could not be reproduced using the data included in the initial document describing the leak methodology. An analysis of parameters impacting tank heat load calculations strongly suggests that the historical tank operating data lack the precision and accuracy required to estimate tank leak volumes using the Historical Leak Model methodology

  2. Neural Network-Based State Estimation for a Closed-Loop Control Strategy Applied to a Fed-Batch Bioreactor

    Directory of Open Access Journals (Sweden)

    Santiago Rómoli

    2017-01-01

    Full Text Available The lack of online information on some bioprocess variables and the presence of model and parametric uncertainties pose significant challenges to the design of efficient closed-loop control strategies. To address this issue, this work proposes an online state estimator based on a Radial Basis Function (RBF) neural network that operates in closed loop together with a control law derived from a linear algebra-based design strategy. The proposed methodology is applied to a class of nonlinear systems with three types of uncertainties: (i) time-varying parameters, (ii) uncertain nonlinearities, and (iii) unmodeled dynamics. To reduce the effect of uncertainties on the bioreactor, some integrators of the tracking error are introduced, which in turn allow the derivation of the proper control actions. This new control scheme guarantees that all signals are uniformly and ultimately bounded, and the tracking error converges to small values. The effectiveness of the proposed approach is illustrated on the basis of simulated experiments on a fed-batch bioreactor, and its performance is compared with two controllers available in the literature.
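    The RBF estimation idea can be sketched as follows (this is not the paper's closed-loop design): Gaussian radial basis features of measurable signals are fitted by least squares to approximate an unmeasured state. The measurements, centres and surrogate "biomass" function are all invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setup: estimate an unmeasured state (biomass) from two
# scaled online measurements (e.g. dissolved oxygen and volume).
X = rng.uniform(0, 1, (200, 2))                  # measured signals
biomass = np.sin(3 * X[:, 0]) + X[:, 1] ** 2     # surrogate "true" state

centers = rng.uniform(0, 1, (25, 2))             # RBF centres
width = 0.3                                      # common Gaussian width

def rbf_features(X):
    """Gaussian RBF activations for each sample/centre pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

Phi = rbf_features(X)
w, *_ = np.linalg.lstsq(Phi, biomass, rcond=None)  # train output weights

estimate = rbf_features(X) @ w
rmse = np.sqrt(np.mean((estimate - biomass) ** 2))
```

In the actual estimator the weights would be adapted online as new measurements arrive, rather than fitted in one batch.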

  3. Information System Design Methodology Based on PERT/CPM Networking and Optimization Techniques.

    Science.gov (United States)

    Bose, Anindya

    The dissertation attempts to demonstrate that the program evaluation and review technique (PERT)/Critical Path Method (CPM) or some modified version thereof can be developed into an information system design methodology. The methodology utilizes PERT/CPM which isolates the basic functional units of a system and sets them in a dynamic time/cost…
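    The PERT/CPM core that the dissertation builds on can be sketched as a forward and backward pass over a task network; the mini-project below is hypothetical:

```python
# Tasks: (duration, predecessors) for a hypothetical mini-project
tasks = {
    "spec":   (3, []),
    "design": (5, ["spec"]),
    "build":  (7, ["design"]),
    "docs":   (2, ["design"]),
    "test":   (4, ["build", "docs"]),
}

# Forward pass: earliest start/finish (dict order is topological here)
early = {}
for name, (dur, preds) in tasks.items():
    start = max((early[p][1] for p in preds), default=0)
    early[name] = (start, start + dur)

project_length = max(f for _, f in early.values())

# Backward pass: latest start/finish; zero slack marks the critical path
late = {}
for name in reversed(list(tasks)):
    dur = tasks[name][0]
    succs = [s for s, (_, preds) in tasks.items() if name in preds]
    finish = min((late[s][0] for s in succs), default=project_length)
    late[name] = (finish - dur, finish)

critical = [n for n in tasks if early[n][0] == late[n][0]]
```

Here "docs" has slack (it can slip 5 days without delaying the project), so the critical path runs spec → design → build → test.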

  4. Large-Scale Demand Driven Design of a Customized Bus Network: A Methodological Framework and Beijing Case Study

    Directory of Open Access Journals (Sweden)

    Jihui Ma

    2017-01-01

    Full Text Available In recent years, an innovative public transportation (PT) mode known as the customized bus (CB) has been proposed and implemented in many cities in China to efficiently and effectively shift private car users to PT to alleviate traffic congestion and traffic-related environmental pollution. The route network design activity plays an important role in the CB operation planning process because it serves as the basis for other operation planning activities, for example, timetable development, vehicle scheduling, and crew scheduling. In this paper, according to the demand characteristics and operational purpose, a methodological framework that includes the elements of large-scale travel demand data processing and analysis, hierarchical clustering-based route origin-destination (OD) region division, route OD region pairing, and a route selection model is proposed for CB network design. Considering the operating cost and social benefits, a route selection model is proposed and a branch-and-bound-based solution method is developed. In addition, a computer-aided program is developed to analyze a real-world Beijing CB route network design problem. The results of the case study demonstrate that the current CB network of Beijing can be significantly improved, thus demonstrating the effectiveness of the proposed methodology.

  5. A small-world methodology of analysis of interchange energy-networks: The European behaviour in the economic crisis

    International Nuclear Information System (INIS)

    Dassisti, M.; Carnimeo, L.

    2013-01-01

    European energy policy pursues the objective of a sustainable, competitive and reliable supply of energy. In 2007, the European Commission adopted a proper energy policy for Europe supported by several documents and included an action plan to meet the major energy challenges Europe has to face. A farsighted, diversified yearly mix of energies was suggested to countries, aiming at increasing security of supply and efficiency, but a wide and systemic view of energy interchanges between states was missing. In this paper, a Small-World methodology of analysis of Interchange Energy-Networks (IENs) is presented, with the aim of providing a useful tool for planning sustainable energy policies. A proof case is presented to validate the methodology by considering the European IEN behaviour in the period of the economic crisis. This network is approached as a Small-World network from a modelling point of view, by supposing that connections between States are characterised by a probability value depending on economic/political relations between countries. - Highlights: • A different view of the imports and exports of electric energy flows between European countries, for potential use in ruling exchanges. • Panel data from 1996 to 2010, as part of a network of exchanges, were taken from the Eurostat official database. • The European import/export energy flows are modelled as a network with Small-World phenomena, interpreting the evolution over the years. • An interesting systemic tool for ruling and governing energy flows between countries
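    The two signatures of a small-world network, high clustering and short path lengths, can be sketched on a toy "interchange" graph; the ring-plus-shortcuts topology below is an invented stand-in for the European network, not data from the paper:

```python
from collections import deque
from itertools import combinations

# Toy interchange network: a ring of 8 "states", each linked to its four
# nearest neighbours, plus two long-range shortcuts (a la Watts-Strogatz)
n = 8
adj = {i: {(i - 1) % n, (i + 1) % n, (i - 2) % n, (i + 2) % n} for i in range(n)}
for u, v in [(0, 4), (1, 5)]:
    adj[u].add(v); adj[v].add(u)

def clustering(u):
    """Fraction of a node's neighbour pairs that are themselves linked."""
    nbrs = list(adj[u])
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return links / (len(nbrs) * (len(nbrs) - 1) / 2)

def distances(src):
    """BFS shortest-path lengths from src to every node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

C = sum(clustering(u) for u in adj) / n                       # avg clustering
L = sum(d for u in adj for d in distances(u).values()) / (n * (n - 1))
```

A small-world network keeps C high (like a regular lattice) while the shortcuts pull L down toward that of a random graph.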

  6. Actor-Network Theory and methodology: Just what does it mean to say that nonhumans have agency?

    Science.gov (United States)

    Sayes, Edwin

    2014-02-01

    Actor-Network Theory is a controversial social theory. In no respect is this more so than in the role it 'gives' to nonhumans: nonhumans have agency, as Latour provocatively puts it. This article aims to interrogate the multiple layers of this declaration to understand what it means to assert with Actor-Network Theory that nonhumans exercise agency. The article surveys a wide corpus of statements by the position's leading figures and emphasizes the wider methodological framework in which these statements are embedded. With this work done, readers will then be better placed to reject or accept the Actor-Network position - understanding more precisely what exactly is at stake in this decision.

  7. A situational analysis methodology to inform comprehensive HIV prevention and treatment programming, applied in rural South Africa.

    Science.gov (United States)

    Treves-Kagan, Sarah; Naidoo, Evasen; Gilvydis, Jennifer M; Raphela, Elsie; Barnhart, Scott; Lippman, Sheri A

    2017-09-01

    Successful HIV prevention programming requires engaging communities in the planning process and responding to the social environmental factors that shape health and behaviour in a specific local context. We conducted two community-based situational analyses to inform a large, comprehensive HIV prevention programme in two rural districts of North West Province, South Africa, in 2012. The methodology includes: initial partnership building, goal setting and background research; 1 week of field work; in-field and subsequent data analysis; and community dissemination and programmatic incorporation of results. We describe the methodology and a case study of the approach in rural South Africa; assess whether the methodology generated data with sufficient saturation, breadth and utility for programming purposes; and evaluate whether this process successfully engaged the community. Between the two sites, 87 men and 105 women consented to in-depth interviews; 17 focus groups were conducted; and 13 health facilities and 7 NGOs were assessed. The methodology succeeded in quickly collecting high-quality data relevant to tailoring a comprehensive HIV programme and created a strong foundation for community engagement and integration with local health services. This methodology can be an accessible tool in guiding community engagement and tailoring future combination HIV prevention and care programmes.

  8. Methodology to evaluate the impact of erosion in cultivated soils applying the 137Cs technique

    International Nuclear Information System (INIS)

    Gil Castillo, R.; Peralta Vital, J.L.; Carrazana, J.; Riverol, M.; Penn, F.; Cabrera, E.

    2004-01-01

    The present paper shows the results obtained in the framework of two nuclear projects on the application of nuclear techniques to evaluate erosion rates in cultivated soils. On the basis of investigations with the 137Cs technique carried out in the Province of Pinar del Rio, a methodology to evaluate the erosion impact on cropland was obtained and validated for the first time. The obtained methodology includes all relevant stages for the adequate application of the 137Cs technique, from the initial step of area selection, through the soil sampling process and the selection of models, to the final results evaluation step. During the validation of the methodology in soils of the Municipality of San Juan y Martinez, the erosion rates estimated by the methodology were successfully compared with the values obtained by watershed segment measurements (the traditional technique). The methodology is a technical guide for the adequate application of the 137Cs technique to estimate soil redistribution rates in cultivated soils

  9. PhysarumSpreader: A New Bio-Inspired Methodology for Identifying Influential Spreaders in Complex Networks.

    Directory of Open Access Journals (Sweden)

    Hongping Wang

    Full Text Available Identifying influential spreaders in networks, which contributes to optimizing the use of available resources and efficient spreading of information, is of great theoretical significance and practical value. A random-walk-based algorithm, LeaderRank, has been shown to be an effective and efficient method for recognizing leaders in social networks, which even outperforms the well-known PageRank method. As LeaderRank was initially developed for binary directed networks, extensions to weighted networks should be studied. In this paper, a generalized algorithm, PhysarumSpreader, is proposed by combining LeaderRank with a positive feedback mechanism inspired by an amoeboid organism called Physarum polycephalum. By taking edge weights into consideration and adding the positive feedback mechanism, PhysarumSpreader is applicable to both directed and undirected networks with weights. Taking two real networks as examples, the effectiveness of the proposed method is demonstrated by comparison with other standard centrality measures.
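    The LeaderRank step that PhysarumSpreader generalizes can be sketched as follows. A ground node is linked bidirectionally to every node, a uniform random walk is iterated to its steady state, and the ground node's score is finally redistributed. The four-node network is a toy example, and the Physarum feedback mechanism itself is not shown:

```python
# Directed toy network as adjacency lists (who points to whom)
edges = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
nodes = sorted(set(edges) | {v for outs in edges.values() for v in outs})

# Add a ground node linked bidirectionally with every node
g = "_ground"
adj = {u: list(edges.get(u, [])) + [g] for u in nodes}
adj[g] = list(nodes)

score = {u: 1.0 for u in nodes}   # one unit of resource per real node
score[g] = 0.0
for _ in range(200):              # iterate the random walk to steady state
    new = {u: 0.0 for u in adj}
    for u, outs in adj.items():
        share = score[u] / len(outs)
        for v in outs:
            new[v] += share
    score = new

# Distribute the ground node's score evenly and drop it
final = {u: score[u] + score[g] / len(nodes) for u in nodes}
ranking = sorted(final, key=final.get, reverse=True)
```

Node "c", which collects links from three nodes, comes out on top; PhysarumSpreader would additionally weight each share by the edge's (feedback-adjusted) conductivity.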

  10. PhysarumSpreader: A New Bio-Inspired Methodology for Identifying Influential Spreaders in Complex Networks.

    Science.gov (United States)

    Wang, Hongping; Zhang, Yajuan; Zhang, Zili; Mahadevan, Sankaran; Deng, Yong

    2015-01-01

    Identifying influential spreaders in networks, which contributes to optimizing the use of available resources and efficient spreading of information, is of great theoretical significance and practical value. A random-walk-based algorithm, LeaderRank, has been shown to be an effective and efficient method for recognizing leaders in social networks, which even outperforms the well-known PageRank method. As LeaderRank was initially developed for binary directed networks, extensions to weighted networks should be studied. In this paper, a generalized algorithm, PhysarumSpreader, is proposed by combining LeaderRank with a positive feedback mechanism inspired by an amoeboid organism called Physarum polycephalum. By taking edge weights into consideration and adding the positive feedback mechanism, PhysarumSpreader is applicable to both directed and undirected networks with weights. Taking two real networks as examples, the effectiveness of the proposed method is demonstrated by comparison with other standard centrality measures.

  11. A new methodology for strategic planning using technological maps and detection of emerging research fronts applied to radiopharmacy

    International Nuclear Information System (INIS)

    Didio, Robert Joseph

    2011-01-01

    This research aims at the development of a new methodology to support strategic planning, using the process of elaboration of technological maps (TRM - Technological Roadmaps) associated with the detection of emerging research fronts in databases of scientific publications and patents. The innovation introduced in this research is the customization of the TRM process to radiopharmacy and, specifically, its association with the technique of detection of emerging research fronts, in order to validate results and to establish a new and very useful methodology for the strategic planning of this business area. The business unit DIRF - Diretoria de Radiofarmacia - of IPEN CNEN/SP was used as the basis for the study and implementation of the methodology presented in this work. (author)

  12. Equivalent electrical network model approach applied to a double acting low temperature differential Stirling engine

    International Nuclear Information System (INIS)

    Formosa, Fabien; Badel, Adrien; Lottin, Jacques

    2014-01-01

    Highlights: • An equivalent electrical network modeling of Stirling engines is proposed. • This model is applied to a membrane low temperature double acting Stirling engine. • The operating conditions (self-startup and steady state behavior) are defined. • An experimental engine is presented and tested. • The model is validated against experimental results. - Abstract: This work presents a network model to simulate the periodic behavior of a double acting free piston type Stirling engine. Each component of the engine is considered independently and its equivalent electrical circuit derived. When assembled in a global electrical network, a global model of the engine is established. Its steady behavior can be obtained by the analysis of the transfer function for one phase from the piston to the expansion chamber. It is then possible to simulate the dynamic (steady state stroke and operation frequency) as well as the thermodynamic performances (output power and efficiency) for given mean pressure, heat source and heat sink temperatures. The motion amplitude in particular can be determined by the spring-mass properties of the moving parts and the main nonlinear effects, which are taken into account in the model. The thermodynamic features of the model have then been validated using the classical isothermal Schmidt analysis for a given stroke. A three-phase low temperature differential double acting free membrane architecture has been built and tested. The experimental results are compared with the model and a satisfactory agreement is obtained. The stroke and operating frequency are predicted with less than 2% error, whereas the output power discrepancy is about 30%. Finally, some optimization routes are suggested to improve the design and maximize the performance, aiming at waste heat recovery applications

  13. Applying Bayesian neural networks to separate neutrino events from backgrounds in reactor neutrino experiments

    International Nuclear Information System (INIS)

    Xu, Y; Meng, Y X; Xu, W W

    2008-01-01

    In this paper, a toy detector has been designed to simulate central detectors in reactor neutrino experiments. Samples of neutrino events and three major backgrounds from the Monte-Carlo simulation of the toy detector are generated in the signal region. Bayesian Neural Networks (BNN) are applied to separate neutrino events from backgrounds in reactor neutrino experiments. As a result, most of the neutrino events and uncorrelated background events in the signal region can be identified with BNN, and part of the events of each of the fast neutron and 8He/9Li backgrounds in the signal region can be identified with BNN. The signal-to-noise ratio in the signal region is thus enhanced with BNN. The neutrino discrimination increases with the increase of the neutrino rate in the training sample. However, the background discriminations decrease with the decrease of the background rate in the training sample

  14. Common faults in turbines and applying neural networks in order to fault diagnostic by vibration analysis

    International Nuclear Information System (INIS)

    Masoudifar, M.; AghaAmini, M.

    2001-01-01

    Today, fault diagnosis of rotating machinery based on vibration analysis is an effective method in designing predictive maintenance programs. In this method, the vibration level of the turbines is monitored and, if it is higher than the allowable limit, the vibrational data are analyzed and growing faults are detected. But because of the high complexity of system monitoring, the interpretation of the measured data is difficult. Therefore, the design of fault diagnostic expert systems using the experts' technical experience and knowledge seems to be the best solution. In this paper, several common faults in turbines are first studied, and how neural networks can be applied to interpret the vibrational data for fault diagnosis is explained

  15. Neural Network Blind Equalization Algorithm Applied in Medical CT Image Restoration

    Directory of Open Access Journals (Sweden)

    Yunshan Sun

    2013-01-01

    Full Text Available A new algorithm for iterative blind image restoration is presented in this paper. The method extends blind equalization from the one-dimensional signal case to images. A neural network blind equalization algorithm is derived and used in conjunction with Zigzag coding to restore the original image. As a result, the effect of the PSF can be removed by using the proposed algorithm, which helps eliminate intersymbol interference (ISI). In order to obtain an estimate of the original image, the method optimizes a constant modulus blind equalization cost function applied to the grayscale CT image using the conjugate gradient method. Analysis of the convergence performance of the algorithm verifies the feasibility of this method theoretically; meanwhile, simulation results and performance evaluations with recent image quality metrics are provided to assess the effectiveness of the proposed method.

  16. Virtual target tracking (VTT) as applied to mobile satellite communication networks

    Science.gov (United States)

    Amoozegar, Farid

    1999-08-01

    Traditionally, target tracking has been used for aerospace applications, such as, tracking highly maneuvering targets in a cluttered environment for missile-to-target intercept scenarios. Although the speed and maneuvering capability of current aerospace targets demand more efficient algorithms, many complex techniques have already been proposed in the literature, which primarily cover the defense applications of tracking methods. On the other hand, the rapid growth of Global Communication Systems, Global Information Systems (GIS), and Global Positioning Systems (GPS) is creating new and more diverse challenges for multi-target tracking applications. Mobile communication and computing can very well appreciate a huge market for Cellular Communication and Tracking Devices (CCTD), which will be tracking networked devices at the cellular level. The objective of this paper is to introduce a new concept, i.e., Virtual Target Tracking (VTT) for commercial applications of multi-target tracking algorithms and techniques as applied to mobile satellite communication networks. It would be discussed how Virtual Target Tracking would bring more diversity to target tracking research.

  17. The Service-Learning methodology applied to Operations Management: From the Operations Plan to business start up.

    Directory of Open Access Journals (Sweden)

    Constantino García-Ramos

    2017-06-01

    After developing this teaching innovation activity, we can conclude that SL is a good methodology to improve the academic, personal and social development of students, suggesting that it is possible to combine their academic success with the social commitment of the University.

  18. Methodology and computer program for applying improved, inelastic ERR for the design of mine layouts on planar reefs.

    CSIR Research Space (South Africa)

    Spottiswoode, SM

    2002-08-01

    Full Text Available and the visco-plastic models of Napier and Malan (1997) and Malan (2002). Methodologies and a computer program (MINF) are developed during this project that write synthetic catalogues of seismic events to simulate the rock response to mining...

  19. Hydrogen safety risk assessment methodology applied to a fluidized bed membrane reactor for autothermal reforming of natural gas

    NARCIS (Netherlands)

    Psara, N.; Van Sint Annaland, M.; Gallucci, F.

    2015-01-01

    The scope of this paper is the development and implementation of a safety risk assessment methodology to highlight hazards potentially prevailing during autothermal reforming of natural gas for hydrogen production in a membrane reactor, as well as to reveal potential accidents related to hydrogen

  20. Process design for isolation of soybean oil bodies by applying the product-driven process synthesis methodology

    NARCIS (Netherlands)

    Zderic, A.; Taraksci, T.; Hooshyar, N.; Zondervan, E.; Meuldijk, J.

    2014-01-01

    The present work describes the product driven process synthesis (PDPS) methodology for the conceptual design of extraction of intact oil bodies from soybeans. First, in this approach consumer needs are taken into account and based on these needs application of the final product (oil bodies) is

  1. Assessing Community Informatics: A Review of Methodological Approaches for Evaluating Community Networks and Community Technology Centers.

    Science.gov (United States)

    O'Neil, Dara

    2002-01-01

    Analyzes the emerging community informatics evaluation literature to develop an understanding of the indicators used to gauge project impacts in community networks and community technology centers. The study finds that community networks and community technology center assessments fall into five key areas: strong democracy; social capital;…

  2. Hybrid response surface methodology-artificial neural network optimization of drying process of banana slices in a forced convective dryer.

    Science.gov (United States)

    Taheri-Garavand, Amin; Karimi, Fatemeh; Karimi, Mahmoud; Lotfi, Valiullah; Khoobbakht, Golmohammad

    2018-06-01

    The aim of the study is to fit predictive models using response surface methodology and an artificial neural network, and to find the operating conditions that maximize acceptability, via the desirability-function approach, for a hot-air drying process of banana slices. The drying air temperature, air velocity, and drying time were chosen as independent factors, and moisture content, drying rate, energy efficiency, and exergy efficiency were the dependent variables (responses) of the drying process. A rotatable central composite design was used to develop models for the responses in the response surface methodology. Moreover, isoresponse contour plots were useful for predicting results while performing only a limited set of experiments. The optimum operating conditions obtained from the artificial neural network models were moisture content 0.14 g/g, drying rate 1.03 g water/g h, energy efficiency 0.61, and exergy efficiency 0.91, when the air temperature, air velocity, and drying time were equal to -0.42 (74.2 ℃), 1.00 (1.50 m/s), and -0.17 (2.50 h) in coded units, respectively.
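
    The desirability-function step described above can be sketched in a few lines; the response bounds and target directions below are illustrative assumptions, not values from the study:

```python
def desirability_max(y, lo, hi):
    """Larger-is-better desirability: 0 below lo, 1 above hi, linear between."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return (y - lo) / (hi - lo)

def desirability_min(y, lo, hi):
    """Smaller-is-better desirability: 1 below lo, 0 above hi."""
    return desirability_max(-y, -hi, -lo)

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical bounds for one drying condition (responses from the abstract):
d_moisture = desirability_min(0.14, 0.10, 0.40)   # want low moisture content
d_energy   = desirability_max(0.61, 0.30, 0.90)   # want high energy efficiency
d_exergy   = desirability_max(0.91, 0.50, 1.00)   # want high exergy efficiency
D = overall_desirability([d_moisture, d_energy, d_exergy])
```

    An optimizer would then search the coded factor space for the settings that maximize D.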

  3. Crowdsourcing methodology: establishing the Cervid Disease Network and the North American Mosquito Project.

    Science.gov (United States)

    Cohnstaedt, Lee W; Snyder, Darren; Maki, Elin; Schafer, Shawn

    2016-06-30

    Crowdsourcing is obtaining needed services, ideas, or content by soliciting contributions from a large group of people. This new method of acquiring data works well for single reports, but fails when long-term data collection is needed, mainly due to reporting fatigue or failure of repeated sampling by individuals. To establish a crowdsourced collections network researchers must recruit, reward, and retain contributors to the project. These 3 components of crowdsourcing are discussed using the United States Department of Agriculture social networks, the Cervid Disease Network, and the North American Mosquito Project. The North American Mosquito Project is a large network of professional mosquito control districts and public health agencies, which collects mosquito specimens for genetic studies. The Cervid Disease Network is a crowd-sourced disease monitoring system, which uses voluntary sentinel farms or wildlife programs throughout the United States of America to report the onset and severity of diseases in local areas for pathogen surveillance studies.

  4. Uncovering the Transnational Networks, Organisational Techniques and State-Corporate Ties Behind Grand Corruption: Building an Investigative Methodology

    Directory of Open Access Journals (Sweden)

    Kristian Lasslett

    2017-11-01

    Full Text Available While grand corruption is a major global governance challenge, researchers notably lack a systematic methodology for conducting qualitative research into its complex forms. To address this lacuna, the following article sets out and applies the corruption investigative framework (CIF), a methodology designed to generate a systematic, transferable approach for grand corruption research. Its utility will be demonstrated employing a case study that centres on an Australian-led megaproject being built in Papua New Guinea’s capital city, Port Moresby. Unlike conventional analyses of corruption in Papua New Guinea, which emphasise its local characteristics and patrimonial qualities, application of CIF uncovered new empirical layers that centre on transnational state-corporate power, the ambiguity of civil society, and the structural inequalities that marginalise resistance movements. The important theoretical consequences of the findings and underpinning methodology are explored.

  5. Methodological aspects of market study on residential, commercial and industrial sectors, of the Conversion Project for natural gas of existing network in Sao Paulo city

    International Nuclear Information System (INIS)

    Kishinami, R.I.; Perazza, A.A.

    1991-01-01

    The methodological aspects of market study, developed at the geographical area served by existing network of naphtha gas, which will be converted to natural gas in a two years conversion program are presented. (author)

  6. Hybrid inversions of CO2 fluxes at regional scale applied to network design

    Science.gov (United States)

    Kountouris, Panagiotis; Gerbig, Christoph; Koch, Frank-Thomas

    2013-04-01

    Long term observations of atmospheric greenhouse gas measuring stations, located at representative regions over the continent, improve our understanding of greenhouse gas sources and sinks. These mixing ratio measurements can be linked to surface fluxes by atmospheric transport inversions. Within the upcoming years new stations are to be deployed, which requires decision making tools with respect to the location and the density of the network. We are developing a method to assess potential greenhouse gas observing networks in terms of their ability to recover specific target quantities. As target quantities we use CO2 fluxes aggregated to specific spatial and temporal scales. We introduce a high resolution inverse modeling framework, which attempts to combine advantages from pixel based inversions with those of a carbon cycle data assimilation system (CCDAS). The hybrid inversion system consists of the Lagrangian transport model STILT, the diagnostic biosphere model VPRM and a Bayesian inversion scheme. We aim to retrieve the spatiotemporal distribution of net ecosystem exchange (NEE) at a high spatial resolution (10 km x 10 km) by inverting for spatially and temporally varying scaling factors for gross ecosystem exchange (GEE) and respiration (R) rather than solving for the fluxes themselves. Thus the state space includes parameters for controlling photosynthesis and respiration, but unlike in a CCDAS it allows for spatial and temporal variations, which can be expressed as NEE(x,y,t) = λG(x,y,t) GEE(x,y,t) + λR(x,y,t) R(x,y,t) . We apply spatially and temporally correlated uncertainties by using error covariance matrices with non-zero off-diagonal elements. Synthetic experiments will test our system and select the optimal a priori error covariance by using different spatial and temporal correlation lengths on the error statistics of the a priori covariance and comparing the optimized fluxes against the 'known truth'. As 'known truth' we use independent fluxes
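
    A minimal sketch of the scaling-factor idea NEE = λG·GEE + λR·R: with noise-free synthetic observations and a single scalar pair (λG, λR), ordinary least squares recovers the factors exactly. The flux values below are invented; the actual system inverts spatiotemporally varying factors with full error covariance matrices:

```python
# Toy retrieval of the scaling factors in NEE = lambda_G * GEE + lambda_R * R.
# GEE and R are "modeled" flux components; obs are synthetic observations
# built from known true factors, so the 2x2 normal equations recover them.
gee = [-8.0, -6.5, -3.0, -0.5]     # gross ecosystem exchange (uptake, negative)
resp = [2.0, 2.5, 3.5, 4.0]        # respiration (release, positive)
true_lg, true_lr = 1.2, 0.9
obs = [true_lg * g + true_lr * r for g, r in zip(gee, resp)]

# Normal equations for least squares with the two unknowns lambda_G, lambda_R:
sgg = sum(g * g for g in gee)
srr = sum(r * r for r in resp)
sgr = sum(g * r for g, r in zip(gee, resp))
sgo = sum(g * o for g, o in zip(gee, obs))
sro = sum(r * o for r, o in zip(resp, obs))
det = sgg * srr - sgr * sgr
lam_g = (sgo * srr - sgr * sro) / det
lam_r = (sgg * sro - sgr * sgo) / det
```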

  7. Applying self-organizing map and modified radial based neural network for clustering and routing optimal path in wireless network

    Science.gov (United States)

    Hoomod, Haider K.; Kareem Jebur, Tuka

    2018-05-01

    Mobile ad hoc networks (MANETs) play a critical role in today’s wireless ad hoc network research and consist of active nodes that can move freely. Because clustering is a very important problem in such networks, we propose a method based on modified radial basis function networks (RBFN) and the Self-Organizing Map (SOM). These networks can be improved by the use of clusters because of the heavy congestion in the whole network. In such a system, the performance of a MANET is improved by splitting the whole network into various clusters using the SOM, and clustering performance is further improved by cluster-head selection and the choice of the number of clusters. The modified radial basis neural network is a simple, adaptable and efficient method: it increases node lifetime, packet delivery ratio and network throughput, and connections become more useful because the optimal path has the best parameters among candidate paths, including the best bit rate and the longest-lived link with minimum delay. The proposed routing algorithm relies on a group of factors and parameters to select the path between two points in the wireless network. The SOM clustering time averages 1-10 msec for stationary nodes and 8-75 msec for mobile nodes, while the routing time ranges from 92 to 510 msec. The proposed system is faster than Dijkstra's algorithm by 150-300%, and faster than the unmodified RBFNN by 145-180%.
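
    The SOM-based clustering step can be illustrated with a winner-take-all simplification (two units, no neighborhood function); the node positions and training parameters are invented for the sketch:

```python
# Minimal competitive-learning (winner-take-all SOM) clustering of 2-D node
# positions into 2 clusters.
nodes = [(0.0, 0.0), (1.0, 0.5), (0.5, 1.0),       # group near the origin
         (10.0, 10.0), (11.0, 10.5), (10.5, 9.0)]  # group far away

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# Initialize the two SOM units from one node of each apparent region.
weights = [list(nodes[0]), list(nodes[3])]

for epoch in range(20):
    lr = 0.5 * (1.0 - epoch / 20.0)               # decaying learning rate
    for p in nodes:
        bmu = min(range(2), key=lambda i: dist2(weights[i], p))
        for d in range(2):                         # move winning unit toward p
            weights[bmu][d] += lr * (p[d] - weights[bmu][d])

clusters = [min(range(2), key=lambda i: dist2(weights[i], p)) for p in nodes]
```

    A full SOM would also update units adjacent to the winner on the map lattice; with only two units this collapses to the competitive update above.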

  8. Artificial neural network and response surface methodology modeling in mass transfer parameters predictions during osmotic dehydration of Carica papaya L.

    Directory of Open Access Journals (Sweden)

    J. Prakash Maran

    2013-09-01

    Full Text Available In this study, a comparative approach was made between an artificial neural network (ANN) and response surface methodology (RSM) for predicting the mass transfer parameters of osmotic dehydration of papaya. The effects of process variables such as temperature, osmotic solution concentration, and agitation speed on water loss, weight reduction, and solid gain during osmotic dehydration were investigated using a three-level three-factor Box-Behnken experimental design. The same design was utilized to train a feed-forward multilayer perceptron (MLP) ANN with the back-propagation algorithm. The predictive capabilities of the two methodologies were compared in terms of root mean square error (RMSE), mean absolute error (MAE), standard error of prediction (SEP), model predictive error (MPE), chi-square statistic (χ2), and coefficient of determination (R2) based on the validation data set. The results showed that a properly trained ANN model is more accurate in prediction than the RSM model.
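
    The comparison criteria used above (RMSE, MAE, R2) are easy to reproduce; the observed values and model predictions below are hypothetical, not the paper's data:

```python
import math

def rmse(y, yhat):
    """Root mean square error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def r2(y, yhat):
    """Coefficient of determination."""
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

observed = [10.0, 12.0, 15.0, 18.0, 20.0]
ann_pred = [10.2, 11.8, 15.1, 17.9, 20.1]   # hypothetical ANN predictions
rsm_pred = [ 9.5, 12.6, 14.2, 18.8, 19.0]   # hypothetical RSM predictions
better_ann = rmse(observed, ann_pred) < rmse(observed, rsm_pred)
```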

  9. Development of an analysis methodology applied to 4πβ-γ software coincidence data acquisition system

    International Nuclear Information System (INIS)

    Brancaccio, Franco; Dias, Mauro da Silva; Toledo, Fabio de

    2009-01-01

    The present work describes the new software methodology under development at the IPEN Nuclear Metrology Laboratory for radionuclide standardization with the 4πβ-γ coincidence technique. The software includes the Coincidence Graphic User Interface (GUI) and the Coincidence Analysis Program. The first results for a 60Co sample measurement are discussed and compared to the results obtained with two different conventional coincidence systems. (author)

  10. Evaluation of constraint methodologies applied to a shallow-flaw cruciform bend specimen tested under biaxial loading conditions

    International Nuclear Information System (INIS)

    Bass, B.R.; McAfee, W.J.; Williams, P.T.; Pennell, W.E.

    1998-01-01

    A technology to determine shallow-flaw fracture toughness of reactor pressure vessel (RPV) steels is being developed for application to the safety assessment of RPVs containing postulated shallow surface flaws. Matrices of cruciform beam tests were developed to investigate and quantify the effects of temperature, biaxial loading, and specimen size on fracture initiation toughness of two-dimensional (constant depth), shallow surface flaws. The cruciform beam specimens were developed at Oak Ridge National Laboratory (ORNL) to introduce a prototypic, far-field, out-of-plane biaxial stress component in the test section that approximates the nonlinear stresses resulting from pressurized-thermal-shock or pressure-temperature loading of an RPV. Tests were conducted under biaxial load ratios ranging from uniaxial to equibiaxial. These tests demonstrated that biaxial loading can have a pronounced effect on shallow-flaw fracture toughness in the lower transition temperature region for RPV materials. The cruciform fracture toughness data were used to evaluate fracture methodologies for predicting the observed effects of biaxial loading on shallow-flaw fracture toughness. Initial emphasis was placed on assessment of stress-based methodologies, namely the J-Q formulation, the Dodds-Anderson toughness scaling model, and the Weibull approach. Applications of these methodologies based on the hydrostatic stress fracture criterion indicated an effect of loading biaxiality on fracture toughness; the conventional maximum principal stress criterion indicated no effect

  11. Direct cost analysis of intensive care unit stay in four European countries: applying a standardized costing methodology.

    Science.gov (United States)

    Tan, Siok Swan; Bakker, Jan; Hoogendoorn, Marga E; Kapila, Atul; Martin, Joerg; Pezzi, Angelo; Pittoni, Giovanni; Spronk, Peter E; Welte, Robert; Hakkaart-van Roijen, Leona

    2012-01-01

    The objective of the present study was to measure and compare the direct costs of intensive care unit (ICU) days at seven ICU departments in Germany, Italy, the Netherlands, and the United Kingdom by means of a standardized costing methodology. A retrospective cost analysis of ICU patients was performed from the hospital's perspective. The standardized costing methodology was developed on the basis of the availability of data at the seven ICU departments. It entailed the application of the bottom-up approach for "hotel and nutrition" and the top-down approach for "diagnostics," "consumables," and "labor." Direct costs per ICU day ranged from €1168 to €2025. Even though the distribution of costs varied by cost component, labor was the most important cost driver at all departments. The costs for "labor" amounted to €1629 at department G but were fairly similar at the other departments (€711 ± 115). Direct costs of ICU days vary widely between the seven departments. Our standardized costing methodology could serve as a valuable instrument to compare actual cost differences, such as those resulting from differences in patient case-mix. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  12. On the Development of Methodology for Planning and Cost-Modeling of a Wide Area Network

    OpenAIRE

    Ahmedi, Basri; Mitrevski, Pece

    2014-01-01

    The most important stages in designing a computer network in a wider geographical area include: definition of requirements, topological description, identification and calculation of relevant parameters (i.e. traffic matrix), determining the shortest path between nodes, quantification of the effect of various levels of technical and technological development of urban areas involved, the cost of technology, and the cost of services. These parameters differ for WAN networks in different regions...

  13. Development of Fast-Running Simulation Methodology Using Neural Networks for Load Follow Operation

    International Nuclear Information System (INIS)

    Seong, Seung-Hwan; Park, Heui-Youn; Kim, Dong-Hoon; Suh, Yong-Suk; Hur, Seop; Koo, In-Soo; Lee, Un-Chul; Jang, Jin-Wook; Shin, Yong-Chul

    2002-01-01

    A new fast-running analytic model has been developed for analyzing the load follow operation. The new model was based on neural network theory, which has the capability of modeling the input/output relationships of a nonlinear system. The new model is made up of two error back-propagation neural networks and procedures to calculate core parameters, such as the distributions and density of xenon, in a quasi-steady-state core such as that of load follow operation. One neural network is designed to retrieve the axial offset of the power distribution, and the other the reactivity corresponding to a given core condition. The training data sets for the neural networks in the new model were generated with a three-dimensional nodal code together with the measured data of the first-day test of load follow operation. Using the new model, the simulation results of the 5-day load follow test in a pressurized water reactor show good agreement between the simulated data and the actual measured data. The required computing time for simulating a load follow operation is comparable to that of a fast-running lumped model. Moreover, the new model does not require additional engineering factors to compensate for the difference between actual measurements and analysis results, because neural networks have an inherent capability to learn new situations

  14. Mathematical Modeling and Analysis Methodology for Opportunistic Routing in Wireless Multihop Networks

    Directory of Open Access Journals (Sweden)

    Wang Dongyang

    2015-01-01

    Full Text Available Modeling the forwarding feature and analyzing the performance theoretically for opportunistic routing in wireless multihop networks are of great challenge. To address this issue, a generalized geometric distribution (GGD) is first proposed. Based on the GGD, the forwarding probability between any two forwarding candidates can be calculated, and it can be proved that the successful delivery rate after several transmissions by the forwarding candidates is independent of the priority rule. Then, a discrete-time queuing model is proposed to analyze the mean end-to-end delay (MED) of a regular opportunistic routing scheme with knowledge of the forwarding probability. By deriving the steady-state joint generating function of the queue length distribution, MED for directly connected networks and for some special cases of nondirectly connected networks can be determined. In addition, an approximation approach is proposed to assess MED for the general cases of nondirectly connected networks. Comparison with a large number of simulation results validates the rationality of the analysis. Both the analysis and simulation results show that MED varies with the number of forwarding candidates; in directly connected networks in particular, MED increases more rapidly with the number of forwarding candidates than it does in nondirectly connected networks.
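
    A generic sketch of prioritized opportunistic forwarding (using plain per-link reception probabilities, not the paper's generalized geometric distribution): candidate i relays only when it receives the broadcast and every higher-priority candidate misses it, so the per-candidate forwarding probabilities sum to the overall delivery probability:

```python
# Candidates are sorted by priority; p[i] is the probability that candidate i
# receives the packet on one broadcast.
def forwarding_probs(p):
    """Probability that each candidate ends up forwarding the packet."""
    probs, miss_all_higher = [], 1.0
    for pi in p:
        probs.append(pi * miss_all_higher)   # i receives, all higher miss
        miss_all_higher *= (1.0 - pi)
    return probs

def delivery_prob(p):
    """Probability that at least one candidate receives the broadcast."""
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

p = [0.6, 0.5, 0.3]          # invented reception probabilities
fw = forwarding_probs(p)     # [0.6, 0.5*0.4, 0.3*0.4*0.5]
```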

  15. A DNA-Inspired Encryption Methodology for Secure, Mobile Ad Hoc Networks

    Science.gov (United States)

    Shaw, Harry

    2012-01-01

    Users are pushing for greater physical mobility with their network and Internet access. Mobile ad hoc networks (MANET) can provide an efficient mobile network architecture, but security is a key concern. A figure summarizes differences in the state of network security for MANET and fixed networks. MANETs require the ability to distinguish trusted peers, and tolerate the ingress/egress of nodes on an unscheduled basis. Because the networks by their very nature are mobile and self-organizing, use of a Public Key Infrastructure (PKI), X.509 certificates, RSA, and nonce exchanges becomes problematic if the ideal of MANET is to be achieved. Molecular biology models such as DNA evolution can provide a basis for a proprietary security architecture that achieves high degrees of diffusion and confusion, and resistance to cryptanalysis. A proprietary encryption mechanism was developed that uses the principles of DNA replication and steganography (hidden word cryptography) for confidentiality and authentication. The foundation of the approach includes organization of coded words and messages using base pairs organized into genes, an expandable genome consisting of DNA-based chromosome keys, and DNA-based message encoding, replication, evolution, and fitness. In evolutionary computing, a fitness algorithm determines whether candidate solutions, in this case encrypted messages, are sufficiently encrypted to be transmitted. The technology provides a mechanism for confidential electronic traffic over a MANET without a PKI for authenticating users.

  16. Optimization of extraction of linarin from Flos chrysanthemi indici by response surface methodology and artificial neural network.

    Science.gov (United States)

    Pan, Hongye; Zhang, Qing; Cui, Keke; Chen, Guoquan; Liu, Xuesong; Wang, Longhu

    2017-05-01

    The extraction of linarin from Flos chrysanthemi indici by ethanol was investigated. Two modeling techniques, response surface methodology and artificial neural network, were adopted to optimize the process parameters: ethanol concentration, extraction period, extraction frequency, and solvent-to-material ratio. We showed that both methods provided good predictions, but the artificial neural network provided a better and more accurate result. The optimum process parameters were an ethanol concentration of 74%, an extraction period of 2 h, three extractions, and a solvent-to-material ratio of 12 mL/g. The experimental yield of linarin was 90.5%, which deviated less than 1.6% from the predicted result. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Gust factor based on research aircraft measurements: A new methodology applied to the Arctic marine boundary layer

    DEFF Research Database (Denmark)

    Suomi, Irene; Lüpkes, Christof; Hartmann, Jörg

    2016-01-01

    There is as yet no standard methodology for measuring wind gusts from a moving platform. To address this, we have developed a method to derive gusts from research aircraft data. First we evaluated four different approaches, including Taylor's hypothesis of frozen turbulence, to derive the gust...... in unstable conditions (R2=0.52). The mean errors for all methods were low, from -0.02 to 0.05, indicating that wind gust factors can indeed be measured from research aircraft. Moreover, we showed that aircraft can provide gust measurements within the whole boundary layer, if horizontal legs are flown...

  18. APPLYING LCC METHODOLOGY FOR THE EVALUATION OF THE EFFECTIVENESS OF AN INVESTMENT OF PROJECTS OF THE SEWAGE TREATMENT PLANT

    Directory of Open Access Journals (Sweden)

    Anatoli Hurynovich

    2016-06-01

    Full Text Available The article addresses current problems in evaluating the effectiveness of investment in new sewage treatment plants and in the modernization of existing ones, including optimization of sewage treatment costs. It presents life-cycle evaluation as an adequate tool for choosing the best variant of sewage treatment plant modernization, describes the characteristics of the LCC methodology, and gives examples of its use in assessing sewage treatment technology.

  19. Applying Statistical and Complex Network Methods to Explore the Key Signaling Molecules of Acupuncture Regulating Neuroendocrine-Immune Network

    Directory of Open Access Journals (Sweden)

    Kuo Zhang

    2018-01-01

    Full Text Available The mechanisms of acupuncture are still unclear. In order to reveal the regulatory effect of manual acupuncture (MA) on the neuroendocrine-immune (NEI) network and identify the key signaling molecules while MA modulates the NEI network, we used a rat complete Freund’s adjuvant (CFA) model to observe the analgesic and anti-inflammatory effects of MA, and we used statistical and complex network methods to analyze data on the expression of 55 common signaling molecules of the NEI network in the ST36 (Zusanli) acupoint, serum, and hind foot pad tissue. The results indicate that MA had significant analgesic and anti-inflammatory effects on CFA rats; the identified signaling molecules may play a key role in MA's regulation of the NEI network, but further research is needed.

  20. A parallel multi-domain solution methodology applied to nonlinear thermal transport problems in nuclear fuel pins

    Energy Technology Data Exchange (ETDEWEB)

    Philip, Bobby, E-mail: philipb@ornl.gov [Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, TN 37831 (United States); Berrill, Mark A.; Allu, Srikanth; Hamilton, Steven P.; Sampath, Rahul S.; Clarno, Kevin T. [Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, TN 37831 (United States); Dilts, Gary A. [Los Alamos National Laboratory, PO Box 1663, Los Alamos, NM 87545 (United States)

    2015-04-01

    This paper describes an efficient and nonlinearly consistent parallel solution methodology for solving coupled nonlinear thermal transport problems that occur in nuclear reactor applications over hundreds of individual 3D physical subdomains. Efficiency is obtained by leveraging knowledge of the physical domains, the physics on individual domains, and the couplings between them for preconditioning within a Jacobian-Free Newton-Krylov method. Details of the computational infrastructure that enabled this work, namely the open-source Advanced Multi-Physics (AMP) package developed by the authors, are described. Details of verification and validation experiments, and of parallel performance analyses in weak and strong scaling studies demonstrating the achieved efficiency of the algorithm, are presented. Furthermore, numerical experiments demonstrate that the preconditioner developed is independent of the number of fuel subdomains in a fuel rod, which is particularly important when simulating different types of fuel rods. Finally, we demonstrate the power of the coupling methodology by considering problems with couplings between surface and volume physics and coupling of nonlinear thermal transport in fuel rods to an external radiation transport code.

  1. Methodology for applying monitored natural attenuation to petroleum hydrocarbon-contaminated ground-water systems with examples from South Carolina

    Science.gov (United States)

    Chapelle, Frank H.; Robertson, John F.; Landmeyer, James E.; Bradley, Paul M.

    2000-01-01

    Natural attenuation processes such as dispersion, advection, and biodegradation serve to decrease concentrations of dissolved contaminants as they are transported in all ground-water systems. However, the efficiency of these natural attenuation processes, and the degree to which they help attain remediation goals, varies considerably from site to site. This report provides a methodology for quantifying various natural attenuation mechanisms. This methodology incorporates information on (1) concentrations of contaminants in space and/or time; (2) ambient reduction/oxidation (redox) conditions; (3) rates and directions of ground-water flow; (4) rates of contaminant biodegradation; and (5) demographic considerations, such as the presence of nearby receptor exposure points or property boundaries. This document outlines the hydrologic, geochemical, and biologic data needed to assess the efficiency of natural attenuation, provides a screening tool for making preliminary assessments, and provides examples of how to determine when natural attenuation can be a useful component of site remediation at leaking underground storage tank sites.
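
    The interplay of flow and biodegradation rates can be illustrated with the standard first-order attenuation relation C(x) = C0·exp(-λx/v); this is a generic screening formula with invented numbers, not the report's full methodology:

```python
import math

def downgradient_concentration(c0, lam, v, x):
    """First-order decay along a flow path: C(x) = C0 * exp(-lam * x / v).

    c0  -- source concentration (mg/L)
    lam -- first-order biodegradation rate constant (1/day)
    v   -- ground-water seepage velocity (m/day)
    x   -- distance downgradient from the source (m)
    """
    return c0 * math.exp(-lam * x / v)

# Hypothetical plume: 5 mg/L source, 0.01/day decay, 0.1 m/day seepage velocity.
c_at_50m = downgradient_concentration(5.0, 0.01, 0.1, 50.0)
```

    A screening comparison would check whether the predicted concentration at the nearest receptor falls below the remediation goal.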

  2. Using the OASES-A to illustrate how network analysis can be applied to understand the experience of stuttering.

    Science.gov (United States)

    Siew, Cynthia S Q; Pelczarski, Kristin M; Yaruss, J Scott; Vitevitch, Michael S

    Network science uses mathematical and computational techniques to examine how individual entities in a system, represented by nodes, interact, as represented by connections between nodes. This approach has been used by Cramer et al. (2010) to make "symptom networks" to examine various psychological disorders. In the present analysis we examined a network created from the items in the Overall Assessment of the Speaker's Experience of Stuttering-Adult (OASES-A), a commonly used measure for evaluating adverse impact in the lives of people who stutter. The items of the OASES-A were represented as nodes in the network. Connections between nodes were placed if responses to those two items in the OASES-A had a correlation coefficient greater than ±0.5. Several network analyses revealed which nodes were "important" in the network. Several centrally located nodes and "key players" in the network were identified. A community detection analysis found groupings of nodes that differed slightly from the subheadings of the OASES-A. Centrally located nodes and "key players" in the network may help clinicians prioritize treatment. The different community structure found for people who stutter suggests that the way people who stutter view stuttering may differ from the way that scientists and clinicians view stuttering. Finally, the present analyses illustrate how the network approach might be applied to other speech, language, and hearing disorders to better understand how those disorders are experienced and to provide insights for their treatment. Copyright © 2016 Elsevier Inc. All rights reserved.
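
    The construction described (items as nodes, an edge wherever the inter-item correlation exceeds ±0.5, then centrality) can be sketched with synthetic responses; the four "items" below are illustrative, not actual OASES-A content:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length response vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Synthetic Likert-style responses for four hypothetical questionnaire items.
items = {
    "A": [1, 2, 3, 4],
    "B": [2, 4, 6, 8],   # perfectly correlated with A
    "C": [4, 3, 2, 1],   # perfectly anti-correlated with A
    "D": [1, 1, 2, 1],   # only weakly related to the others
}

# Place an edge between two items when |r| exceeds the 0.5 threshold.
names = sorted(items)
edges = [(u, v) for i, u in enumerate(names) for v in names[i + 1:]
         if abs(pearson(items[u], items[v])) > 0.5]
degree = {n: sum(n in e for e in edges) for n in names}
```

    High-degree nodes here play the role of the "centrally located" items; richer centrality measures (betweenness, key players) follow the same pattern on the edge list.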

  3. Applying the Network Simulation Method for testing chaos in a resistively and capacitively shunted Josephson junction model

    Directory of Open Access Journals (Sweden)

    Fernando Gimeno Bellver

    Full Text Available In this paper, we explore the chaotic behavior of resistively and capacitively shunted Josephson junctions via the so-called Network Simulation Method. Such a numerical approach establishes a formal equivalence between physical transport processes and electrical networks, and hence it can be applied to efficiently deal with a wide range of differential systems. The generality underlying that electrical equivalence allows circuit theory to be applied to several scientific and technological problems. In this work, the Fast Fourier Transform has been applied for chaos detection purposes and the calculations have been carried out in PSpice, an electrical circuit software package. Overall, such a numerical approach makes it possible to solve Josephson differential models quickly. An empirical application regarding the study of the Josephson model completes the paper. Keywords: Electrical analogy, Network Simulation Method, Josephson junction, Chaos indicator, Fast Fourier Transform
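
    The FFT-based chaos check can be sketched with a hand-rolled DFT: a periodic signal concentrates its power in a single spectral bin, whereas a chaotic one (here the logistic map at r = 4, standing in for the Josephson dynamics) spreads it broadly. This indicator is a generic illustration, not the paper's PSpice procedure:

```python
import math, cmath

def power_spectrum(x):
    """Naive DFT power spectrum of a mean-removed signal (bins 1..N//2-1)."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    spec = []
    for k in range(1, n // 2):
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        spec.append(abs(s) ** 2)
    return spec

def peak_fraction(x):
    """Fraction of total power in the strongest bin: near 1 for a pure tone,
    small for a broadband (chaotic) signal."""
    spec = power_spectrum(x)
    return max(spec) / sum(spec)

n = 128
sine = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]   # periodic
x, logistic = 0.3, []
for _ in range(n):                                             # chaotic map
    x = 4.0 * x * (1.0 - x)
    logistic.append(x)
```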

  4. Unit Commitment Towards Decarbonized Network Facing Fixed and Stochastic Resources Applying Water Cycle Optimization

    Directory of Open Access Journals (Sweden)

    Heba-Allah I. ElAzab

    2018-05-01

    Full Text Available This paper presents a trustworthy unit commitment study to schedule both Renewable Energy Resources (RERs) and conventional power plants to potentially decarbonize the electrical network. The study employed a system with three IEEE thermal (coal-fired) power plants as dispatchable distributed generators, one wind plant and one solar plant as stochastic distributed generators, and Plug-in Electric Vehicles (PEVs), which can act either as loads or as generators based on their charging schedule. This paper investigates the unit commitment scheduling objective of minimizing the Combined Economic Emission Dispatch (CEED). To reduce combined emission costs while integrating more renewable energy resources and PEVs, there is an essential need to decarbonize the existing system, that is, to reduce the percentage of CO2 emissions. The uncertain behavior of wind and solar energies causes imbalance penalty costs, and PEVs are proposed to overcome their intermittent nature. It is important to optimally integrate and schedule the stochastic resources, including wind and solar energies and PEV charge and discharge processes, with the dispatchable resources, the three IEEE thermal (coal-fired) power plants. The Water Cycle Optimization Algorithm (WCOA) is an efficient and intelligent meta-heuristic technique employed to solve the economic emission dispatch problem for scheduling both dispatchable and stochastic resources. The goal of this study is to obtain the unit commitment solution that minimizes the combined cost function, including CO2 emission costs, by applying the WCOA. To validate the technique, the results are compared with those obtained from the Dynamic Programming (DP) algorithm, a conventional numerical technique, and from the Genetic Algorithm (GA), another meta-heuristic technique.

  5. Neural networks prediction and fault diagnosis applied to stationary and non stationary ARMA (Autoregressive moving average) modeled time series

    International Nuclear Information System (INIS)

    Marseguerra, M.; Minoggio, S.; Rossi, A.; Zio, E.

    1992-01-01

    The correlated noise affecting many industrial plants under stationary or cyclo-stationary conditions - nuclear reactors included - has been successfully modeled by autoregressive moving average (ARMA) techniques, owing to the versatility of this approach. The relatively recent neural network methods have similar features, and much effort is being devoted to exploring their usefulness in forecasting and control. Identifying a signal by means of an ARMA model gives rise to the problem of selecting its correct order. Similar difficulties must be faced when applying neural network methods; specifically, particular care must be given to setting up the appropriate network topology, the data normalization procedure and the learning code. In the present paper the capability of some neural networks to learn ARMA and seasonal ARMA processes is investigated. The results of the tested cases look promising, since they indicate that the neural networks learn the underlying process with relative ease, so that their forecasting capability may represent a convenient fault diagnosis tool. (Author)
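
    As a concrete reference point for the ARMA side of the comparison, the sketch below fits an AR(2) model to a synthetic stationary series via the Yule-Walker equations and forms a one-step-ahead forecast; the process coefficients and the series are invented for illustration, and the paper's neural-network models are not reproduced here.

```python
# Fit an AR(2) model x_t = a1*x_{t-1} + a2*x_{t-2} + e_t to synthetic data
# by solving the Yule-Walker equations, then forecast one step ahead.
import random

random.seed(0)
# Simulate a stationary AR(2) process with true coefficients (0.6, -0.3)
x = [0.0, 0.0]
for _ in range(5000):
    x.append(0.6 * x[-1] - 0.3 * x[-2] + random.gauss(0.0, 1.0))
x = x[500:]                                    # drop the start-up transient

def autocov(series, lag):
    """Biased sample autocovariance at the given lag."""
    n = len(series)
    m = sum(series) / n
    return sum((series[i] - m) * (series[i - lag] - m)
               for i in range(lag, n)) / n

r0, r1, r2 = (autocov(x, k) for k in range(3))
# Yule-Walker for AR(2):  r1 = a1*r0 + a2*r1 ;  r2 = a1*r1 + a2*r0
det = r0 * r0 - r1 * r1
a1 = (r1 * r0 - r2 * r1) / det
a2 = (r2 * r0 - r1 * r1) / det
print(round(a1, 2), round(a2, 2))   # close to the true (0.6, -0.3)

# One-step-ahead forecast from the last two observations
forecast = a1 * x[-1] + a2 * x[-2]
```

    The order-selection problem the abstract mentions appears here as the choice of how many Yule-Walker equations to solve; a neural network faces the analogous choice in its input window and topology.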

  6. Separation and Determination of Honokiol and Magnolol in Chinese Traditional Medicines by Capillary Electrophoresis with the Application of Response Surface Methodology and Radial Basis Function Neural Network

    Science.gov (United States)

    Han, Ping; Luan, Feng; Yan, Xizu; Gao, Yuan; Liu, Huitao

    2012-01-01

    A method for the separation and determination of honokiol and magnolol in Magnolia officinalis and its medicinal preparation is developed by capillary zone electrophoresis and response surface methodology. The concentration of borate, content of organic modifier, and applied voltage are selected as variables. The optimized conditions (i.e., 16 mmol/L sodium tetraborate at pH 10.0, 11% methanol, applied voltage of 25 kV and UV detection at 210 nm) are obtained and successfully applied to the analysis of honokiol and magnolol in Magnolia officinalis and Huoxiang Zhengqi Liquid. Good separation is achieved within 6 min. The limits of detection are 1.67 µg/mL for honokiol and 0.83 µg/mL for magnolol, respectively. In addition, an artificial neural network with “3-7-1” structure based on the ratio of peak resolution to the migration time of the later component (Rs/t) given by Box-Behnken design is also reported, and the predicted results are in good agreement with the values given by the mathematic software and the experimental results. PMID:22291059

  7. Nutrients interaction investigation to improve Monascus purpureus FTC5391 growth rate using Response Surface Methodology and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Mohamad, R.

    2013-01-01

    Full Text Available Aims: Two vital factors are entailed for the successful growth and reproduction of microorganisms: certain environmental conditions, and nutrients as a source of energy. Manipulation of the nutritional requirement is the simplest and most effectual strategy to stimulate and enhance the activity of microorganisms. Methodology and Results: In this study, response surface methodology (RSM) and an artificial neural network (ANN) were employed to optimize the carbon and nitrogen sources in order to improve the growth rate of Monascus purpureus FTC5391, a new local isolate. The best models for optimization of growth rate were a multilayer full feed-forward incremental back propagation network, and a modified response surface model using backward elimination. The optimum condition for cell mass production was: sucrose 2.5%, yeast extract 0.045%, casamino acid 0.275%, sodium nitrate 0.48%, potato starch 0.045%, dextrose 1%, potassium nitrate 0.57%. The experimental cell mass production using this optimal condition was 21 mg/plate/12 days, which was 2.2-fold higher than under the standard condition (sucrose 5%, yeast extract 0.15%, casamino acid 0.25%, sodium nitrate 0.3%, potato starch 0.2%, dextrose 1%, potassium nitrate 0.3%). Conclusion, significance and impact of study: The results of RSM and ANN showed that all carbon and nitrogen sources tested had a significant effect on growth rate (P-value < 0.05). In addition, the use of RSM and ANN alongside each other provided a proper growth prediction model.

  8. A Methodology for the Optimization of Flow Rate Injection to Looped Water Distribution Networks through Multiple Pumping Stations

    Directory of Open Access Journals (Sweden)

    Christian León-Celi

    2016-12-01

    Full Text Available The optimal operation of a water distribution network is reached when the consumer demands are satisfied using the lowest quantity of energy while the minimum required pressure is maintained at the same time. One way to achieve this is through optimization of the flow rate injection, based on the setpoint curve concept. To that end, a methodology is proposed that allows assessment of the flow rate and pressure head that each pumping station has to provide for the proper functioning of the network while the minimum power consumption is kept. The methodology can be addressed in two ways: the discrete method and the continuous method. In the first, a finite set of flow rate combinations between the pumping stations is evaluated; in the second, the search for the optimal solution is performed using optimization algorithms. In this paper, the Hooke–Jeeves and Nelder–Mead algorithms are used. Both the hydraulics and the objective function used by the optimization are solved through EPANET and its Toolkit. Two case studies are evaluated, and the results of the application of the different methods are discussed.
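
    To make the continuous method concrete, the sketch below splits a fixed demand between two pumping stations with a minimal Hooke–Jeeves pattern search. The quadratic system-head curves and efficiencies are invented surrogates standing in for the EPANET hydraulic solution used in the paper, and the one-variable search is a simplification of the full flow-rate optimization.

```python
# Toy flow-allocation problem: split a total demand Q between two pumping
# stations so that total pumping power is minimal.  Head curves and
# efficiencies below are illustrative assumptions, not EPANET results.

def power(q, a, b, eta):
    """Pump power ~ Q*H(Q)/eta with system head H(Q) = a + b*Q^2
    (static lift plus friction losses); Q in m3/s, H in m, P in kW."""
    return 9.81 * q * (a + b * q * q) / eta

def total_power(x, demand=0.5):
    q1 = min(max(x[0], 0.0), demand)       # station 1 flow, clamped
    q2 = demand - q1                       # station 2 covers the rest
    return power(q1, 60., 40., 0.75) + power(q2, 55., 25., 0.70)

def hooke_jeeves(f, x, step=0.1, tol=1e-6):
    """Minimal Hooke-Jeeves search using exploratory moves only."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                y = list(x)
                y[i] += delta
                fy = f(y)
                if fy < fx:                # keep the improving move
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5                    # shrink the pattern
    return x, fx

flows, kw = hooke_jeeves(total_power, [0.25])
print(round(flows[0], 3), round(kw, 1))
```

    In the paper this scalar objective would instead be evaluated by EPANET through its Toolkit, with one decision variable per pumping station.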

  9. Network analysis as a tool for assessing environmental sustainability: applying the ecosystem perspective to a Danish water management system

    DEFF Research Database (Denmark)

    Pizzol, Massimo; Scotti, Marco; Thomsen, Marianne

    2013-01-01

    New insights into the sustainable use of natural resources in human systems can be gained through comparison with ecosystems via common indices. In both kinds of system, resources are processed by a number of users within a network, but we consider ecosystems as the only ones displaying sustainable patterns of growth and development. We applied Network Analysis (NA) for assessing the sustainability of a Danish municipal Water Management System (WMS). We identified water users within the WMS and represented their interactions as a network of water flows. We computed intensive and extensive indices...

  10. Applying behavior-analytic methodology to the science and practice of environmental enrichment in zoos and aquariums.

    Science.gov (United States)

    Alligood, Christina A; Dorey, Nicole R; Mehrkam, Lindsay R; Leighty, Katherine A

    2017-05-01

    Environmental enrichment in zoos and aquariums is often evaluated at two overlapping levels: published research and day-to-day institutional record keeping. Several authors have discussed ongoing challenges with small sample sizes in between-groups zoological research and have cautioned against the inappropriate use of inferential statistics (Shepherdson, International Zoo Yearbook, 38, 118-124; Shepherdson, Lewis, Carlstead, Bauman, & Perrin, Applied Animal Behaviour Science, 147, 298-277; Swaisgood, Applied Animal Behaviour Science, 102, 139-162; Swaisgood & Shepherdson, Zoo Biology, 24, 499-518). Multi-institutional studies are the typically-prescribed solution, but these are expensive and difficult to carry out. Kuhar (Zoo Biology, 25, 339-352) provided a reminder that inferential statistics are only necessary when one wishes to draw general conclusions at the population level. Because welfare is assessed at the level of the individual animal, we argue that evaluations of enrichment efficacy are often instances in which inferential statistics may be neither necessary nor appropriate. In recent years, there have been calls for the application of behavior-analytic techniques to zoo animal behavior management, including environmental enrichment (e.g., Bloomsmith, Marr, & Maple, Applied Animal Behaviour Science, 102, 205-222; Tarou & Bashaw, Applied Animal Behaviour Science, 102, 189-204). Single-subject (also called single-case, or small-n) designs provide a means of designing evaluations of enrichment efficacy based on an individual's behavior. We discuss how these designs might apply to research and practice goals at zoos and aquariums, contrast them with standard practices in the field, and give examples of how each could be successfully applied in a zoo or aquarium setting. © 2017 Wiley Periodicals, Inc.

  11. A study of penetration test for applying a remote monitoring system for virtual private network

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J. S.; Park, I. J.; Min, K. S.; Choi, Y. M. [KAERI, Taejon (Korea, Republic of); Jo, D. K. [A3 Security Consulting Co., Seoul (Korea, Republic of)

    2003-07-01

    A penetration test has been performed to verify the vulnerability of a Virtual Private Network (VPN) that is a substitute for the communication method of an existing remote monitoring system (RMS). The existing RMS used private telephone lines and was applied to all PWRs in Korea. Due to the communication fee, the IAEA wanted to replace the current telephone lines with internet lines to reduce the transmission cost of operating the remote monitoring system. The communication cost of the telephone lines was estimated at about $66,000/yr; internet technology would reduce the operating cost to as little as 1/5 of that. The purpose of the penetration test was to demonstrate the security of the data and system against various external and internal hacking scenarios. In most cases, the hacker could not even identify the VPN system. In no case did the system allow the hacker access, let alone data alteration or system shutdown. Two kinds of test were chosen: external attack and internal attack. Hacking tools were used during the test. The test proved that the VPN was secure against internal and external attack.

  12. Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction

    Science.gov (United States)

    Zhao, Yu; Yang, Rennong; Chevalier, Guillaume; Shah, Rajiv C.; Romijnders, Rob

    2018-04-01

    Data analytics helps basketball teams to create tactics. However, manual data collection and analytics are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a basketball trajectory based on real data, but it also can generate new trajectory samples. It is an excellent application to help coaches and players decide when and where to shoot. Its structure is particularly suitable for dealing with time series problems. BLSTM receives forward and backward information at the same time, while stacking multiple BLSTMs further increases the learning ability of the model. Combined with BLSTMs, MDN is used to generate a multi-modal distribution of outputs. Thus, the proposed model can, in principle, represent arbitrary conditional probability distributions of output variables. We tested our model with two experiments on three-pointer datasets from NBA SportVu data. In the hit-or-miss classification experiment, the proposed model outperformed other models in terms of the convergence speed and accuracy. In the trajectory generation experiment, eight model-generated trajectories at a given time closely matched real trajectories.

  13. Pediatric Emergency Care Applied Research Network head injury prediction rules: on the basis of cost and effectiveness

    Science.gov (United States)

    Gökharman, Fatma Dilek; Aydın, Sonay; Fatihoğlu, Erdem; Koşar, Pınar Nercis

    2017-12-19

    Background/aim: Head injuries are commonly seen in the pediatric population. Non-contrast-enhanced cranial CT is the method of choice to detect possible traumatic brain injury (TBI). Concerns about ionizing radiation exposure make the evaluation more challenging. The aim of this study was to evaluate the effectiveness of the Pediatric Emergency Care Applied Research Network (PECARN) rules in predicting clinically important TBI and to determine the amount of medical resource waste and unnecessary radiation exposure. Materials and methods: This retrospective study included 1041 pediatric patients who presented to the emergency department. The patients were divided into subgroups of "appropriate for cranial CT", "not appropriate for cranial CT" and "cranial CT/observation of patient; both are appropriate". To determine the effectiveness of the PECARN rules, data were analyzed according to the presence of pathological findings. Results: "Appropriate for cranial CT" results can predict pathology presence 118,056-fold compared to the "not appropriate for cranial CT" results. With "cranial CT/observation of patient; both are appropriate" results, pathology presence was predicted 11,457-fold compared to "not appropriate for cranial CT" results. Conclusion: PECARN rules can predict pathology presence successfully in pediatric TBI. Using PECARN can decrease resource waste and exposure to ionizing radiation.

  14. Statistical comparisons of Savannah River anemometer data applied to quality control of instrument networks

    International Nuclear Information System (INIS)

    Porch, W.M.; Dickerson, M.H.

    1976-08-01

    Continuous monitoring of extensive meteorological instrument arrays is a requirement in the study of important mesoscale atmospheric phenomena. These phenomena include the prediction of pollution transport from continuous area sources or one-time releases of toxic materials, and wind energy prospecting in areas of topographic enhancement of the wind. Quality control techniques that can be applied to these data to determine whether the instruments are operating within their prescribed tolerances were investigated. Savannah River Plant data were analyzed with both independent and comparative statistical techniques. The independent techniques calculate the mean, standard deviation, moments about the mean, kurtosis, skewness, probability density distribution, cumulative probability and power spectra. The comparative techniques include covariance, cross-spectral analysis and two-dimensional probability density. At present the calculating and plotting routines for these statistical techniques do not reside in a single code, so it is difficult to ascribe memory size and computation time accurately. However, given the flexibility of a data system which includes simple and fast-running statistics at the instrument end of the data network (ASF) and more sophisticated techniques at the computational end (ACF), a proper balance will be attained. These techniques are described in detail and preliminary results are presented.
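
    The independent statistics listed above are straightforward to compute channel by channel; the sketch below does so for a short, invented run of wind-speed samples (the Savannah River data and the plotting routines are not reproduced).

```python
# Independent quality-control statistics for one anemometer channel:
# mean, standard deviation, skewness and kurtosis from central moments.
import math

def channel_stats(samples):
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((x - mean) ** 2 for x in samples) / n   # 2nd central moment
    m3 = sum((x - mean) ** 3 for x in samples) / n
    m4 = sum((x - mean) ** 4 for x in samples) / n
    return {
        "mean": mean,
        "std": math.sqrt(m2),
        "skewness": m3 / m2 ** 1.5,   # 0 for a symmetric record
        "kurtosis": m4 / m2 ** 2,     # 3 for a Gaussian record
    }

wind = [4.1, 3.8, 4.4, 5.0, 4.6, 3.9, 4.2, 4.8]   # invented samples, m/s
stats = channel_stats(wind)
print({k: round(v, 3) for k, v in stats.items()})
```

    A drifting mean, collapsing standard deviation, or strongly non-Gaussian skewness and kurtosis on a channel are the kinds of tolerance violations such a routine would flag.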

  15. A study of penetration test for applying a remote monitoring system for virtual private network

    International Nuclear Information System (INIS)

    Kim, J. S.; Kim, J. S.; Park, I. J.; Min, K. S.; Choi, Y. M.; Jo, D. K.

    2003-01-01

    A penetration test has been performed to verify the vulnerability of Virtual Private Network that is substitute for communication method of an existing remote monitoring system. An existing RMS was used for the private telephone and the RMS was applied of all PWR in Korea. But, due to communication fee, IAEA wanted to replace current telephone line to the internet line to reduce transmission cost in operating remote monitoring system. The communication cost of telephone line was estimated about $66,000/yr. Internet technology would reduce the operating cost up to 1/5. The purpose of the penetration test was to demonstrate the security of the data and system against both various external and internal hacking scenarios. In most cases, hacker could not even identify the VPN system. In any cases, the system did not allow the access of the hacker to the system needless to say the data alteration or system shutdown. Two kinds of test method is chosen; one is external attack and another is internal attack. During the test, the hacking tool was used. The result of test was proved that VPN was secure against internal/external attack

  16. Network meta-analysis-highly attractive but more methodological research is needed

    NARCIS (Netherlands)

    Li, Tianjing; Puhan, Milo A.; Vedula, Swaroop S.; Singh, Sonal; Dickersin, Kay; Cameron, Chris; Goodman, Steven N.; Mills, Edward; Musch, David; ter Riet, Gerben; Robinson, Karen; Schmid, Christopher; Song, Fujian; Thorlund, Kristian; Trikalinos, Thomas

    2011-01-01

    Network meta-analysis, in the context of a systematic review, is a meta-analysis in which multiple treatments (that is, three or more) are being compared using both direct comparisons of interventions within randomized controlled trials and indirect comparisons across trials based on a common

  17. Convolutional Neural Network on Embedded Linux System-on-Chip: A Methodology and Performance Benchmark

    Science.gov (United States)

    2016-05-01

    …two NVIDIA® GTX580 GPUs [3]. Therefore, for this initial work, we decided to concentrate on small networks and small datasets until the methods are

  18. Alignment of the UMLS semantic network with BioTop: Methodology and assessment

    NARCIS (Netherlands)

    S. Schulz; E. Beisswanger (Elena); L. van den Hoek (László); O. Bodenreider (Olivier); E.M. van Mulligen (Erik)

    2009-01-01

    Motivation: For many years, the Unified Medical Language System (UMLS) semantic network (SN) has been used as an upper-level semantic framework for the categorization of terms from terminological resources in biomedicine. BioTop has recently been developed as an upper-level ontology for

  19. Applying Central Composite Design and Response Surface Methodology to Optimize Growth and Biomass Production of Haemophilus influenzae Type b.

    Science.gov (United States)

    Momen, Seyed Bahman; Siadat, Seyed Davar; Akbari, Neda; Ranjbar, Bijan; Khajeh, Khosro

    2016-06-01

    Haemophilus influenzae type b (Hib) is the leading cause of bacterial meningitis, otitis media, pneumonia, cellulitis, bacteremia, and septic arthritis in infants and young children. The Hib capsule contains the major virulence factor, and is composed of polyribosyl ribitol phosphate (PRP), which can induce an immune system response. Vaccines consisting of Hib capsular polysaccharide (PRP) conjugated to a carrier protein are effective in the prevention of the infections. However, due to the costly processes involved in PRP production, these vaccines are too expensive. To enhance biomass, in this research we focused on optimizing Hib growth with respect to physical factors such as pH, temperature, and agitation by using a response surface methodology (RSM). We employed a central composite design (CCD) and a response surface methodology to determine the optimum cultivation conditions for growth and biomass production of H. influenzae type b. The treatment factors investigated were initial pH, agitation, and temperature, using shaking flasks. After Hib cultivation and determination of dry biomass, analysis of the experimental data was performed by the RSM-CCD. The model showed that temperature and pH had an interactive effect on Hib biomass production. The dry biomass produced in shaking flasks was about 5470 mg/L, obtained under an initial pH of 8.5, at 250 rpm and 35 °C. We found CCD and RSM very effective in optimizing Hib culture conditions, and Hib biomass production was greatly influenced by pH and incubation temperature. Therefore, optimization of the growth factors to maximize Hib production can lead to 1) an increase in bacterial biomass and PRP production, 2) lower vaccine prices, 3) vaccination of more susceptible populations, and 4) a lower risk of Hib infections.
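
    The CCD/RSM optimization step generalizes beyond this organism. As a hedged sketch, the code below builds a two-factor central composite design in coded units, fits a full quadratic response surface by least squares, and solves for the stationary point; the response function is synthetic, standing in for the measured Hib dry biomass.

```python
# Two-factor central composite design + quadratic response surface fit.
import numpy as np

# CCD in coded units: 4 factorial, 4 axial (alpha = sqrt(2)), 1 center point
alpha = np.sqrt(2.0)
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                   [-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha],
                   [0, 0]])

def response(x1, x2):
    # Hypothetical biomass surface with optimum at (0.5, -0.3) coded units
    return 5.0 - (x1 - 0.5) ** 2 - 0.8 * (x2 + 0.3) ** 2

x1, x2 = design[:, 0], design[:, 1]
y = response(x1, x2)

# Full quadratic model: y = b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point: solve  [2*b11  b12; b12  2*b22] x = -[b1; b2]
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
opt = np.linalg.solve(H, -b[1:3])
print(np.round(opt, 3))   # close to the true optimum (0.5, -0.3)
```

    In practice the coded optimum is then translated back to physical units (pH, rpm, °C) through the factor ranges chosen for the design.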

  20. Applying neural networks to control the TFTR neutral beam ion sources

    International Nuclear Information System (INIS)

    Lagin, L.

    1992-01-01

    This paper describes the application of neural networks to the control of the neutral beam long-pulse positive ion source accelerators on the Tokamak Fusion Test Reactor (TFTR) at Princeton University. Neural networks were used to learn how the operators adjust the control setpoints when running these sources. The data sets used to train these networks were derived from a large database containing actual setpoints and power supply waveform calculations for the 1990 run period. The networks learned what the optimum control setpoints should initially be, based upon the desired accel voltage and perveance levels. Neural networks were also used to predict the divergence of the ion beam

  1. Theory and design of broadband matching networks applied electricity and electronics

    CERN Document Server

    Chen, Wai-Kai

    1976-01-01

    Theory and Design of Broadband Matching Networks centers on the network theory and its applications to the design of broadband matching networks and amplifiers. Organized into five chapters, this book begins with a description of the foundation of network theory. Chapter 2 gives a fairly complete exposition of the scattering matrix associated with an n-port network. Chapter 3 considers the approximation problem along with a discussion of the approximating functions. Chapter 4 explains the Youla's theory of broadband matching by illustrating every phase of the theory with fully worked out examp

  2. A multi-methodological MR resting state network analysis to assess the changes in brain physiology of children with ADHD.

    Directory of Open Access Journals (Sweden)

    Benito de Celis Alonso

    Full Text Available The purpose of this work was to highlight the neurological differences between the MR resting state networks of a group of children with ADHD (pre-treatment) and an age-matched healthy group. Results were obtained using different image analysis techniques. A sample of n = 46 children with ages between 6 and 12 years was included in this study (23 per cohort). Resting state image analysis was performed using the ReHo, ALFF and ICA techniques. ReHo and ICA represent connectivity analyses calculated with different mathematical approaches. ALFF represents an indirect measurement of brain activity. The ReHo and ICA analyses suggested differences between the two groups, while the ALFF analysis did not. The ReHo and ALFF analyses presented differences with respect to the results previously reported in the literature. ICA analysis showed that the same resting state networks that appear in healthy volunteers of adult age were obtained for both groups. In contrast, these networks were not identical when comparing the healthy and ADHD groups. These differences affected areas of all the networks except the Right Memory Function network. All techniques employed in this study were used to monitor different cerebral regions which participate in the phenomenological characterization of ADHD patients when compared to healthy controls. Results from our three analyses indicated that the cerebellum and mid-frontal lobe bilaterally for ReHo, the executive function regions for ICA, and the precuneus, cuneus and the calcarine fissure for ALFF, were the "hubs" in which the main inter-group differences were found. These results do not just help to explain the physiology underlying the disorder but open the door to future uses of these methodologies to monitor and evaluate patients with ADHD.

  3. Wavelet based artificial neural network applied for energy efficiency enhancement of decoupled HVAC system

    International Nuclear Information System (INIS)

    Jahedi, G.; Ardehali, M.M.

    2012-01-01

    Highlights: ► In HVAC systems, temperature and relative humidity are coupled and dynamic mathematical models are non-linear. ► A wavelet-based ANN is used in series with an infinite impulse response filter for self tuning of PD controller. ► Energy consumption is evaluated for a decoupled bi-linear HVAC system with variable air volume and variable water flow. ► Substantial enhancement in energy efficiency is realized, when the gain coefficients of PD controllers are tuned adaptively. - Abstract: Control methodologies could lower energy demand and consumption of heating, ventilating and air conditioning (HVAC) systems and, simultaneously, achieve better comfort conditions. However, the application of classical controllers is unsatisfactory as HVAC systems are non-linear and the control variables such as temperature and relative humidity (RH) inside the thermal zone are coupled. The objective of this study is to develop and simulate a wavelet-based artificial neural network (WNN) for self tuning of a proportional-derivative (PD) controller for a decoupled bi-linear HVAC system with variable air volume and variable water flow responsible for controlling temperature and RH of a thermal zone, where thermal comfort and energy consumption of the system are evaluated. To achieve the objective, a WNN is used in series with an infinite impulse response (IIR) filter for faster and more accurate identification of system dynamics, as needed for on-line use and off-line batch mode training. The WNN-IIR algorithm is used for self-tuning of two PD controllers for temperature and RH. The simulation results show that the WNN-IIR controller performance is superior, as compared with classical PD controller. The enhancement in efficiency of the HVAC system is accomplished due to substantially lower consumption of energy during the transient operation, when the gain coefficients of PD controllers are tuned in an adaptive manner, as the steady state setpoints for temperature and

  4. A literature review of applied adaptive design methodology within the field of oncology in randomised controlled trials and a proposed extension to the CONSORT guidelines.

    Science.gov (United States)

    Mistry, Pankaj; Dunn, Janet A; Marshall, Andrea

    2017-07-18

    The application of adaptive design methodology within a clinical trial setting is becoming increasingly popular. However, the application of these methods within trials is often not reported as an adaptive design, making it more difficult to capture the emerging use of these designs. Within this review, we aim to understand how adaptive design methodology is being reported, whether these methods are explicitly stated as an 'adaptive design' or have to be inferred, and to identify whether these methods are applied prospectively or concurrently. Three databases, Embase, Ovid and PubMed, were chosen to conduct the literature search. The inclusion criteria for the review were phase II, phase III and phase II/III randomised controlled trials within the field of Oncology that published trial results in 2015. A variety of search terms related to adaptive designs were used. A total of 734 results were identified, of which 54 were eligible after screening. Adaptive designs were more commonly applied in phase III confirmatory trials. The majority of the papers performed an interim analysis, which included some form of stopping criteria. Additionally, only two papers explicitly stated the term 'adaptive design'; for most of the papers, it had to be inferred that adaptive methods were applied. Sixty-five applications of adaptive design methods were identified, of which the most common was adaptation using group sequential methods. This review indicated that the reporting of adaptive design methodology within clinical trials needs improving. The proposed extension to the current CONSORT 2010 guidelines could help capture adaptive design methods, and would provide an essential aid to those involved with clinical trials.
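
    One reason group sequential methods dominate the adaptive designs found in such reviews is that unadjusted interim looks inflate the type I error. The simulation below illustrates this under invented settings (three equally spaced looks at a nominal two-sided 5% level); it is a textbook illustration, not an analysis from the review.

```python
# Under the null hypothesis, repeatedly testing accumulating data at the
# unadjusted single-look boundary rejects far more often than 5%.
import random
from statistics import NormalDist

random.seed(7)
Z_CRIT = NormalDist().inv_cdf(0.975)        # single-look two-sided 5% bound

def trial_rejects(n_per_look=50, looks=3):
    """Accumulate null data; test at each look; stop at first rejection."""
    data = []
    for _ in range(looks):
        data += [random.gauss(0.0, 1.0) for _ in range(n_per_look)]
        z = sum(data) / len(data) ** 0.5    # z-statistic for H0: mean = 0
        if abs(z) > Z_CRIT:
            return True
    return False

SIMS = 5000
naive_rate = sum(trial_rejects() for _ in range(SIMS)) / SIMS
print(round(naive_rate, 3))   # noticeably above the nominal 0.05
```

    Group sequential boundaries (e.g. O'Brien-Fleming or Pocock) widen the per-look critical values precisely so that the overall error returns to the nominal level.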

  5. Current trends in Bayesian methodology with applications

    CERN Document Server

    Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia

    2015-01-01

    Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics.Each chapter is self-contained and focuses on

  6. Network analysis of metaphors in immigrants' accounts of techno-science: A theoretical-methodological proposal.

    Directory of Open Access Journals (Sweden)

    Jesús René Luna Hernández

    2008-05-01

    Full Text Available Immigration is currently a source of much, sometimes divisive, social debate. However, what the immigrants themselves have to say about their own situation often goes unreported. One of the least-studied aspects of immigration is the immigrant's relationship with new information and communication technologies (ICTs). Here I suggest, as a project for investigation, the analysis of the metaphors that immigrants (from sub-Saharan Africa to Spain) use to represent ICTs and the practices that surround them. These immigrants are of special interest because they come from places where contact with ICTs is minimal, and also because the jobs they find on arrival in Spain tend not to involve the use of ICTs. I propose the adoption of a perspective in which context and social relationships take precedence over individualistic factors, such as attitudes or attributions. I argue that social network analysis, and more specifically, discursive network analysis, is the investigative strategy most appropriate to such a project.

  7. eXtended variational quasicontinuum methodology for lattice networks with damage and crack propagation

    Czech Academy of Sciences Publication Activity Database

    Rokoš, O.; Peerlings, R. H. J.; Zeman, Jan

    2017-01-01

    Roč. 320, č. 1 (2017), s. 769-792 ISSN 0045-7825 R&D Projects: GA ČR(CZ) GF16-34894L Institutional support: RVO:67985556 Keywords : Lattice networks * Quasicontinuum method * Damage * Extended finite element method * Multiscale modelling * Variational formulation Subject RIV: JJ - Other Materials OBOR OECD: Materials engineering Impact factor: 3.949, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/zeman-0475349.pdf

  8. A Design Methodology for Efficient Implementation of Deconvolutional Neural Networks on an FPGA

    OpenAIRE

    Zhang, Xinyu; Das, Srinjoy; Neopane, Ojash; Kreutz-Delgado, Ken

    2017-01-01

    In recent years deep learning algorithms have shown extremely high performance on machine learning tasks such as image classification and speech recognition. In support of such applications, various FPGA accelerator architectures have been proposed for convolutional neural networks (CNNs) that enable high performance for classification tasks at lower power than CPU and GPU processors. However, to date, there has been little research on the use of FPGA implementations of deconvolutional neural...

  9. Structural modification of covalent-bonded networks: on some methodological resolutions for binary chalcogenide glasses

    Energy Technology Data Exchange (ETDEWEB)

    Shpotyuk, M; Shpotyuk, Ya; Shpotyuk, O, E-mail: shpotyukmy@yahoo.com [Lviv Scientific Research Institute of Materials of SRC ' Carat' , 212, Stryjska str., Lviv, 79031 (Ukraine)

    2011-04-01

    New methodology to estimate efficiency of externally-induced structural modification in chalcogenide glasses is developed. This approach is grounded on the assumption that externally-induced structural modification is fully associated with destruction-polymerization transformations, which reveal themselves as local misbalances in covalent bond distribution, normal atomic coordination and intrinsic electrical fields. The input of each of these components into the total value of structural modification efficiency was probed for quasibinary (As{sub 2}S{sub 3}){sub 100-x}(Sb{sub 2}S{sub 3}){sub x} ChG.

  10. Structural modification of covalent-bonded networks: on some methodological resolutions for binary chalcogenide glasses

    International Nuclear Information System (INIS)

    Shpotyuk, M; Shpotyuk, Ya; Shpotyuk, O

    2011-01-01

    New methodology to estimate efficiency of externally-induced structural modification in chalcogenide glasses is developed. This approach is grounded on the assumption that externally-induced structural modification is fully associated with destruction-polymerization transformations, which reveal themselves as local misbalances in covalent bond distribution, normal atomic coordination and intrinsic electrical fields. The input of each of these components into the total value of structural modification efficiency was probed for quasibinary (As{sub 2}S{sub 3}){sub 100-x}(Sb{sub 2}S{sub 3}){sub x} ChG.

  11. Dynamic Event Trees applied to Full Spectrum LOCA sequences: calculating the frequency of exceedance of damage by the Integrated Safety Analysis methodology

    International Nuclear Information System (INIS)

    Gomez-Magan, J. J.; Fernandez, I.; Gil, J.; Marrao, H.; Queral, C.; Gonzalez-Cadelo, J.; Montero-Mayorga, J.; Rivas, J.; Ibane-Llano, C.; Izquierdo, J. M.; Sanchez-Perea, M.; Melendez, E.; Hortal, J.

    2013-01-01

    The Integrated Safety Analysis (ISA) methodology, developed by the Spanish Nuclear Safety Council (CSN), has been applied to obtain the Dynamic Event Trees (DETs) for full spectrum Loss of Coolant Accidents (LOCAs) of a Westinghouse 3-loop PWR plant. The purpose of this ISA application is to obtain the Damage Exceedance Frequency (DEF) for the LOCA Event Tree by taking into account the uncertainties in the break area and in the operator actuation time needed to cool down and depressurize the reactor coolant system by means of the steam generators. Simulations are performed with SCAIS, a software tool that includes dynamic coupling with the MAAP thermal-hydraulic code. The results show the capability of the ISA methodology to obtain the DEF taking into account the time uncertainty in human actions. (Author)

  12. An Improvement in Biodiesel Production from Waste Cooking Oil by Applying Multi-Response Surface Methodology Using Desirability Functions

    Directory of Open Access Journals (Sweden)

    Marina Corral Bobadilla

    2017-01-01

    Full Text Available The exhaustion of natural resources has increased petroleum prices, and the environmental impact of oil has stimulated the search for alternative sources of energy such as biodiesel. Waste cooking oil is a potential replacement for vegetable oils in the production of biodiesel. Biodiesel is synthesized by direct transesterification of vegetable oils, which is controlled by several inputs or process variables; those studied in this case were the dosage of catalyst, process temperature, mixing speed, mixing time, and the humidity and impurities of the waste cooking oil. Yield, turbidity, density, viscosity and higher heating value are considered as outputs. This paper used multi-response surface methodology (MRS) with desirability functions to find the best combination of input variables for the transesterification reactions to improve the production of biodiesel. Several biodiesel optimization scenarios have been proposed, based on the desire to improve the biodiesel yield and the higher heating value while decreasing the viscosity, density and turbidity. The results demonstrated that, although the waste cooking oil was collected from various sources, the dosage of catalyst is one of the most important variables in the yield of biodiesel production, whereas the viscosity obtained was similar in all of the biodiesel samples studied.
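    The desirability-function step named in this abstract can be sketched briefly. A minimal Python illustration of Derringer-type desirability functions combining two of the responses (yield to maximize, viscosity to minimize) into one score; the response ranges below are invented for illustration, not values from the study:

```python
import numpy as np

def d_maximize(y, lo, hi, weight=1.0):
    """Desirability for a response to be maximized (e.g. biodiesel yield)."""
    d = (y - lo) / (hi - lo)
    return np.clip(d, 0.0, 1.0) ** weight

def d_minimize(y, lo, hi, weight=1.0):
    """Desirability for a response to be minimized (e.g. viscosity)."""
    d = (hi - y) / (hi - lo)
    return np.clip(d, 0.0, 1.0) ** weight

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    ds = np.asarray(ds, dtype=float)
    return float(ds.prod() ** (1.0 / len(ds)))

# Example: yield 92% on an assumed 70-95% scale,
# viscosity 4.2 mm^2/s on an assumed 3.5-6.0 scale.
d_yield = d_maximize(92.0, 70.0, 95.0)
d_visc = d_minimize(4.2, 3.5, 6.0)
D = overall_desirability([d_yield, d_visc])
```

    An optimizer would then search the input variables (catalyst dosage, temperature, ...) for the settings that maximize D; the geometric mean ensures that a zero desirability in any single response zeroes the overall score.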

  13. Computerization of effluent management and external dose calculation using the 'ODCM' methodology applied to Almaraz-NPP

    International Nuclear Information System (INIS)

    Garcia Gutierrez, M.E.; Sustacha Duo, D.

    1993-01-01

    The ODCM (Offsite Dose Calculation Manual), the official operational document for all nuclear power plants, develops the details of the technical specifications for discharges and governs their practical application. The use of the ODCM methodology for managing and controlling data associated with radioactive discharges, as well as the subsequent processing of this data to assess the radiological impact, requires and generates a large volume of data and demands the frequent application of laborious and complex calculation processes, making computerization necessary. The computer application created for Almaraz NPP has the capacity to store and manage data on all discharges, evaluate their effects, present reports and copy the information to be sent periodically to the CSN (Spanish Nuclear Regulatory Commission) onto a magnetic tape. The radiological impact of an actual or possible discharge can be evaluated at any time and, furthermore, general or particular reports and graphs on the discharges and doses over time can be readily obtained. The application runs on a personal computer under a relational database management system and is interactive, based on menus and windows. (author)

  14. Applying TOGAF for e-government implementation based on service oriented architecture methodology towards good government governance

    Science.gov (United States)

    Hodijah, A.; Sundari, S.; Nugraha, A. C.

    2018-05-01

    As a Local Government Agency that performs public services, the General Government Office already utilizes the Reporting Information System of Local Government Implementation (E-LPPD). However, E-LPPD has integration limitations that cannot accommodate the General Government Office's needs to achieve Good Government Governance (GGG), while success stories show that the ultimate goal of e-government implementation requires good governance practices. Citizens now demand public services of the quality the private sector provides, which calls for service innovation that utilizes the legacy system in a service-based e-government implementation. Service Oriented Architecture (SOA) redefines business processes as a set of IT-enabled services, and the Enterprise Architecture of The Open Group Architecture Framework (TOGAF) provides a comprehensive approach to redefining business processes as service innovation towards GGG. This paper takes as a case study the Performance Evaluation of Local Government Implementation (EKPPD) system in the General Government Office. The results show that TOGAF, with SOA as the technical approach, will guide the development of integrated business processes for the EKPPD system that fit good governance practices to attain GGG.

  15. Integrating network ecology with applied conservation: a synthesis and guide to implementation.

    Science.gov (United States)

    Kaiser-Bunbury, Christopher N; Blüthgen, Nico

    2015-07-10

    Ecological networks are a useful tool to study the complexity of biotic interactions at a community level. Advances in the understanding of network patterns encourage the application of a network approach in disciplines other than theoretical ecology, such as biodiversity conservation. So far, however, practical applications have been meagre. Here we present a framework for network analysis that can be harnessed to advance conservation management, using plant-pollinator networks and islands as model systems. Conservation practitioners require indicators to monitor and assess management effectiveness and validate overall conservation goals. By distinguishing between two network attributes, the 'diversity' and 'distribution' of interactions, on three hierarchical levels (species, guild/group and network), we identify seven quantitative metrics to describe changes in network patterns that have implications for conservation. Diversity metrics are partner diversity, vulnerability/generality, interaction diversity and interaction evenness; distribution metrics are the specialization indices d' and [Formula: see text] and modularity. Distribution metrics account for sampling bias and may therefore be suitable indicators to detect human-induced changes to plant-pollinator communities, thus indirectly assessing the structural and functional robustness and integrity of ecosystems. We propose an implementation pathway that outlines the stages required to successfully embed a network approach in biodiversity conservation. Most importantly, only if conservation action and study design are aligned by practitioners and ecologists through joint experiments are the findings of a conservation network approach equally beneficial for advancing adaptive management and ecological network theory. We list potential obstacles to the framework, highlight the shortfall in empirical, mostly experimental, network data and discuss possible solutions. Published by Oxford University
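    Two of the 'diversity' metrics named in this abstract, interaction diversity and interaction evenness, are straightforward to compute from a visitation matrix. A minimal Python sketch on a toy plant-pollinator web (the visit counts are invented; real analyses would use a package such as bipartite in R):

```python
import numpy as np

def interaction_diversity(web):
    """Shannon diversity H' of the interaction frequencies."""
    p = web[web > 0] / web.sum()
    return float(-(p * np.log(p)).sum())

def interaction_evenness(web):
    """H' scaled by the log of the number of possible links (matrix cells)."""
    return interaction_diversity(web) / np.log(web.size)

# rows = plant species, columns = pollinator species, cells = visit counts
web = np.array([[10, 2, 0],
                [0, 5, 3],
                [1, 0, 4]])
H = interaction_diversity(web)
E = interaction_evenness(web)
```

    Evenness near 1 indicates visits spread uniformly over all possible links; values well below 1, as here, indicate a few dominant interactions.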

  16. A systematic review of methodology applied during preclinical anesthetic neurotoxicity studies: important issues and lessons relevant to the design of future clinical research.

    Science.gov (United States)

    Disma, Nicola; Mondardini, Maria C; Terrando, Niccolò; Absalom, Anthony R; Bilotta, Federico

    2016-01-01

    Preclinical evidence suggests that anesthetic agents harm the developing brain, thereby causing long-term neurocognitive impairments. It is not clear whether these findings apply to humans, and retrospective epidemiological studies thus far have failed to show definitive evidence that anesthetic agents are harmful to the developing human brain. The aim of this systematic review was to summarize the preclinical studies published over the past decade, with a focus on methodological issues, to facilitate the comparison between different preclinical studies and inform better design of future trials. The literature search identified 941 articles related to the topic of neurotoxicity. As the primary aim of this systematic review was to compare methodologies applied in animal studies to inform future trials, we excluded a priori all articles focused on putative mechanisms of neurotoxicity and on neuroprotective agents. Forty-seven preclinical studies were finally included in this review. Methods used in these studies were highly heterogeneous: animals were exposed to anesthetic agents at different developmental stages, in various doses and in various combinations with other drugs, and overall showed diverse toxicity profiles. Physiological monitoring and maintenance of physiological homeostasis were variable, and the use of cognitive tests was generally limited to the assessment of specific brain areas, with restricted translational relevance to humans. Comparison between studies is thus complicated by this heterogeneous methodology, and the relevance of the combined body of literature to humans remains uncertain. Future preclinical studies should use better standardized methodologies to facilitate the transferability of findings from preclinical into clinical science. © 2015 John Wiley & Sons Ltd.

  17. Bridging the Gap in Port Security; Network Centric Theory Applied to Public/Private Collaboration

    National Research Council Canada - National Science Library

    Wright, Candice L

    2007-01-01

    ...." Admiral Thad Allen, 2007 The application of Network Centric Warfare theory enables all port stakeholders to better prepare for a disaster through increased information sharing and collaboration...

  18. Network-Centric Warfare: Implications for Applying the Principles of War

    National Research Council Canada - National Science Library

    Caneva, Joseph

    1999-01-01

    Noting the competitive advantage that a computer network system completely integrated into a firm's structure and operations has provided to businesses, individuals have begun to argue that adoption...

  19. Conception and development of an optical methodology applied to long-distance measurement of suspension bridges dynamic displacement

    International Nuclear Information System (INIS)

    Martins, L Lages; Ribeiro, A Silva; Rebordão, J M

    2013-01-01

    This paper describes the conception and development of an optical system for suspension bridge structural monitoring, aiming at real-time, long-distance measurement of dynamic three-dimensional displacement, namely in the central section of the main span. The main innovative aspects of this optical approach are described and a comparison with other optical and non-optical measurement systems is performed. Moreover, a computational simulator tool developed for the design of the optical system and the validation of the implemented image processing and calculation algorithms is also presented

  20. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Directory of Open Access Journals (Sweden)

    Koivistoinen Teemu

    2007-01-01

    Full Text Available As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of the time series as well as the frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.

  1. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Directory of Open Access Journals (Sweden)

    Alpo Värri

    2007-01-01

    Full Text Available As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call ‘‘time-frequency moments singular value decomposition (TFM-SVD).’’ In this new method, we use statistical features of the time series as well as the frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.

  2. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Science.gov (United States)

    Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo

    2006-12-01

    As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of the time series as well as the frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
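    The TFM-SVD idea described in these abstracts can be sketched compactly: pack statistical moments of the time-domain signal and of its spectrum into a small fixed matrix and use its singular values as features. A minimal Python sketch; the 2x4 matrix layout is an assumption for illustration, since the papers' exact arrangement is not reproduced here:

```python
import numpy as np

def moments(x):
    """Mean, standard deviation, skewness and kurtosis of a 1-D array."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    z = (x - m) / s
    return np.array([m, s, (z**3).mean(), (z**4).mean()])

def tfm_svd_features(signal):
    t_mom = moments(signal)                       # time-domain moments
    f_mom = moments(np.abs(np.fft.rfft(signal)))  # frequency-domain moments
    M = np.vstack([t_mom, f_mom])                 # fixed 2x4 feature matrix
    return np.linalg.svd(M, compute_uv=False)     # singular values as features

# Toy stand-in for a BCG trace: two sinusoids at different frequencies
t = np.linspace(0.0, 1.0, 256, endpoint=False)
sig = np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 40 * t)
feats = tfm_svd_features(sig)  # two singular values (matrix rank <= 2)
```

    The feature vector would then feed a classifier such as the ANNs used in the papers.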

  3. 3D Recording methodology applied to the Grotta Scritta Prehistoric Rock-Shelter in Olmeta-Di-Capocorso (Corsica, France)

    Science.gov (United States)

    Grussenmeyer, P.; Burens, A.; Guillemin, S.; Alby, E.; Allegrini Simonetti, F.; Marchetti, M.-L.

    2015-08-01

    The Grotta Scritta I prehistoric site is located on the west side of Cap Corse, in the territory of the municipality of Olmeta-di-Capocorso (Haute-Corse, France). This rock shelter occupies a western spur of the La Serra mountains, at 412 m above sea level. In the regional context of a broad set of megalithic burial sites (the Nebbiu and Agriates regions) and a rich insular prehistoric rock art with several engraved patterns (mainly geometric), the Grotta Scritta is the only site in Corsica with painted depictions. Around twenty parietal depictions are arranged in the upper part of the rock shelter and take advantage of the microtopography of the wall. Today, the Grotta Scritta is a vulnerable site, made fragile by the action of time and man. The 3D scanning of the rock shelter and paintings of the Grotta Scritta was carried out by surveyors and archaeologists from INSA Strasbourg and from UMR 5602 GEODE (Toulouse), combining accurate terrestrial laser scanning and photogrammetry techniques for a full, contactless 3D documentation of the rock-shelter paintings. The paper presents the data acquisition methodology followed by an overview of data processing solutions based on both imaging and laser scanning. Several deliverables such as point clouds, meshed models, textured models and orthoimages are proposed for the documentation. Beyond their usefulness in terms of valorization, communication and virtual restitution, the proposed models also provide support tools for the analysis and perception of the complexity of the volumes of the shelter (namely the folded forms of the dome housing the paintings) as well as for the accuracy of the painted depictions recorded on the orthoimages processed from the 3D model.

  4. Multivariate temporal pattern analysis applied to the study of rat behavior in the elevated plus maze: methodological and conceptual highlights.

    Science.gov (United States)

    Casarrubea, M; Magnusson, M S; Roy, V; Arabo, A; Sorbera, F; Santangelo, A; Faulisi, F; Crescimanno, G

    2014-08-30

    The aim of this article is to illustrate the application of a multivariate approach known as t-pattern analysis to the study of rat behavior in the elevated plus maze. By means of this multivariate approach, significant relationships among behavioral events over time can be described. Both quantitative and t-pattern analyses were used on data obtained from fifteen male Wistar rats following a trial 1-trial 2 protocol. In trial 2, in comparison with the initial exposure, mean occurrences of behavioral elements performed in protected zones of the maze showed a significant increase, counterbalanced by a significant decrease of mean occurrences of behavioral elements in unprotected zones. Multivariate t-pattern analysis revealed the presence of 134 t-patterns of different composition in trial 1. In trial 2, the temporal structure of behavior became simpler, with only 32 different t-patterns present. Behavioral strings and stripes (i.e. graphical representations of each t-pattern onset) of all t-patterns are presented for both trial 1 and trial 2. Finally, percent distributions in the three zones of the maze show a clear-cut increase of t-patterns in the closed arms and a significant reduction in the remaining zones. The results show that previous experience deeply modifies the temporal structure of rat behavior in the elevated plus maze. In addition, by highlighting several conceptual, methodological and illustrative aspects of the use of t-pattern analysis, this article could serve as a useful background for employing such a refined approach in the study of rat behavior in the elevated plus maze. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Study on the methodology for hydrogeological site descriptive modelling by discrete fracture networks

    International Nuclear Information System (INIS)

    Tanaka, Tatsuya; Ando, Kenichi; Hashimoto, Shuuji; Saegusa, Hiromitsu; Takeuchi, Shinji; Amano, Kenji

    2007-01-01

    This study aims to establish comprehensive techniques for site descriptive modelling that consider the hydraulic heterogeneity due to Water Conducting Features (WCFs) in fractured rocks. WCFs were defined by the interpretation and integration of geological and hydrogeological data obtained from the deep borehole investigation campaign in the Mizunami URL project and the Regional Hydrogeological Study. As a result of the surface-based investigation phase, a block-scale hydrogeological descriptive model was generated using hydraulic discrete fracture networks. Uncertainties and remaining issues associated with the assumptions made in interpreting the data and in its modelling were addressed in a systematic way. (author)
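    The stochastic core of a discrete fracture network model of the kind used above can be illustrated in a few lines: fracture centers from a Poisson process, uniform orientations, and power-law trace lengths. A minimal 2-D Python sketch; all distribution parameters below are invented for illustration, whereas a site descriptive model would calibrate them against borehole and outcrop data:

```python
import numpy as np

def generate_dfn(density, region=100.0, lmin=1.0, alpha=2.5, seed=0):
    """Generate a 2-D DFN: centers, orientations and trace lengths.

    density: expected fracture centers per unit area (Poisson intensity).
    """
    rng = np.random.default_rng(seed)
    n = rng.poisson(density * region**2)            # expected count = density * area
    centers = rng.uniform(0.0, region, size=(n, 2))
    angles = rng.uniform(0.0, np.pi, size=n)        # orientation of each trace
    # Power-law (Pareto) trace lengths via inverse transform sampling:
    # P(L > l) = (lmin / l)**(alpha - 1)
    lengths = lmin * (1.0 - rng.random(n)) ** (-1.0 / (alpha - 1.0))
    return centers, angles, lengths

centers, angles, lengths = generate_dfn(density=0.01)  # ~100 fractures expected
```

    Flow simulation would then assign transmissivities to each fracture and solve on the connected network; this sketch covers only the geometric generation step.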

  6. Assessment of the territorial suitability for the creation of the greenways networks: Methodological application in the Sicilian landscape context

    Directory of Open Access Journals (Sweden)

    Marzia Quattrone

    2017-09-01

    Full Text Available The proposal to create greenways networks for the enhancement of more or less vast areas is of great importance to territorial planning. The paths, which are overlaid on pre-existing linear patterns, promote the development of endogenous resources and facilitate direct learning of the territory's historical, cultural, environmental and landscape assets. Rural areas can be strongly influenced by setting up a greenways network, as their use not only promotes the exchange of knowledge between users and inhabitants, but also encourages the enjoyment of various areas (agricultural landscapes, scattered cultural heritage, protected environments) that would otherwise be inaccessible due to their distance from the traditional routes. Altogether, this favours the introduction of economic activities based on their typical characteristics. This work identifies the road infrastructure available in the former Province of Syracuse (East Sicily) that is appropriate for building greenways networks that will best contribute to the valorisation of the surrounding territory, assigning great importance to landscape features as factors of tourist and cultural attraction. We used multi-criteria analysis associated with a geographic information system (GIS), weighing and mapping numerous indicators to define the territory's infrastructural, landscape, cultural and tourist resources, meaning those able to increase the use of the territory and/or that determine its attractiveness for the population. The GIS analysis allowed us to develop numerous intermediate maps, whose information helped us draw up the final map illustrating the suitability of the existing infrastructures for the planning of a greenway network. Such infrastructures could be the subject of specific plans or detailed projects aimed at enhancing the pre-existing resources of a rural territory. This study, although referring to a defined territory, is methodologically
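    The weighted multi-criteria overlay underlying this kind of GIS suitability mapping reduces to a simple computation: normalize each indicator raster to [0, 1] and combine the layers with weights. A minimal Python sketch; the three indicator layers and their weights are invented stand-ins for the study's actual indicators:

```python
import numpy as np

def normalize(layer):
    """Rescale a raster to the [0, 1] range."""
    lo, hi = layer.min(), layer.max()
    return (layer - lo) / (hi - lo)

def suitability(layers, weights):
    """Weighted linear combination of normalized indicator rasters."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # ensure the weights sum to 1
    stack = np.stack([normalize(l) for l in layers])
    return np.tensordot(w, stack, axes=1)

rng = np.random.default_rng(0)
landscape = rng.random((4, 4))  # e.g. landscape value along each road cell
heritage = rng.random((4, 4))   # e.g. density of nearby cultural heritage
access = rng.random((4, 4))     # e.g. accessibility of the existing route

s = suitability([landscape, heritage, access], weights=[0.4, 0.4, 0.2])
```

    Each cell of s is a suitability score in [0, 1]; in a real workflow the rasters would come from GIS layers and the weights from expert elicitation or pairwise comparison.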

  7. Scalability analysis methodology for passive optical interconnects in data center networks using PAM

    Science.gov (United States)

    Lin, R.; Szczerba, Krzysztof; Agrell, Erik; Wosinska, Lena; Tang, M.; Liu, D.; Chen, J.

    2017-11-01

    A framework is developed for modeling the fundamental impairments in optical datacenter interconnects, i.e., the power loss and the receiver noises. This framework makes it possible to analyze the trade-offs between data rate, modulation order, and the number of ports that can be supported in optical interconnect architectures, while guaranteeing that the required signal-to-noise ratios are satisfied. To the best of our knowledge, this important assessment methodology was not previously available. As a case study, the trade-offs are investigated for three coupler-based top-of-rack interconnect architectures, which suffer from serious insertion loss. The results show that, using single-port transceivers with 10 GHz bandwidth, avalanche photodiode detectors, and quaternary pulse amplitude modulation, more than 500 ports can be supported.
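    A back-of-the-envelope version of the trade-off analyzed above: an N-port passive coupler splits power N ways, so its ideal insertion loss is 10*log10(N) dB, and the largest supportable port count follows directly from the link power budget. A minimal Python sketch; all dB figures are illustrative assumptions, not values from the paper:

```python
import math

def coupler_loss_db(ports, excess_db=1.0):
    """Ideal 1/N splitting loss plus an assumed fixed excess loss."""
    return 10.0 * math.log10(ports) + excess_db

def max_ports(tx_dbm, rx_sensitivity_dbm, margin_db=3.0, excess_db=1.0):
    """Largest port count whose coupler loss still fits the power budget."""
    budget_db = tx_dbm - rx_sensitivity_dbm - margin_db - excess_db
    return int(10.0 ** (budget_db / 10.0))

# e.g. 0 dBm launch power and an assumed -24 dBm APD sensitivity
# at the target PAM rate, with 3 dB system margin
n = max_ports(tx_dbm=0.0, rx_sensitivity_dbm=-24.0)
```

    The paper's framework goes further by tying the receiver sensitivity itself to the data rate, modulation order and noise model, rather than assuming a fixed figure as done here.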

  8. Applying the flow-capturing location-allocation model to an authentic network: Edmonton, Canada

    NARCIS (Netherlands)

    M.J. Hodgson (John); K.E. Rosing (Kenneth); A.L.G. Storrier (Leontien)

    1996-01-01

    textabstractTraditional location-allocation models aim to locate network facilities to optimally serve demand expressed as weights at nodes. For some types of facilities demand is not expressed at nodes, but as passing network traffic. The flow-capturing location-allocation model responds to this

  9. A Network Inference Workflow Applied to Virulence-Related Processes in Salmonella typhimurium

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Ronald C.; Singhal, Mudita; Weller, Jennifer B.; Khoshnevis, Saeed; Shi, Liang; McDermott, Jason E.

    2009-04-20

    Inference of the structure of mRNA transcriptional regulatory networks, protein regulatory or interaction networks, and protein activation/inactivation-based signal transduction networks are critical tasks in systems biology. In this article we discuss a workflow for the reconstruction of parts of the transcriptional regulatory network of the pathogenic bacterium Salmonella typhimurium based on the information contained in sets of microarray gene expression data now available for that organism, and describe our results obtained by following this workflow. The primary tool is one of the network inference algorithms deployed in the Software Environment for BIological Network Inference (SEBINI). Specifically, we selected the algorithm called Context Likelihood of Relatedness (CLR), which uses the mutual information contained in the gene expression data to infer regulatory connections. The associated analysis pipeline automatically stores the inferred edges from the CLR runs within SEBINI and, upon request, transfers the inferred edges into either Cytoscape or the plug-in Collective Analysis of Biological Interaction Networks (CABIN) tool for further post-analysis of the inferred regulatory edges. This article presents the outcome of this workflow, as well as the protocols followed for microarray data collection, data cleansing, and network inference. Our analysis revealed several interesting interactions, functional groups, metabolic pathways, and regulons in S. typhimurium.
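    The CLR step named in this abstract can be sketched: pairwise mutual information between expression profiles is z-scored against each gene's background MI distribution, so an edge stands out only if it is exceptional in the context of both genes. A minimal Python sketch; the toy data and the simple histogram MI estimator are assumptions for illustration, not the SEBINI implementation:

```python
import numpy as np

def mutual_information(x, y, bins=5):
    """Histogram estimate of the mutual information between two profiles."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def clr(expr, bins=5):
    """expr: genes x samples matrix. Returns the symmetric CLR score matrix."""
    g = expr.shape[0]
    mi = np.zeros((g, g))
    for i in range(g):
        for j in range(i + 1, g):
            mi[i, j] = mi[j, i] = mutual_information(expr[i], expr[j], bins)
    mu, sd = mi.mean(axis=1), mi.std(axis=1) + 1e-12
    z = np.maximum((mi - mu[:, None]) / sd[:, None], 0.0)  # per-gene z-scores
    return np.sqrt(z**2 + z.T**2)  # combine the two genes' contexts

rng = np.random.default_rng(1)
expr = rng.random((6, 40))                           # 6 genes, 40 conditions
expr[1] = expr[0] + 0.01 * rng.standard_normal(40)   # gene 1 tracks gene 0
scores = clr(expr)                                   # edge 0-1 scores highest
```

    High-scoring pairs become candidate regulatory edges; in the workflow above these edges flow into Cytoscape or CABIN for post-analysis.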

  10. Privacy and Generation Y: Applying Library Values to Social Networking Sites

    Science.gov (United States)

    Fernandez, Peter

    2010-01-01

    Librarians face many challenges when dealing with issues of privacy within the mediated space of social networking sites. Conceptually, social networking sites differ from libraries on privacy as a value. Research about Generation Y students, the primary clientele of undergraduate libraries, can inform librarians' relationship to this important…

  11. Applying agent-based control to mitigate overvoltage in distribution network

    NARCIS (Netherlands)

    Viyathukattuva Mohamed Ali, M.M.; Nguyen, P.H.; Kling, W.L.

    2014-01-01

    Increasing share of distributed renewable energy sources (DRES) in distribution networks raises new operational and power quality challenges for network operators such as handling overvoltage. Such technical constraint limits further penetration of DRES in low voltage (LV) and medium voltage (MV)

  12. Multisite, multimodal neuroimaging of chronic urological pelvic pain: Methodology of the MAPP Research Network

    Directory of Open Access Journals (Sweden)

    Jeffry R. Alger

    2016-01-01

    Full Text Available The Multidisciplinary Approach to the Study of Chronic Pelvic Pain (MAPP) Research Network is an ongoing multi-center collaborative research group established to conduct integrated studies in participants with urologic chronic pelvic pain syndrome (UCPPS). The goals of these investigations are to provide new insights into the etiology, natural history, and clinical, demographic and behavioral characteristics of UCPPS; to search for and evaluate candidate biomarkers; to systematically test for contributions of infectious agents to symptoms; and to conduct animal studies to understand the underlying mechanisms of UCPPS. Study participants were enrolled in a one-year observational study and evaluated through a multisite, collaborative neuroimaging study of the association between UCPPS and brain structure and function. 3D T1-weighted structural images, resting-state fMRI, and high angular resolution diffusion MRI were acquired at five participating MAPP Network sites using 8 separate MRI hardware and software configurations. We describe the neuroimaging methods and procedures used to scan participants, the challenges encountered in obtaining data from multiple sites with different equipment and software, and our efforts to minimize site-to-site variation.

  13. Parametric optimization for floating drum anaerobic bio-digester using Response Surface Methodology and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    S. Sathish

    2016-12-01

Full Text Available The main purpose of this study was to determine the optimal conditions for biogas yield from the anaerobic digestion of agricultural waste (rice straw) using Response Surface Methodology (RSM) and an Artificial Neural Network (ANN). In developing the predictive models, temperature, pH, substrate concentration and agitation time were taken as model variables. The experimental results show that the linear model terms of temperature, substrate concentration, pH and agitation time have significant interactive effects (p < 0.05). The results show that the optimum process parameters produce a higher predicted biogas yield from the ANN model than from the RSM model, and that the ANN model is considerably more accurate in predicting the maximum biogas yield.
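
The RSM step described above can be sketched in a few lines: fit a second-order polynomial to the measured responses, then locate its stationary point. The sketch below uses synthetic data with a hypothetical optimum near 35 °C and pH 7 (two of the four variables only), not the study's measurements.

```python
import numpy as np

# Response-surface sketch: quadratic model in temperature (T) and pH.
# All data are synthetic, generated around an assumed optimum.
rng = np.random.default_rng(0)
T = rng.uniform(25, 45, 40)          # temperature, deg C
pH = rng.uniform(5.5, 8.0, 40)
y = 100 - 0.5 * (T - 35) ** 2 - 8.0 * (pH - 7.0) ** 2 + rng.normal(0, 0.2, 40)

# Second-order RSM design matrix: 1, T, pH, T^2, pH^2, T*pH
X = np.column_stack([np.ones_like(T), T, pH, T**2, pH**2, T * pH])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point of the fitted surface: solve grad f = 0
A = np.array([[2 * beta[3], beta[5]], [beta[5], 2 * beta[4]]])
b = -np.array([beta[1], beta[2]])
T_opt, pH_opt = np.linalg.solve(A, b)
print(round(float(T_opt), 1), round(float(pH_opt), 1))
```

The recovered stationary point lands close to the assumed optimum; a full RSM analysis would additionally test the significance of each model term, as the abstract reports.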

  14. A Methodology to Reduce the Computational Effort in the Evaluation of the Lightning Performance of Distribution Networks

    Directory of Open Access Journals (Sweden)

    Ilaria Bendato

    2016-11-01

Full Text Available The estimation of the lightning performance of a power distribution network is of great importance in designing its protection system against lightning. An accurate evaluation of the number of lightning events that can create dangerous overvoltages requires a huge computational effort, as it implies the adoption of a Monte Carlo procedure. Such a procedure consists of generating many different random lightning events and calculating the corresponding overvoltages. The paper proposes a methodology that deals with the problem in two computationally efficient ways: (i) finding the minimum number of Monte Carlo runs that leads to reliable results; and (ii) setting up a procedure that bypasses the lightning field-to-line coupling problem for each Monte Carlo run. The proposed approach is shown to provide results consistent with existing approaches while exhibiting superior Central Processing Unit (CPU) time performance.
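
Criterion (i), stopping the Monte Carlo once the estimate is reliable, can be illustrated with a sequential stopping rule on the confidence interval of the estimated event probability. The "dangerous overvoltage" model below is a stand-in (a lognormal peak-current draw against a fixed threshold), not the paper's field-to-line coupling computation.

```python
import random, math

# Adaptive Monte Carlo: stop when the 95% CI half-width on the estimated
# probability of a dangerous event falls below 10% of the estimate.
random.seed(1)

def dangerous_event():
    # Hypothetical criterion: lightning peak current above 60 kA.
    return random.lognormvariate(3.3, 0.55) > 60.0

n, hits = 0, 0
rel_tol = 0.10                    # target relative precision of the estimate
while True:
    n += 1
    hits += dangerous_event()
    if hits >= 30:                # require enough events before testing
        p = hits / n
        half_width = 1.96 * math.sqrt(p * (1 - p) / n)
        if half_width < rel_tol * p:
            break
print(n, round(p, 3))
```

For rare events the required run count grows roughly as `(1 - p) / p`, which is exactly why the paper's second shortcut (bypassing the coupling computation per run) matters.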

  15. Proposing C4ISR Architecture Methodology for Homeland Security

    National Research Council Canada - National Science Library

    Farah-Stapleton, Monica F; Dimarogonas, James; Eaton, Rodney; Deason, Paul J

    2004-01-01

This presentation describes how a network architecture methodology developed for the Army's Future Force could be applied to the requirements of Civil Support, Homeland Security/Homeland Defense (CS HLS/HLD...

  16. Quality assurance and quality control methodologies used within the Austrian UV monitoring network

    International Nuclear Information System (INIS)

    Mario, B.

    2004-01-01

The Austrian UVB monitoring network has been operational since 1997. Nine detectors measuring erythemally weighted solar UV irradiance are distributed over Austria to cover the main populated areas as well as different altitude levels. The detectors are calibrated to indicate the UV Index, the internationally agreed unit for erythemally weighted solar UV irradiance. Calibration is carried out in the laboratory, to determine the spectral sensitivity of each detector, and under the sun, for absolute comparison with a well-calibrated double-monochromator spectroradiometer. For the conversion from detector-weighted to erythemally weighted units, a lookup table is used that is calculated with a radiative transfer model and reflects the dependence of the conversion on the solar zenith angle and the total ozone content of the atmosphere. The uncertainty of the calibration is about ±7%, dominated by the uncertainty of the calibration lamp for the spectroradiometer (±4%). The long-term stability of this type of detector has been found to be unsatisfactory; therefore, all detectors are completely re-calibrated every year as a matter of routine. Variations of the calibration factors of up to ±10% are found. Thus, during routine operation, several quality-control measures are taken. The measured data are compared to results of calculations with a radiative transfer model assuming clear sky and an aerosol-free atmosphere. At each site, the UV data are also compared with data from a co-located pyranometer measuring total solar irradiance. These two radiation quantities are well correlated, especially on clear days and when the ozone content is taken into account. If suspicious measurements are found for one detector in the network, a well-calibrated travelling reference detector of the same type is set up side-by-side, allowing the identification of relative differences of ∼3%. If necessary, a recalibration is carried out. As the main aim
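
The lookup-table conversion described above amounts to a two-dimensional interpolation over solar zenith angle and total ozone. A minimal sketch, with invented grid values standing in for the radiative-transfer results:

```python
import bisect

# Conversion-factor lookup table: detector-weighted -> erythemally weighted.
# Grid nodes and factors below are illustrative placeholders; a real table
# comes from radiative transfer calculations.
sza_grid = [0, 20, 40, 60, 80]          # solar zenith angle, degrees
o3_grid = [250, 300, 350, 400]          # total ozone, Dobson units
factor = [                              # factor[i][j] at (sza_grid[i], o3_grid[j])
    [1.10, 1.05, 1.00, 0.96],
    [1.12, 1.07, 1.02, 0.98],
    [1.18, 1.12, 1.06, 1.01],
    [1.30, 1.22, 1.15, 1.09],
    [1.55, 1.44, 1.34, 1.26],
]

def convert(detector_value, sza, o3):
    """Bilinear interpolation of the conversion factor inside the grid."""
    i = max(min(bisect.bisect_right(sza_grid, sza), len(sza_grid) - 1) - 1, 0)
    j = max(min(bisect.bisect_right(o3_grid, o3), len(o3_grid) - 1) - 1, 0)
    t = (sza - sza_grid[i]) / (sza_grid[i + 1] - sza_grid[i])
    u = (o3 - o3_grid[j]) / (o3_grid[j + 1] - o3_grid[j])
    f = ((1 - t) * (1 - u) * factor[i][j] + t * (1 - u) * factor[i + 1][j]
         + (1 - t) * u * factor[i][j + 1] + t * u * factor[i + 1][j + 1])
    return detector_value * f

print(convert(5.0, 30, 325))
```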

  17. Adaptation of a software development methodology to the implementation of a large-scale data acquisition and control system. [for Deep Space Network

    Science.gov (United States)

    Madrid, G. A.; Westmoreland, P. T.

    1983-01-01

A progress report is presented on a program to upgrade the existing NASA Deep Space Network with a redesigned computer-controlled data acquisition system for channelling tracking, telemetry, and command data between a California-based control center and three signal processing centers in Australia, California, and Spain. The methodology for the improvements is oriented towards single-subsystem development, with consideration for a multi-system and multi-subsystem network of operational software. Details of the existing hardware configurations and data transmission links are provided. The program methodology includes data flow design, interface design and coordination, incremental capability availability, increased inter-subsystem developmental synthesis and testing, system- and network-level synthesis and testing, and system verification and validation. The software has so far been implemented to a 65 percent completion level, and the methodology used to effect the changes, which will permit enhanced tracking of and communication with spacecraft, has proven effective.

  18. An Intelligent Gear Fault Diagnosis Methodology Using a Complex Wavelet Enhanced Convolutional Neural Network.

    Science.gov (United States)

    Sun, Weifang; Yao, Bin; Zeng, Nianyin; Chen, Binqiang; He, Yuchao; Cao, Xincheng; He, Wangpeng

    2017-07-12

As a typical example of large and complex mechanical systems, rotating machinery is prone to diverse sorts of mechanical faults. Among these faults, one of the prominent causes of malfunction arises in gear transmission chains. Although fault signatures can be collected via vibration signals, they are always submerged in overwhelming interfering content, so identifying the critical fault characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire multiscale signal features, and a convolutional neural network (CNN) approach is utilized to automatically recognise a fault feature from the multiscale signal features. The experimental results for gear fault recognition show the feasibility and effectiveness of the proposed method, especially for weak gear fault features.
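
As a toy illustration of the multiscale front end (the CNN classifier is omitted here), the sketch below uses a plain Haar decomposition, a much simpler relative of the DTCWT, to show how a short fault impulse stands out in the detail-band energies of a vibration signal.

```python
import math

# One-level Haar step: pairwise averages (approximation) and differences
# (detail). Repeating it yields a crude multiscale decomposition.
def haar_level(signal):
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def multiscale_features(signal, levels=3):
    """Detail-band energy at each scale: a crude fault feature vector."""
    feats, a = [], signal
    for _ in range(levels):
        a, d = haar_level(a)
        feats.append(sum(c * c for c in d))
    return feats

# A smooth carrier plus one short impulse: a cartoon gear-fault signature.
sig = [math.sin(0.2 * t) + (3.0 if t == 40 else 0.0) for t in range(128)]
clean = [math.sin(0.2 * t) for t in range(128)]
print(multiscale_features(sig), multiscale_features(clean))
```

The first-level detail energy of the faulty signal clearly exceeds that of the clean one; a DTCWT offers the same idea with near shift-invariance, which is why the paper pairs it with a CNN.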

  19. An Intelligent Gear Fault Diagnosis Methodology Using a Complex Wavelet Enhanced Convolutional Neural Network

    Science.gov (United States)

    Sun, Weifang; Yao, Bin; Zeng, Nianyin; He, Yuchao; Cao, Xincheng; He, Wangpeng

    2017-01-01

As a typical example of large and complex mechanical systems, rotating machinery is prone to diverse sorts of mechanical faults. Among these faults, one of the prominent causes of malfunction arises in gear transmission chains. Although fault signatures can be collected via vibration signals, they are always submerged in overwhelming interfering content, so identifying the critical fault characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire multiscale signal features, and a convolutional neural network (CNN) approach is utilized to automatically recognise a fault feature from the multiscale signal features. The experimental results for gear fault recognition show the feasibility and effectiveness of the proposed method, especially for weak gear fault features. PMID:28773148

  20. A Bayesian Network methodology for railway risk, safety and decision support

    OpenAIRE

    Mahboob, Qamar

    2014-01-01

    For railways, risk analysis is carried out to identify hazardous situations and their consequences. Until recently, classical methods such as Fault Tree Analysis (FTA) and Event Tree Analysis (ETA) were applied in modelling the linear and logically deterministic aspects of railway risks, safety and reliability. However, it has been proven that modern railway systems are rather complex, involving multi-dependencies between system variables and uncertainties about these dependencies. For train ...
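
The move from deterministic fault/event trees to probabilistic dependencies can be shown with a two-line application of Bayes' rule over a minimal network (track fault → alarm, track fault → accident). The structure and numbers below are invented for illustration, not taken from the thesis.

```python
# Minimal Bayesian-network sketch: conditional probability tables for a
# hypothetical railway scenario.
p_fault = 0.01                       # P(track fault)
p_alarm = {True: 0.95, False: 0.02}  # P(sensor alarm | fault)
p_acc = {True: 0.30, False: 0.001}   # P(accident | fault)

def posterior_fault_given_alarm():
    # Bayes' rule over the two fault states
    num = p_fault * p_alarm[True]
    den = num + (1 - p_fault) * p_alarm[False]
    return num / den

def p_accident_given_alarm():
    # Marginalize the accident probability over the updated fault belief
    pf = posterior_fault_given_alarm()
    return pf * p_acc[True] + (1 - pf) * p_acc[False]

pf = posterior_fault_given_alarm()
pa = p_accident_given_alarm()
print(round(pf, 3), round(pa, 4))
```

Unlike a fault tree, the same network answers diagnostic queries (fault given alarm) and predictive ones (accident given alarm) from one set of conditional tables.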

  1. Methodology for Developing Hydrological Models Based on an Artificial Neural Network to Establish an Early Warning System in Small Catchments

    Directory of Open Access Journals (Sweden)

    Ivana Sušanj

    2016-01-01

Full Text Available In some situations there is no possibility of hazard mitigation, especially if the hazard is induced by water; it is therefore important to prevent consequences via an early warning system (EWS) that announces the possible occurrence of a hazard. The aim of this paper is to investigate the possibility of implementing an EWS in a small-scale catchment and to develop a methodology for building a hydrological prediction model based on an artificial neural network (ANN) as an essential part of the EWS. The methodology is implemented in a case study of the Slani Potok catchment, historically recognized as a hazard-prone area, by establishing continuous monitoring of meteorological and hydrological parameters to collect data for the training, validation, and evaluation of the prediction capabilities of the ANN model. The model is validated and evaluated by visual inspection and common calculation approaches, together with a newly proposed assessment based on separating the observed data into classes around the mean value, on the percentages of data above or below the mean, and on the mean absolute error.
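
The class-based evaluation can be sketched directly: split the observations at their mean, report the share of each class, and compute the mean absolute error per class. The data below are synthetic, and the split rule is one plausible reading of the abstract.

```python
# Class-based model evaluation sketch: observations split at their mean,
# MAE reported per class. Values are synthetic water levels (m).
observed  = [0.2, 0.3, 1.8, 2.5, 0.4, 0.1, 3.0, 0.5]
predicted = [0.3, 0.2, 1.5, 2.9, 0.5, 0.2, 2.6, 0.4]

mean_obs = sum(observed) / len(observed)
low  = [(o, p) for o, p in zip(observed, predicted) if o <= mean_obs]
high = [(o, p) for o, p in zip(observed, predicted) if o > mean_obs]

def mae(pairs):
    return sum(abs(o - p) for o, p in pairs) / len(pairs)

share_high = 100 * len(high) / len(observed)
print(share_high, round(mae(low), 3), round(mae(high), 3))
```

Separating the error this way exposes whether a model that looks good on average actually tracks the rare high-water events that an early warning system exists to catch.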

  2. Study and methodology development for quality control in the production process of iodine-125 radioactive sealed sources applied to brachytherapy

    International Nuclear Information System (INIS)

    Moura, Joao Augusto

    2009-01-01

Today cancer is the second leading cause of death by disease in several countries, including Brazil. Excluding skin cancer, prostate cancer is the most incident in the population. Prostate tumors can be treated in several ways, including brachytherapy, which consists in introducing sealed radioactive sources (iodine-125 seeds) inside the tumor. The target region of treatment receives a high radiation dose, while healthy neighboring tissues receive a significantly reduced one. The seed is made of a weld-sealed titanium capsule, 0.8 mm in external diameter and 4.5 mm in length, enclosing a 0.5 mm diameter silver wire with iodine-125 adsorbed onto it. After welding, the seeds have to be submitted to a leak test to prevent any release of radioactive material. The aims of this work were: (a) to study the different leak-test methods applied to radioactive seeds and recommended by ISO 997820, (b) to choose the appropriate method and (c) to determine the flowchart of the process to be used during seed production. The tests went beyond the standards by using ultrasound during immersion, with corresponding benefits for leak detection. Best results were obtained with immersion in distilled water at 20 degrees C for 24 hours and in distilled water at 70 degrees C for 30 minutes; these methods will be used during seed production. The process flowchart includes all the phases of the leak tests in the sequence determined by the experiments. (author)

  3. Optimization of a methodology for assessing bioaccumulation factors of metals in periphyton applying the X-ray fluorescence technique

    International Nuclear Information System (INIS)

    Merced Ch, D.

    2015-01-01

The Lerma River is one of the most polluted rivers in Mexico: it carries a high pollutant load and supports low biodiversity, and the aquatic plants and zooperiphyton species living there have developed adaptations to the environmental conditions created by wastewater discharge. In this paper, bioaccumulation factors (BAF) of the metals Cr, Mn, Fe, Ni, Cu, Zn and Pb in the zooperiphyton associated with Hydrocotyle ranunculoides in the upper course of the Lerma River were evaluated applying the Total Reflection X-ray Fluorescence technique. The BAF were higher for the soluble fraction than for the total fraction, because metals in the soluble phase are in solution and therefore more available for uptake by aquatic organisms; with respect to sediment, the BAF were ≤ 1.5, indicating that these organisms have little affinity for incorporating metals from the sediment. Considering the sum of the BAF of all metals in each organism, the leech bioaccumulated the most metals (42,468), followed by the worm (27,958), the arthropod (10,757) and finally the snail (8,421). Overall, according to the BAF reported in this study, the organisms bioaccumulate metals in the following order: Fe > Zn > Cu > Cr > Ni > Mn > Pb. (Author)
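
A bioaccumulation factor is simply the ratio of the metal concentration in the organism to its concentration in the surrounding medium. The sketch below computes and ranks BAF values for one organism against the soluble fraction, with invented concentrations rather than the study's measurements.

```python
# BAF sketch: concentration in organism (mg/kg) over concentration in the
# water's soluble fraction (mg/L). All values are illustrative placeholders.
organism_conc = {"Fe": 5200.0, "Zn": 910.0, "Cu": 140.0, "Pb": 3.1}
soluble_conc  = {"Fe": 0.80,   "Zn": 0.25,  "Cu": 0.05,  "Pb": 0.02}

baf = {m: organism_conc[m] / soluble_conc[m] for m in organism_conc}
ranking = sorted(baf, key=baf.get, reverse=True)
total = sum(baf.values())            # per-organism sum, as used in the study
print(ranking, round(total))
```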

  4. [Applying the Methodology and Practice of Microhistory: The Diary of a Confucian Doctor, Yi Mun-gǒn (1495-1567)].

    Science.gov (United States)

    Shin, Dongwon

    2015-08-01

Since microhistory's approach to the past is based on an understanding of and a sympathy for the concrete details of human lives, its area of interests overlaps with the history of medicine and medical humanities, which examine illness and health. If we put a specific region and society in a specific period under a microscope and increase the magnifying power, we can understand the numerous network connections among the body, illness management, and medicine and how multilayered were the knowledge and power applied to them. And this approach of using microhistory to illuminate medical history can be more effective than any other historical approach. This article focuses on Yi Mun-gǒn's extensive volumes of Mukchaeilgi (Mukchae's diary) in approaching medical history from the perspective of microhistory. Simply defined, this work is a Confucian scholar-doctor's diary. Its author, Yi Mun-gǒn, played the role of a Confucian doctor, although not professionally, during his 23-year exile, after serving in a high governmental office on the senior grade of the third court rank. Thanks to this extensive and detailed diary, we can now get a detailed and thorough picture of his medical practice in the Sǒngju region, 270 kilometers southeast of Seoul, where he was exiled. This article aims to understand the state of medical practice in the Sǒngju region in the 16th century through the "zoom-in" method adopted by microhistory. In particular, I will focus on the following three aspects: 1) Yi Mun-gǒn's motivation for and method of medical study, 2) the character of Yi Mun-gǒn's patient treatment as hwarin (the act of life-saving), and 3) the plural existence of various illness management methods, including pyǒngjǒm (divination of illness), sutra-chanting, exorcism, and ch'oje (ritual toward Heaven). All three aspects are closely related to Confucianism. First, Yi Mun-gǒn decided to acquire professional-level medical knowledge in order to practice the Confucian virtue of filial

  5. Restorative outcomes for endodontically treated teeth in the Practitioners Engaged in Applied Research and Learning network.

    Science.gov (United States)

    Spielman, Howard; Schaffer, Scott B; Cohen, Mitchell G; Wu, Hongyu; Vena, Donald A; Collie, Damon; Curro, Frederick A; Thompson, Van P; Craig, Ronald G

    2012-07-01

The authors aimed to determine the outcome of and factors associated with success and failure of restorations in endodontically treated teeth in patients in practices participating in the Practitioners Engaged in Applied Research and Learning (PEARL) Network. Practitioner-investigators (P-Is) invited the enrollment of all patients seeking care at participating practices who had undergone primary endodontic therapy and restoration in a permanent tooth three to five years earlier. P-Is classified endodontically treated teeth as restorative failures if the restoration was replaced, the restoration needed replacement or the tooth was cracked or fractured. P-Is from 64 practices enrolled 1,298 eligible patients in the study who had endodontically treated teeth that had been restored. The mean (standard deviation) time to follow-up was 3.9 (0.6) years. Of the 1,298 enrolled teeth, P-Is classified 181 (13.9 percent; 95 percent confidence interval [CI], 12.1-15.8 percent) as restorative failures: 44 (3.4 percent) due to cracks or fractures, 57 (4.4 percent) due to replacement of the original restoration for reasons other than fracture and 80 (6.2 percent) due to need for a new restoration. When analyzing the results by means of multivariate logistic regression, the authors found a greater risk of restorative failure to be associated with canines or incisors and premolars (P = .04), intracoronal restorations (P < .01), lack of preoperative proximal contacts (P < .01), presence of periodontal connective-tissue attachment loss (P < .01), younger age (P = .01), Hispanic/Latino ethnicity (P = .04) and endodontic therapy not having been performed by a specialist (P = .04). These results suggest that molars (as opposed to other types of teeth), full-coverage restorations, preoperative proximal contacts, good periodontal health, non-Hispanic/Latino ethnicity, endodontic therapy performed by a specialist and older patient age are associated with restorative success for

  6. Applying the PR and PP Methodology for a qualitative assessment of a misuse scenario in a notional Generation IV Example Sodium Fast Reactor. Assessing design variations

    Energy Technology Data Exchange (ETDEWEB)

    Cojazzi, G.G.M.; Renda, G. [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, TP 210, Via E. Fermi 2749, I-21027, Ispra - Va (Italy); Hassberger, J. [Lawrence Livermore National Laboratory (United States)

    2009-06-15

The Generation IV International Forum (GIF) Proliferation Resistance and Physical Protection (PR and PP) Working Group has developed a methodology for the PR and PP evaluation of advanced nuclear energy systems. The methodology is organised as a progressive approach applying alternative methods at different levels of thoroughness as more design information becomes available and research improves the depth of technical knowledge. The Working Group developed a notional sodium-cooled fast neutron nuclear reactor, named the Example Sodium Fast Reactor (ESFR), for use in developing and testing the methodology. The ESFR is a hypothetical nuclear energy system consisting of four sodium-cooled fast reactors of medium size, co-located with an on-site dry fuel storage facility and a Fuel Cycle Facility with pyrochemical processing of the spent fuel and re-fabrication of new ESFR fuel elements. The baseline design is an actinide burner, with LWR spent fuel elements as feed material processed on the site. In the years 2007 and 2008 the GIF PR and PP Working Group performed a case study designed to both test the methodology and demonstrate how it can provide useful feedback to designers even during pre-conceptual design. The study analysed the response of the entire ESFR system to different proliferation and theft strategies. Three proliferation threats were considered: concealed diversion, concealed misuse and abrogation. An overt theft threat was also studied. One of the objectives of the case study is to confirm the capability of the methodology to capture PR and PP differences among varied design configurations. To this aim, Design Variations (DV) have also been defined, corresponding respectively to a) a small variation of the baseline design (DV0), b) a deep-burner configuration (DV1), c) a self-sufficient core (DV2), and d) a breeder configuration (DV3). This paper builds on the approach followed for the

  7. Implications of applying methodological shortcuts to expedite systematic reviews: three case studies using systematic reviews from agri-food public health.

    Science.gov (United States)

    Pham, Mai T; Waddell, Lisa; Rajić, Andrijana; Sargeant, Jan M; Papadopoulos, Andrew; McEwen, Scott A

    2016-12-01

The rapid review is an approach to synthesizing research evidence when a shorter timeframe is required. The implications of what is lost in terms of rigour, increased bias and accuracy when conducting a rapid review have not yet been elucidated. We assessed the potential implications of methodological shortcuts on the outcomes of three completed systematic reviews addressing agri-food public health topics. For each review, shortcuts were applied individually to assess the impact on the number of relevant studies included and whether omitted studies affected the direction, magnitude or precision of summary estimates from meta-analyses. In most instances, the shortcuts resulted in at least one relevant study being omitted from the review. The omission of studies affected 39 of 143 possible meta-analyses, of which 14 were no longer possible because of insufficient studies. The omitted studies generally resulted in less precise pooled estimates (i.e. wider confidence intervals) that did not differ in direction from the original estimate. The three case studies demonstrated the risk of missing relevant literature, and its impact on summary estimates, when methodological shortcuts are applied in rapid reviews. © 2016 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd.
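
The effect the review measures — omitting a study rarely flips the direction of the pooled estimate but does widen its confidence interval — can be reproduced with a fixed-effect (inverse-variance) meta-analysis on synthetic effect sizes.

```python
import math

# Fixed-effect (inverse-variance) pooling sketch. Each study contributes
# (effect, standard error); all values are synthetic.
studies = [(0.40, 0.10), (0.35, 0.15), (0.50, 0.12), (0.30, 0.20)]

def pool(studies):
    """Return the pooled estimate and its 95% confidence interval."""
    w = [1 / se ** 2 for _, se in studies]
    est = sum(wi * e for wi, (e, _) in zip(w, studies)) / sum(w)
    se = math.sqrt(1 / sum(w))
    return est, (est - 1.96 * se, est + 1.96 * se)

full_est, full_ci = pool(studies)
short_est, short_ci = pool(studies[:-1])   # a shortcut misses one study
width = lambda ci: ci[1] - ci[0]
print(round(full_est, 3), round(width(full_ci), 3), round(width(short_ci), 3))
```

Dropping the last study leaves the direction of the pooled effect unchanged but widens the interval, mirroring the paper's finding.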

  8. Forecasting of passenger traffic in Moscow metro applying artificial neural networks

    International Nuclear Information System (INIS)

    Ivanov, V.V.; Natsional'nyj Issledovatel'skij Yadernyj Univ. MIFI, Moscow; FKU Rostransmodernizatsiya, Moscow

    2016-01-01

Methods for forecasting passenger traffic in the Moscow metro have been developed using artificial neural networks. To this end, the factors primarily determining passenger traffic in the subway have been analyzed and selected.

  9. A Belief Network Decision Support Method Applied to Aerospace Surveillance and Battle Management Projects

    National Research Council Canada - National Science Library

    Staker, R

    2003-01-01

    This report demonstrates the application of a Bayesian Belief Network decision support method for Force Level Systems Engineering to a collection of projects related to Aerospace Surveillance and Battle Management...

  10. Bluetooth Low Power Modes Applied to the Data Transportation Network in Home Automation Systems

    Science.gov (United States)

    Etxaniz, Josu; Aranguren, Gerardo

    2017-01-01

Even though home automation is a well-known research and development area, recent technological improvements in areas such as context recognition, sensing, wireless communications and embedded systems have boosted wireless smart homes. This paper focuses on some of those areas related to home automation, drawing attention to wireless communications issues on embedded systems. Specifically, the paper discusses multi-hop networking together with Bluetooth technology and latency as a quality of service (QoS) metric. Bluetooth is a worldwide standard that provides low-power multi-hop networking. It is a radio-license-free technology and establishes point-to-point and point-to-multipoint links, known as piconets, or multi-hop networks, known as scatternets. In this way, many Bluetooth nodes can be interconnected to deploy ambient-intelligence networks. The paper presents research on multi-hop latency with the park and sniff low-power modes of Bluetooth, carried out on the test platform developed. In addition, an empirical model is derived for the latency of Bluetooth multi-hop communications over asynchronous links when the links in scatternets are always in sniff or park mode. Designers of smart home devices and networks can take advantage of the models and of the delay estimates they provide for communications along Bluetooth multi-hop networks. PMID:28468294

  11. Bluetooth Low Power Modes Applied to the Data Transportation Network in Home Automation Systems

    Directory of Open Access Journals (Sweden)

    Josu Etxaniz

    2017-04-01

Full Text Available Even though home automation is a well-known research and development area, recent technological improvements in areas such as context recognition, sensing, wireless communications and embedded systems have boosted wireless smart homes. This paper focuses on some of those areas related to home automation, drawing attention to wireless communications issues on embedded systems. Specifically, the paper discusses multi-hop networking together with Bluetooth technology and latency as a quality of service (QoS) metric. Bluetooth is a worldwide standard that provides low-power multi-hop networking. It is a radio-license-free technology and establishes point-to-point and point-to-multipoint links, known as piconets, or multi-hop networks, known as scatternets. In this way, many Bluetooth nodes can be interconnected to deploy ambient-intelligence networks. The paper presents research on multi-hop latency with the park and sniff low-power modes of Bluetooth, carried out on the test platform developed. In addition, an empirical model is derived for the latency of Bluetooth multi-hop communications over asynchronous links when the links in scatternets are always in sniff or park mode. Designers of smart home devices and networks can take advantage of the models and of the delay estimates they provide for communications along Bluetooth multi-hop networks.

  12. Bluetooth Low Power Modes Applied to the Data Transportation Network in Home Automation Systems.

    Science.gov (United States)

    Etxaniz, Josu; Aranguren, Gerardo

    2017-04-30

Even though home automation is a well-known research and development area, recent technological improvements in areas such as context recognition, sensing, wireless communications and embedded systems have boosted wireless smart homes. This paper focuses on some of those areas related to home automation, drawing attention to wireless communications issues on embedded systems. Specifically, the paper discusses multi-hop networking together with Bluetooth technology and latency as a quality of service (QoS) metric. Bluetooth is a worldwide standard that provides low-power multi-hop networking. It is a radio-license-free technology and establishes point-to-point and point-to-multipoint links, known as piconets, or multi-hop networks, known as scatternets. In this way, many Bluetooth nodes can be interconnected to deploy ambient-intelligence networks. The paper presents research on multi-hop latency with the park and sniff low-power modes of Bluetooth, carried out on the test platform developed. In addition, an empirical model is derived for the latency of Bluetooth multi-hop communications over asynchronous links when the links in scatternets are always in sniff or park mode. Designers of smart home devices and networks can take advantage of the models and of the delay estimates they provide for communications along Bluetooth multi-hop networks.
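
An empirical latency model of the kind described above can be obtained by ordinary least squares on measured latency versus hop count. The sniff-mode timings below are invented placeholders, not the paper's measurements.

```python
# Fit latency = intercept + slope * hops by simple linear regression.
# Timings (ms) are hypothetical stand-ins for test-platform measurements.
hops = [1, 2, 3, 4, 5, 6]
sniff_ms = [18, 39, 57, 81, 99, 122]

n = len(hops)
mx = sum(hops) / n
my = sum(sniff_ms) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(hops, sniff_ms))
         / sum((x - mx) ** 2 for x in hops))
intercept = my - slope * mx

def predict(h):
    """Estimated end-to-end latency (ms) for an h-hop scatternet path."""
    return intercept + slope * h

print(round(slope, 1), round(predict(8), 1))
```

The per-hop slope is the useful design number: it tells a smart-home designer roughly how much delay each additional scatternet hop costs in a given low-power mode.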

  13. Dynamic network reconstruction from gene expression data applied to immune response during bacterial infection.

    Science.gov (United States)

    Guthke, Reinhard; Möller, Ulrich; Hoffmann, Martin; Thies, Frank; Töpfer, Susanne

    2005-04-15

The immune response to bacterial infection represents a complex network of dynamic gene and protein interactions. We present an optimized reverse engineering strategy aimed at the reconstruction of this kind of interaction network. The proposed approach is based on both microarray data and available biological knowledge. The main kinetics of the immune response were identified by fuzzy clustering of gene expression profiles (time series). The number of clusters was optimized using various evaluation criteria. For each cluster a representative gene with a high fuzzy membership was chosen in accordance with available physiological knowledge. Hypothetical network structures were then identified by seeking systems of ordinary differential equations whose simulated kinetics could fit the gene expression profiles of the cluster-representative genes. For the construction of hypothetical network structures, singular value decomposition (SVD)-based methods and a newly introduced heuristic Network Generation Method were compared. It turned out that the proposed novel method could find sparser networks and gave better fits to the experimental data. Reinhard.Guthke@hki-jena.de.
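
The reconstruction step — seeking a system of ordinary differential equations whose simulated kinetics fit the representative profiles — can be sketched for the linear case dx/dt = A x: simulate a small synthetic system, then recover the interaction matrix A by least squares on finite-difference derivatives. The two-gene system below is invented for illustration.

```python
import numpy as np

# Synthetic 2-gene linear network: gene 1 decays, gene 2 is activated by
# gene 1 and decays. Goal: recover A_true from the time series alone.
A_true = np.array([[-0.5, 0.0],
                   [0.8, -0.3]])
dt, steps = 0.01, 400
x = np.zeros((steps, 2))
x[0] = [1.0, 0.2]
for k in range(steps - 1):            # forward-Euler simulation
    x[k + 1] = x[k] + dt * A_true @ x[k]

# Estimate derivatives by finite differences, then solve x @ A^T = dx/dt
dxdt = (x[1:] - x[:-1]) / dt
A_est, *_ = np.linalg.lstsq(x[:-1], dxdt, rcond=None)
A_est = A_est.T
print(np.round(A_est, 2))
```

Real expression data are short and noisy, which is why the paper needs sparsity-seeking heuristics rather than this plain least-squares inversion.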

  14. A Neural Network Controller New Methodology for the ATR-42 Morphing Wing Actuation

    Directory of Open Access Journals (Sweden)

    Abdallah Ben MOSBAH

    2016-06-01

Full Text Available A morphing wing model is used to improve aircraft performance. To obtain the desired airfoils, electrical actuators installed inside the wing morph its upper surface into the desired shape. To achieve this objective, a robust position controller is needed. In this research, the design and test validation of a controller based on neural networks is presented. This controller is composed of a position controller and a current controller that manages the current consumed by the electrical actuators to obtain the desired displacement. The model was tested and validated using simulation and experimental tests, and the results obtained with the proposed controller were compared to those given by a PID controller. Wind tunnel tests were conducted in the Price-Païdoussis Wind Tunnel at the LARCASE laboratory in order to calculate the pressure coefficient distribution on an ATR-42 morphing wing model for different flow conditions. The pressure coefficients obtained experimentally were compared with their numerical values given by the XFoil software.

  15. Alignment of the UMLS semantic network with BioTop: methodology and assessment.

    Science.gov (United States)

    Schulz, Stefan; Beisswanger, Elena; van den Hoek, László; Bodenreider, Olivier; van Mulligen, Erik M

    2009-06-15

For many years, the Unified Medical Language System (UMLS) semantic network (SN) has been used as an upper-level semantic framework for the categorization of terms from terminological resources in biomedicine. BioTop has recently been developed as an upper-level ontology for the biomedical domain. In contrast to the SN, it is founded upon strict ontological principles, using OWL DL as a formal representation language, which has become standard in the Semantic Web. In order to make logic-based reasoning available for the resources annotated or categorized with the SN, a mapping ontology was developed aligning the SN with BioTop. The theoretical foundations and the practical realization of the alignment are described, with a focus on the design decisions taken, the problems encountered and the adaptations of BioTop that became necessary. For evaluation purposes, UMLS concept pairs obtained from MEDLINE abstracts by a named-entity recognition system were tested for possible semantic relationships. Furthermore, all semantic-type combinations that occur in the UMLS Metathesaurus were checked for satisfiability. The effort-intensive alignment process required major design changes and enhancements of BioTop and brought up several design errors that could be fixed. A comparison between a human curator and the ontology yielded only low agreement. Ontology reasoning was also used to successfully identify 133 inconsistent semantic-type combinations. BioTop, the OWL DL representation of the UMLS SN, and the mapping ontology are available at http://www.purl.org/biotop/.

  16. A Very Large Area Network (VLAN) knowledge-base applied to space communication problems

    Science.gov (United States)

    Zander, Carol S.

    1988-01-01

    This paper first describes a hierarchical model for very large area networks (VLAN). Space communication problems whose solution could profit by the model are discussed and then an enhanced version of this model incorporating the knowledge needed for the missile detection-destruction problem is presented. A satellite network or VLAN is a network which includes at least one satellite. Due to the complexity, a compromise between fully centralized and fully distributed network management has been adopted. Network nodes are assigned to a physically localized group, called a partition. Partitions consist of groups of cell nodes with one cell node acting as the organizer or master, called the Group Master (GM). Coordinating the group masters is a Partition Master (PM). Knowledge is also distributed hierarchically, existing in at least two nodes. Each satellite node has a back-up earth node. Knowledge must be distributed in such a way as to minimize information loss when a node fails. Thus the model is hierarchical both physically and informationally.
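The partition scheme described above (cell nodes under a Group Master, Group Masters under a Partition Master, each satellite paired with a backup earth node) can be sketched in a few lines; the class names and failover rule are our illustrative reading of the model, not code from the paper:

```python
# Sketch of the hierarchical partition scheme: cell nodes grouped under a
# Group Master (GM), GMs coordinated by a Partition Master (PM), and each
# satellite node paired with a backup earth node holding replicated knowledge.

class Node:
    def __init__(self, name, is_satellite=False, backup=None):
        self.name = name
        self.is_satellite = is_satellite
        self.backup = backup          # earth node with a replica of the knowledge
        self.alive = True

class Group:
    def __init__(self, master, members):
        self.master = master          # Group Master (GM)
        self.members = members

    def active_master(self):
        """On GM failure, fall back to its backup so knowledge is not lost."""
        return self.master if self.master.alive else self.master.backup

class Partition:
    def __init__(self, pm, groups):
        self.pm = pm                  # Partition Master (PM)
        self.groups = groups

earth = Node("earth-1")
sat = Node("sat-1", is_satellite=True, backup=earth)
group = Group(master=sat, members=[sat, Node("cell-1"), Node("cell-2")])
part = Partition(pm=Node("pm-1"), groups=[group])

sat.alive = False                     # the satellite GM fails
survivor = group.active_master()      # knowledge survives on the earth backup
```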

  17. A collaborative computing framework of cloud network and WBSN applied to fall detection and 3-D motion reconstruction.

    Science.gov (United States)

    Lai, Chin-Feng; Chen, Min; Pan, Jeng-Shyang; Youn, Chan-Hyun; Chao, Han-Chieh

    2014-03-01

    As cloud computing and wireless body sensor network technologies gradually mature, ubiquitous healthcare services can prevent accidents instantly and effectively, as well as provide relevant information to reduce related processing time and cost. This study proposes a co-processing intermediary framework integrating cloud and wireless body sensor networks, mainly applied to fall detection and 3-D motion reconstruction. The main focuses of this study include distributed computing and resource allocation of processing sensing data over the computing architecture, network conditions, and performance evaluation. Through this framework, the transmission and computing time of sensing data are reduced to enhance overall performance for the services of fall event detection and 3-D motion reconstruction.
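The abstract does not give the detection algorithm, but a common baseline for fall detection over body-sensor accelerometer data is a free-fall dip followed shortly by an impact spike in total acceleration; a toy sketch with invented thresholds:

```python
import math

# Illustrative fall detector: flag a fall when the acceleration magnitude
# shows a free-fall dip followed within a short window by an impact spike.
# Thresholds (in g) and the window length are assumptions, not the paper's.

FREE_FALL_G, IMPACT_G, WINDOW = 0.4, 2.5, 10

def magnitude(sample):
    """Euclidean norm of a 3-axis accelerometer sample (in g)."""
    return math.sqrt(sum(a * a for a in sample))

def detect_fall(samples):
    mags = [magnitude(s) for s in samples]
    for i, m in enumerate(mags):
        if m < FREE_FALL_G:                           # free-fall phase
            if any(x > IMPACT_G for x in mags[i:i + WINDOW]):
                return True                           # impact follows the dip
    return False

walking = [(0.0, 0.0, 1.0)] * 20                      # steady ~1 g, no fall
fall = ([(0.0, 0.0, 1.0)] * 5 + [(0.0, 0.0, 0.1)] * 3
        + [(0.0, 3.0, 1.0)] + [(0.0, 0.0, 1.0)] * 5)  # dip, then impact
```

In the paper's architecture such a lightweight check would run on the sensor side, with the heavier 3-D motion reconstruction offloaded to the cloud.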

  18. Managing Distributed Innovation Processes in Virtual Organizations by Applying the Collaborative Network Relationship Analysis

    Science.gov (United States)

    Eschenbächer, Jens; Seifert, Marcus; Thoben, Klaus-Dieter

    Distributed innovation processes are considered a new option for handling both the complexity and the speed with which new products and services need to be prepared. Indeed, most research on innovation processes has focused on multinational companies with an intra-organisational perspective; the phenomenon of innovation processes in networks, with an inter-organisational perspective, has been almost neglected. Collaborative networks present a perfect playground for such distributed innovation processes, and the authors highlight Virtual Organisations in particular because of their dynamic behaviour. Research activities supporting distributed innovation processes in VOs are rather new, so little knowledge about the management of such processes is available. The presentation of the collaborative network relationship analysis addresses this gap. It is shown that qualitative planning of collaboration intensities can support real business cases by providing knowledge and planning data.

  19. Applying 4-regular grid structures in large-scale access networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas P.; Patel, Ahmed

    2006-01-01

    4-Regular grid structures have been used in multiprocessor systems for decades due to a number of nice properties with regard to routing, protection, and restoration, together with a straightforward planar layout. These qualities are to an increasing extent demanded also in largescale access...... networks, but concerning protection and restoration these demands have been met only to a limited extent by the commonly used ring and tree structures. To deal with the fact that classical 4-regular grid structures are not directly applicable in such networks, this paper proposes a number of extensions...... concerning restoration, protection, scalability, embeddability, flexibility, and cost. The extensions are presented as a tool case, which can be used for implementing semi-automatic and in the longer term full automatic network planning tools....

  20. Discovering complex interrelationships between socioeconomic status and health in Europe: A case study applying Bayesian Networks.

    Science.gov (United States)

    Alvarez-Galvez, Javier

    2016-03-01

    Studies assume that socioeconomic status determines individuals' states of health, but how does health determine socioeconomic status? And how does this association vary depending on contextual differences? To answer this question, our study uses an additive Bayesian Networks model to explain the interrelationships between health and socioeconomic determinants using complex and messy data. This model has been used to find the most probable structure in a network to describe the interdependence of these factors in five European welfare state regimes. The advantage of this study is that it offers a specific picture to describe the complex interrelationship between socioeconomic determinants and health, producing a network that is controlled by socio-demographic factors such as gender and age. The present work provides a general framework to describe and understand the complex association between socioeconomic determinants and health. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Applying Fuzzy Logic and Data Mining Techniques in Wireless Sensor Network for Determination Residential Fire Confidence

    Directory of Open Access Journals (Sweden)

    Mirjana Maksimović

    2014-09-01

    Full Text Available The main goal of soft computing technologies (fuzzy logic, neural networks, fuzzy rule-based systems, data mining techniques…) is to find and describe the structural patterns in data in order to explain the connections between data and, on that basis, to create predictive or descriptive models. Integrating these technologies into sensor nodes seems to be a good idea because it can lead to significant improvements in network performance, above all reducing energy consumption and extending the lifetime of the network. The purpose of this paper is to analyze different algorithms for the case of fire confidence determination in order to see which methods and parameter values work best for the given problem. Hence, an analysis between different classification algorithms in a case of nominal and numerical d
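As a concrete illustration of fuzzy-logic fire confidence (the paper's actual rule base and membership functions are not given here), a toy rule combining temperature and smoke readings with triangular membership functions and invented breakpoints:

```python
# Toy fuzzy-rule sketch of fire-confidence determination. The membership
# breakpoints and the single rule are invented for illustration only.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fire_confidence(temp_c, smoke_ppm):
    temp_high = tri(temp_c, 40, 70, 100)
    smoke_high = tri(smoke_ppm, 100, 300, 500)
    # Rule: IF temperature is high AND smoke is high THEN fire (min models AND)
    return min(temp_high, smoke_high)

low = fire_confidence(25, 50)    # cool room, little smoke
high = fire_confidence(70, 300)  # hot and smoky
```

On a sensor node, evaluating a handful of such rules is far cheaper than transmitting raw readings, which is the energy argument the abstract makes.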

  2. Applied techniques for high bandwidth data transfers across wide area networks

    International Nuclear Information System (INIS)

    Lee, J.; Gunter, D.; Tierney, B.; Allcock, B.; Bester, J.; Bresnahan, J.; Tuecke, S.

    2001-01-01

    Large distributed systems such as Computational/Data Grids require large amounts of data to be co-located with the computing facilities for processing. From their work developing a scalable distributed network cache, the authors have gained experience with techniques necessary to achieve high data throughput over high bandwidth Wide Area Networks (WAN). The authors discuss several hardware and software design techniques, and then describe their application to an implementation of an enhanced FTP protocol called GridFTP. The authors describe results from the Supercomputing 2000 conference

  3. Distance-Based Access Modifiers Applied to Safety in Home Networks

    DEFF Research Database (Denmark)

    Mortensen, Kjeld Høyer; Schougaard, Kari Rye; Schultz, Ulrik Pagh

    2004-01-01

    Home networks and the interconnection of home appliances is a classical theme in ubiquitous computing research. Security is a recurring concern, but there is a lack of awareness of safety: preventing the computerized house from harming the inhabitants, even in a worst-case scenario where...... be performed within a physical proximity that ensures safety. We use a declarative approach integrated with an IDL language to express location-based restrictions on operations. This model has been implemented in a middleware for home audio-video devices, using infrared communication and a local-area network...

  4. Teaching strategies applied to teaching computer networks in Engineering in Telecommunications and Electronics

    Directory of Open Access Journals (Sweden)

    Elio Manuel Castañeda-González

    2016-07-01

    Full Text Available Because of the large impact that computer networks have today, their study in related fields such as Telecommunications and Electronics Engineering holds great appeal for students. However, content that lacks a strong practical component can make this interest decrease considerably. This paper proposes the use of teaching strategies and analogies, media, and interactive applications that enhance the teaching of the networks discipline and encourage its study. It starts from an analysis of how the discipline is taught, followed by a description of each of these strategies with their respective contributions to student learning.

  5. Guidelines for developing and updating Bayesian belief networks applied to ecological modeling and conservation.

    Science.gov (United States)

    B.G. Marcot; J.D. Steventon; G.D. Sutherland; R.K. McCann

    2006-01-01

    We provide practical guidelines for developing, testing, and revising Bayesian belief networks (BBNs). Primary steps in this process include creating influence diagrams of the hypothesized "causal web" of key factors affecting a species or ecological outcome of interest; developing a first, alpha-level BBN model from the influence diagram; revising the model...
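The step from influence diagram to a first, alpha-level BBN can be illustrated with a deliberately tiny example: one hypothesized cause, one ecological outcome, invented conditional probabilities, queried by enumeration and Bayes' rule:

```python
# Tiny BBN sketch: habitat quality -> species persistence, with invented
# conditional probabilities, queried by direct enumeration.

p_habitat_good = 0.6                         # prior P(habitat is good)
p_persist_given = {True: 0.9, False: 0.3}    # P(persist | habitat good/poor)

def p_persist():
    """Marginal probability of persistence, summing over habitat states."""
    return sum(
        (p_habitat_good if h else 1 - p_habitat_good) * p_persist_given[h]
        for h in (True, False)
    )

def p_habitat_given_persist(h=True):
    """Bayes' rule: update belief in habitat state after observing persistence."""
    prior = p_habitat_good if h else 1 - p_habitat_good
    return prior * p_persist_given[h] / p_persist()
```

A real alpha-level model would have many such nodes chained along the hypothesized causal web; the guidelines' revision steps then adjust the structure and the conditional probability tables against data and expert review.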

  6. A distributed multiagent system architecture for body area networks applied to healthcare monitoring.

    Science.gov (United States)

    Felisberto, Filipe; Laza, Rosalía; Fdez-Riverola, Florentino; Pereira, António

    2015-01-01

    In the last years the area of health monitoring has grown significantly, attracting the attention of both academia and commercial sectors. At the same time, the availability of new biomedical sensors and suitable network protocols has led to the appearance of a new generation of wireless sensor networks, the so-called wireless body area networks. Nowadays, these networks are routinely used for continuous monitoring of vital parameters, movement, and the surrounding environment of people, but the large volume of data generated in different locations represents a major obstacle for the appropriate design, development, and deployment of more elaborated intelligent systems. In this context, we present an open and distributed architecture based on a multiagent system for recognizing human movements, identifying human postures, and detecting harmful activities. The proposed system evolved from a single node for fall detection to a multisensor hardware solution capable of identifying unhampered falls and analyzing the users' movement. The experiments carried out contemplate two different scenarios and demonstrate the accuracy of our proposal as a real distributed movement monitoring and accident detection system. Moreover, we also characterize its performance, enabling future analyses and comparisons with similar approaches.

  7. Distributed Control using Decompositions applied to a network of houses with µCHP’s

    NARCIS (Netherlands)

    Larsen, Gunn; Scherpen, Jacquelien M.A.; van Foreest, Nicolaas; Meinsma, Gjerrit; Stigter, Han

    2010-01-01

    In this project, a network of households which are both producers and consumers of electricity is assumed. We investigate how pricing mechanisms can be used to control the electricity supply of micro Combined Heat and Power (µCHP) systems to the electricity grid. Such systems produce at the same time

  8. Applying of neural networks in determination of replacement cycle of spare parts

    International Nuclear Information System (INIS)

    Saric, Tomislav; Majdandzic; Niko; Lujic, Roberto

    2003-01-01

    The article shows the applicability of neural networks for determining the expected working time of equipment components before failure. Results based on measured and simulated values of the suggested model are presented. The advantages of the suggested model are analyzed in comparison with the traditional approach to the replacement of spare parts and components. The possibility of implementing the suggested model in a Management Information Maintenance System is described. (author)

  9. Social Network Analysis Applied to a Historical Ethnographic Study Surrounding Home Birth

    Directory of Open Access Journals (Sweden)

    Elena Andina-Diaz

    2018-04-01

    Full Text Available Safety during birth has improved since hospital delivery became standard practice, but the process has also become increasingly medicalised. Hence, recent years have witnessed a growing interest in home births due to the advantages they offer to mothers and their newborn infants. The aims of the present study were to confirm the transition from a home birth model of care to a scenario in which deliveries began to occur almost exclusively in a hospital setting; to define the social networks surrounding home births; and to determine whether geography exerted any influence on the social networks surrounding home births. Adopting a qualitative approach, we recruited 19 women who had given birth at home in the mid 20th century in a rural area in Spain. We employed a social network analysis method. Our results revealed three essential aspects that remain relevant today: the importance of health professionals in home delivery care, the importance of the mother’s primary network, and the influence of the geographical location of the actors involved in childbirth. All of these factors must be taken into consideration when developing strategies for maternal health.

  10. Social Network Analysis Applied to a Historical Ethnographic Study Surrounding Home Birth.

    Science.gov (United States)

    Andina-Diaz, Elena; Ovalle-Perandones, Mª Antonia; Ramos-Vidal, Ignacio; Camacho-Morell, Francisca; Siles-Gonzalez, Jose; Marques-Sanchez, Pilar

    2018-04-24

    Safety during birth has improved since hospital delivery became standard practice, but the process has also become increasingly medicalised. Hence, recent years have witnessed a growing interest in home births due to the advantages they offer to mothers and their newborn infants. The aims of the present study were to confirm the transition from a home birth model of care to a scenario in which deliveries began to occur almost exclusively in a hospital setting; to define the social networks surrounding home births; and to determine whether geography exerted any influence on the social networks surrounding home births. Adopting a qualitative approach, we recruited 19 women who had given birth at home in the mid 20th century in a rural area in Spain. We employed a social network analysis method. Our results revealed three essential aspects that remain relevant today: the importance of health professionals in home delivery care, the importance of the mother’s primary network, and the influence of the geographical location of the actors involved in childbirth. All of these factors must be taken into consideration when developing strategies for maternal health.
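A minimal sketch of the social network analysis step, with an invented set of actors and ties around a home birth, ranking actors by degree centrality:

```python
# Minimal SNA sketch: build an undirected support network around a birth
# and rank actors by degree centrality. Actors and ties are invented.

from collections import defaultdict

ties = [
    ("mother", "midwife"), ("mother", "her_mother"),
    ("mother", "neighbour"), ("midwife", "doctor"),
    ("her_mother", "neighbour"),
]

adjacency = defaultdict(set)
for a, b in ties:
    adjacency[a].add(b)
    adjacency[b].add(a)

degree = {actor: len(nbrs) for actor, nbrs in adjacency.items()}
central = max(degree, key=degree.get)   # most connected actor
```

In the study, such centrality measures over the recalled ties are what reveal the prominence of health professionals and of the mother's primary network.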

  11. Energy saving techniques applied over a nation-wide mobile network

    DEFF Research Database (Denmark)

    Perez, Eva; Frank, Philipp; Micallef, Gilbert

    2014-01-01

    Traffic carried over wireless networks has grown significantly in recent years and actual forecasts show that this trend is expected to continue. However, the rapid mobile data explosion and the need for higher data rates comes at a cost of increased complexity and energy consumption of the mobil...

  12. APPLIED OF IMPRESSED CURRENT CATHODIC PROTECTION DESIGN FOR FUEL PIPELINE NETWORK AT NAVAL BASE

    Directory of Open Access Journals (Sweden)

    k. Susilo

    2017-06-01

    Full Text Available The Indonesian Navy (TNI AL) is the main component of maritime security and defence. For this reason, the TNI AL needs Indonesian Warships (KRI) to cover the maritime area. To meet the fuel demand of KRI at a Naval Base, a new pipeline fuel distribution network system is needed. For maximum lifetime, the pipeline network system must be protected from corrosion. Basically, there are five methods of corrosion control: changing to a more suitable material, modifying the environment, using a protective coating, modifying the design of the system or component, and applying cathodic or anodic protection. Cathodic protection for pipelines is available in two kinds, namely sacrificial anodes and Impressed Current Cathodic Protection (ICCP). This paper analyses the design of Impressed Current Cathodic Protection and the total current requirement of the method, presenting both experimental results from specimen tests and theoretical calculations. The results show that the design of Impressed Current Cathodic Protection for the fuel distribution pipeline network system requires a voltage of 33,759 V (DC) and a protection current of 6,6035 A (DC) by theoretical calculation and 6,544 A (DC) from the pipeline specimen test, with a corrosion rate of 0,25 mpy. The transformer rectifier design requires 45 V and 10 A. This research can serve as literature and a standard for the Indonesian Navy in designing Impressed Current Cathodic Protection for fuel distribution pipeline network systems.
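The paper's current figures come from its own specimen tests; the textbook sizing relation behind an ICCP design (required current = protected surface area × design current density × coating breakdown factor) can be sketched as follows, with invented pipe dimensions and design figures:

```python
import math

# Textbook ICCP sizing sketch: the required protection current is the
# external pipe surface area times the design current density, scaled by
# the coating breakdown factor. All figures below are illustrative, not
# the paper's data.

def protection_current(diameter_m, length_m, current_density_a_m2,
                       coating_breakdown):
    area = math.pi * diameter_m * length_m    # external surface of the pipe run
    return area * current_density_a_m2 * coating_breakdown

# 0.2 m pipe, 500 m run, 0.02 A/m^2 on bare steel, 5 % coating breakdown
needed = protection_current(0.2, 500.0, 0.02, 0.05)
```

The transformer rectifier is then chosen with headroom above the computed current and the driving voltage, which is the role the paper's 45 V / 10 A unit plays.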

  13. Tourism Methodologies - New Perspectives, Practices and Procedures

    DEFF Research Database (Denmark)

    This volume offers methodological discussions within the multidisciplinary field of tourism and shows how tourism researchers develop and apply new tourism methodologies. The book is presented as an anthology, giving voice to many diverse researchers who reflect on tourism methodology in different...... in interview and field work situations, and how do we engage with the performative aspects of tourism as a field of study? The book acknowledges that research is also performance and that it constitutes an aspect of intervention in the situations and contexts it is trying to explore. This is an issue dealt...... codings and analysis, and tapping into the global network of social media....

  14. Fault detection Based Bayesian network and MOEA/D applied to Sensorless Drive Diagnosis

    Directory of Open Access Journals (Sweden)

    Zhou Qing

    2017-01-01

    Full Text Available Sensorless drive diagnosis can be used to assess process data without the need for additional cost-intensive sensor technology, making it possible to understand the damage state of the synchronous motor and its connecting parts. Considering the number of features involved in the process data, it is necessary to perform feature selection and reduce the data dimension in the process of fault detection. In this paper, the MOEA/D algorithm, based on multi-objective optimization, is used to obtain a weight vector over all the features in the original data set, making it more suitable to classify or make decisions based on these features. To ensure the speed and convenience of sensorless drive diagnosis, the classic Bayesian network structure learning algorithm, the K2 algorithm, is used to learn the network structure over the drive's features, which makes the fault detection and elimination process more targeted.

  15. Applied techniques for high bandwidth data transfers across wide area networks

    International Nuclear Information System (INIS)

    Lee, Jason; Gunter, Dan; Tierney, Brian; Allcock, Bill; Bester, Joe; Bresnahan, John; Tuecke, Steve

    2001-01-01

    Large distributed systems such as Computational/Data Grids require large amounts of data to be co-located with the computing facilities for processing. Ensuring that the data is there in time for the computation in today's Internet is a massive problem. From our work developing a scalable distributed network cache, we have gained experience with techniques necessary to achieve high data throughput over high bandwidth Wide Area Networks (WAN). In this paper, we discuss several hardware and software design techniques and issues, and then describe their application to an implementation of an enhanced FTP protocol called GridFTP. We also describe results from two applications using these techniques, which were obtained at the Supercomputing 2000 conference

  16. The spatial spread of schistosomiasis: A multidimensional network model applied to Saint-Louis region, Senegal

    Science.gov (United States)

    Ciddio, Manuela; Mari, Lorenzo; Sokolow, Susanne H.; De Leo, Giulio A.; Casagrandi, Renato; Gatto, Marino

    2017-10-01

    Schistosomiasis is a parasitic, water-related disease that is prevalent in tropical and subtropical areas of the world, causing severe and chronic consequences especially among children. Here we study the spatial spread of this disease within a network of connected villages in the endemic region of the Lower Basin of the Senegal River, in Senegal. The analysis is performed by means of a spatially explicit metapopulation model that couples local-scale eco-epidemiological dynamics with spatial mechanisms related to human mobility (estimated from anonymized mobile phone records), snail dispersal and hydrological transport of schistosome larvae along the main water bodies of the region. Results show that the model produces epidemiological patterns consistent with field observations, and point out the key role of spatial connectivity on the spread of the disease. These findings underline the importance of considering different transport pathways in order to elaborate disease control strategies that can be effective within a network of connected populations.
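The coupling of local dynamics with human mobility can be caricatured in a few lines; the sketch below uses an invented 3-village mobility matrix and a bare SIS-style update, far simpler than the paper's model (no snail dispersal, no hydrological transport):

```python
# Schematic metapopulation step: local infection dynamics in each village
# coupled by a row-stochastic human-mobility matrix. Parameters and the
# mobility matrix are invented for illustration.

def step(prevalence, mobility, beta=0.3, gamma=0.1):
    n = len(prevalence)
    # exposure in village i mixes local and visited-village prevalence
    exposure = [
        sum(mobility[i][j] * prevalence[j] for j in range(n)) for i in range(n)
    ]
    return [
        min(1.0, p + beta * exposure[i] * (1 - p) - gamma * p)
        for i, p in enumerate(prevalence)
    ]

mobility = [
    [0.8, 0.1, 0.1],     # fraction of time villagers of i spend in j
    [0.2, 0.7, 0.1],
    [0.1, 0.1, 0.8],
]
prev = [0.5, 0.0, 0.0]   # outbreak seeded in village 0 only
for _ in range(100):
    prev = step(prev, mobility)
```

Even this caricature reproduces the abstract's qualitative point: with nonzero mobility the infection reaches initially disease-free villages, so control strategies must target the connected network, not single villages.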

  17. Willingness to purchase Genetically Modified food: an analysis applying artificial Neural Networks

    OpenAIRE

    Salazar-Ordóñez, M.; Rodríguez-Entrena, M.; Becerra-Alonso, D.

    2014-01-01

    Findings about consumer decision-making process regarding GM food purchase remain mixed and are inconclusive. This paper offers a model which classifies willingness to purchase GM food, using data from 399 surveys in Southern Spain. Willingness to purchase has been measured using three dichotomous questions and classification, based on attitudinal, cognitive and socio-demographic factors, has been made by an artificial neural network model. The results show 74% accuracy to forecast the willin...

  18. Modulation of cortical-subcortical networks in Parkinson’s disease by applied field effects

    OpenAIRE

    Hess, Christopher W.

    2013-01-01

    Studies suggest that endogenous field effects may play a role in neuronal oscillations and communication. Non-invasive transcranial electrical stimulation with low-intensity currents can also have direct effects on the underlying cortex as well as distant network effects. While Parkinson’s disease (PD) is amenable to invasive neuromodulation in the basal ganglia by deep brain stimulation (DBS), techniques of non-invasive neuromodulation like transcranial direct current stimulation (tDCS) and ...

  19. ANALYTIC NETWORK PROCESS AND BALANCED SCORECARD APPLIED TO THE PERFORMANCE EVALUATION OF PUBLIC HEALTH SYSTEMS

    Directory of Open Access Journals (Sweden)

    Marco Aurélio Reis dos Santos

    2015-08-01

    Full Text Available The performance of public health systems is an issue of great concern. After all, to assure people's quality of life, public health systems need different kinds of resources. Balanced Scorecard provides a multi-dimensional evaluation framework. This paper presents the application of the Analytic Network Process and Balanced Scorecard in the performance evaluation of a public health system in a typical medium-sized Southeastern town in Brazil.

  20. Regional Understanding and Unity of Effort: Applying the Global SOF Network in Future Operating Environments Communications

    Science.gov (United States)

    2016-12-07

    disrupt terrorist networks. COL Christopher Varhola, USAR, has a Ph.D. in Cultural Anthropology and is a Joint Special Operations University Senior Fellow. ...conflict, where building relations and empowering regional states and organizations are logical remedies and are rightly a key element of U.S. diplomatic... growing regional powers and organizations. As a result, U.S. freedom of action is reduced and requires coordination and permission from partner

  1. A modified eco-efficiency framework and methodology for advancing the state of practice of sustainability analysis as applied to green infrastructure.

    Science.gov (United States)

    Ghimire, Santosh R; Johnston, John M

    2017-09-01

    We propose a modified eco-efficiency (EE) framework and novel sustainability analysis methodology for green infrastructure (GI) practices used in water resource management. Green infrastructure practices such as rainwater harvesting (RWH), rain gardens, porous pavements, and green roofs are emerging as viable strategies for climate change adaptation. The modified framework includes 4 economic, 11 environmental, and 3 social indicators. Using 6 indicators from the framework, at least 1 from each dimension of sustainability, we demonstrate the methodology to analyze RWH designs. We use life cycle assessment and life cycle cost assessment to calculate the sustainability indicators of 20 design configurations as Decision Management Objectives (DMOs). Five DMOs emerged as relatively more sustainable along the EE analysis Tradeoff Line, and we used Data Envelopment Analysis (DEA), a widely applied statistical approach, to quantify the modified EE measures as DMO sustainability scores. We also addressed the subjectivity and sensitivity analysis requirements of sustainability analysis, and we evaluated the performance of 10 weighting schemes that included classical DEA, equal weights, National Institute of Standards and Technology's stakeholder panel, Eco-Indicator 99, Sustainable Society Foundation's Sustainable Society Index, and 5 derived schemes. We improved upon classical DEA by applying the weighting schemes to identify sustainability scores that ranged from 0.18 to 1.0, avoiding the nonuniqueness problem and revealing the least to most sustainable DMOs. Our methodology provides a more comprehensive view of water resource management and is generally applicable to GI and industrial, environmental, and engineered systems to explore the sustainability space of alternative design configurations. Integr Environ Assess Manag 2017;13:821-831. Published 2017. This article is a US Government work and is in the public domain in the USA. 
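The core eco-efficiency idea (value delivered per unit of weighted environmental impact, compared across DMOs under a chosen weighting scheme) can be sketched minimally; all figures below are invented placeholders, not the study's data:

```python
# Minimal eco-efficiency sketch: score each design configuration (DMO) as
# delivered value per unit of weighted environmental impact. The DMOs,
# indicator values, and weights are invented placeholders.

dmos = {
    "RWH-small": {"value": 10.0, "energy": 2.0, "water": 1.0},
    "RWH-large": {"value": 18.0, "energy": 5.0, "water": 2.0},
    "no-action": {"value": 4.0, "energy": 1.0, "water": 4.0},
}
weights = {"energy": 0.5, "water": 0.5}   # one of many possible schemes

def eco_efficiency(dmo):
    impact = sum(weights[k] * dmo[k] for k in weights)
    return dmo["value"] / impact

scores = {name: eco_efficiency(d) for name, d in dmos.items()}
best = max(scores, key=scores.get)
```

The study's sensitivity analysis amounts to re-running such a scoring under ten different weighting schemes and checking how the ranking of DMOs along the trade-off line changes; DEA additionally lets each DMO choose its own most favourable weights, which is the nonuniqueness problem the modified approach addresses.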

  2. Characterization of Genes for Beef Marbling Based on Applying Gene Coexpression Network

    Directory of Open Access Journals (Sweden)

    Dajeong Lim

    2014-01-01

    Full Text Available Marbling is an important trait in characterizing beef quality and a major factor in determining the price of beef in the Korean beef market. In particular, marbling is a complex trait and needs a system-level approach for identifying candidate genes related to it. To find the candidate genes associated with marbling, we used weighted gene coexpression network analysis on the expression values of bovine genes. Hub genes were identified; they were topologically central, with large degree and BC values in the global network. We performed gene expression analysis to detect candidate genes in M. longissimus with divergent marbling phenotypes (marbling scores 2 to 7) using qRT-PCR. The results demonstrate that transmembrane protein 60 (TMEM60) and dihydropyrimidine dehydrogenase (DPYD) are associated with increasing marbling fat. We suggest that the network-based approach in livestock may be an important method for analyzing the complex effects of candidate genes associated with complex traits like marbling or tenderness.
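The coexpression-network idea can be illustrated in miniature: correlate expression profiles, keep strongly correlated pairs as edges, and look for hubs by degree. The gene names below are borrowed from the abstract, but the expression values are invented:

```python
import math
from itertools import combinations

# Sketch of a coexpression network: Pearson-correlate gene expression
# profiles and connect genes whose |r| exceeds a threshold. Values invented.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

expr = {                               # gene -> expression across samples
    "TMEM60": [1.0, 2.0, 3.0, 4.0],
    "DPYD":   [1.1, 2.1, 2.9, 4.2],    # tracks TMEM60 closely
    "GENE_X": [4.0, 1.0, 3.5, 0.5],    # unrelated profile
}

edges = [
    (a, b) for a, b in combinations(expr, 2)
    if abs(pearson(expr[a], expr[b])) > 0.9
]
```

The weighted variant used in the paper replaces the hard threshold with a soft power of |r|, so that hub detection is less sensitive to the cutoff choice.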

  3. Upon the opportunity to apply ART2 Neural Network for clusterization of biodiesel fuels

    Directory of Open Access Journals (Sweden)

    Petkov T.

    2016-03-01

    Full Text Available A chemometric approach using an artificial neural network for the clusterization of biodiesels was developed, based on the artificial ART2 neural network. Gas chromatography (GC) and gas chromatography-mass spectrometry (GC-MS) were used for quantitative and qualitative analysis of biodiesels produced from different feedstocks, and FAME (fatty acid methyl ester) profiles were determined. In total, 96 analytical results for 7 different classes of biofuel plants (sunflower, rapeseed, corn, soybean, palm, peanut, “unknown”) were used as objects. The analysis of the biodiesels showed the content of five major FAMEs (C16:0, C18:0, C18:1, C18:2, C18:3), and these components were used as inputs in the model. After training with 6 samples for which the origin was known, the ANN was verified and tested with ninety “unknown” samples. The present research demonstrated the successful application of a neural network for the recognition of biodiesels according to their feedstock, which gives information about their properties and handling.
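A full ART2 network is more involved, but the vigilance idea it rests on (a sample joins the best-matching prototype only if its similarity passes a vigilance threshold, otherwise it founds a new category) can be sketched loosely, with invented FAME-like profiles:

```python
import math

# Highly simplified ART-style clusterer: a sample joins the closest
# prototype if cosine similarity passes the vigilance test, otherwise a
# new cluster is committed. This mimics ART2's vigilance idea only loosely;
# the 5-component profiles below are invented, not the paper's data.

def similarity(a, b):
    """Cosine similarity between two FAME profiles."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def art_cluster(samples, vigilance=0.99):
    prototypes, labels = [], []
    for s in samples:
        sims = [similarity(s, p) for p in prototypes]
        if sims and max(sims) >= vigilance:
            labels.append(sims.index(max(sims)))   # resonance: join category
        else:
            prototypes.append(list(s))             # commit a new category
            labels.append(len(prototypes) - 1)
    return labels

# Two sunflower-like profiles and one palm-like profile (C16:0 .. C18:3)
profiles = [
    [6.0, 3.5, 25.0, 63.0, 0.5],
    [6.2, 3.4, 24.5, 63.5, 0.4],
    [44.0, 4.5, 39.0, 10.0, 0.2],
]
labels = art_cluster(profiles)
```

Because the FAME profiles of a given feedstock are tightly grouped, even this crude vigilance test separates the palm-like profile from the sunflower-like pair.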

  4. Recon2Neo4j: applying graph database technologies for managing comprehensive genome-scale networks.

    Science.gov (United States)

    Balaur, Irina; Mazein, Alexander; Saqi, Mansoor; Lysenko, Artem; Rawlings, Christopher J; Auffray, Charles

    2017-04-01

    The goal of this work is to offer a computational framework for exploring data from the Recon2 human metabolic reconstruction model. Advanced user access features have been developed using the Neo4j graph database technology and this paper describes key features such as efficient management of the network data, examples of the network querying for addressing particular tasks, and how query results are converted back to the Systems Biology Markup Language (SBML) standard format. The Neo4j-based metabolic framework facilitates exploration of highly connected and comprehensive human metabolic data and identification of metabolic subnetworks of interest. A Java-based parser component has been developed to convert query results (available in the JSON format) into SBML and SIF formats in order to facilitate further results exploration, enhancement or network sharing. The Neo4j-based metabolic framework is freely available from: https://diseaseknowledgebase.etriks.org/metabolic/browser/ . The java code files developed for this work are available from the following url: https://github.com/ibalaur/MetabolicFramework . ibalaur@eisbm.org. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  5. Upon the opportunity to apply ART2 Neural Network for clusterization of biodiesel fuels

    Science.gov (United States)

    Petkov, T.; Mustafa, Z.; Sotirov, S.; Milina, R.; Moskovkina, M.

    2016-03-01

    A chemometric approach using an artificial neural network for the clusterization of biodiesels was developed, based on the artificial ART2 neural network. Gas chromatography (GC) and Gas Chromatography - mass spectrometry (GC-MS) were used for quantitative and qualitative analysis of biodiesels, produced from different feedstocks, and FAME (fatty acid methyl esters) profiles were determined. In total, 96 analytical results for 7 different classes of biofuel plants (sunflower, rapeseed, corn, soybean, palm, peanut, "unknown") were used as objects. The analysis of the biodiesels showed the content of five major FAMEs (C16:0, C18:0, C18:1, C18:2, C18:3), and these components were used as inputs in the model. After training with 6 samples, for which the origin was known, the ANN was verified and tested with ninety "unknown" samples. The present research demonstrated the successful application of a neural network for the recognition of biodiesels according to their feedstock, which gives information about their properties and handling.

  6. Higher-Order Neural Networks Applied to 2D and 3D Object Recognition

    Science.gov (United States)

    Spirkovska, Lilly; Reid, Max B.

    1994-01-01

    A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.
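The built-in invariance of a third-order network comes from weighting triples of input points by a quantity that is unchanged under translation, rotation, and scaling; the interior angles of the triangle formed by each triple are one such quantity. A small sketch (the transform parameters are arbitrary illustrations) verifies this invariance:

```python
import math

def angle_signature(p1, p2, p3):
    """Interior angles of the triangle formed by three points, sorted.
    These are unchanged by translation, in-plane rotation, and scaling,
    which is the invariance a third-order network encodes in its weights."""
    def ang(a, b, c):
        # angle at vertex a between rays a->b and a->c
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - a[0], c[1] - a[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        n = math.hypot(*v1) * math.hypot(*v2)
        return math.acos(max(-1.0, min(1.0, dot / n)))
    return sorted((ang(p1, p2, p3), ang(p2, p1, p3), ang(p3, p1, p2)))

def transform(p, scale, theta, shift):
    """Scale, rotate by theta, then translate a 2D point."""
    c, s = math.cos(theta), math.sin(theta)
    return (scale * (c * p[0] - s * p[1]) + shift[0],
            scale * (s * p[0] + c * p[1]) + shift[1])

tri = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]
moved = [transform(p, scale=2.5, theta=0.7, shift=(10.0, -3.0)) for p in tri]
sig_a = angle_signature(*tri)
sig_b = angle_signature(*moved)
# sig_a and sig_b agree to floating-point precision
```

Because the signature is identical for every scaled, translated, rotated view, a network keyed on it needs only one training view per object class, as the abstract states.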

  7. A modular neural network scheme applied to fault diagnosis in electric power systems.

    Science.gov (United States)

    Flores, Agustín; Quiles, Eduardo; García, Emilio; Morant, Francisco; Correcher, Antonio

    2014-01-01

    This work proposes a new method for fault diagnosis in electric power systems based on neural modules. With this method, the diagnosis is performed by assigning a neural module to each type of component in the electric power system, whether it is a transmission line, bus, or transformer. The neural modules for buses and transformers comprise two diagnostic levels, which take into consideration the logic states of switches and relays, both internal and back-up. The neural module for transmission lines has a third diagnostic level, which takes into account the oscillograms of fault voltages and currents, as well as their frequency spectra, in order to verify whether the transmission line has in fact been subjected to a fault. One important advantage of the proposed diagnostic system is that its implementation does not require a network configurator; it does not depend on the size of the power network, nor does it require retraining of the neural modules if the power network grows, making it possible to apply the system to a single component, a specific area, or the whole power system.
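The modular structure can be pictured as a registry that maps each component type to its own diagnostic routine, so adding components adds module instances without retraining anything network-wide. The relay/switch logic below is a hypothetical stand-in, not the paper's trained neural modules:

```python
# Hypothetical sketch of the per-component-type module dispatch.
def diagnose_bus(state):
    """Two diagnostic levels: internal relays, then back-up relays."""
    if state["internal_relay"]:
        return "fault"
    if state["backup_relay"] and state["breaker_open"]:
        return "suspected fault (back-up)"
    return "healthy"

def diagnose_line(state):
    """Lines add a third level that checks the fault-record waveforms
    before confirming a verdict raised by the relay levels."""
    verdict = diagnose_bus(state)
    if verdict != "healthy" and not state.get("waveform_confirms", False):
        return "healthy (waveform cleared)"
    return verdict

MODULES = {"bus": diagnose_bus, "transformer": diagnose_bus,
           "line": diagnose_line}

def diagnose(component_type, state):
    # adding a new component only adds a lookup, never a retrain
    return MODULES[component_type](state)

verdict = diagnose("line", {"internal_relay": True, "backup_relay": False,
                            "breaker_open": False, "waveform_confirms": True})
```

The point of the sketch is structural: the diagnosis of one component never depends on the size of the rest of the network.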

  8. A Modular Neural Network Scheme Applied to Fault Diagnosis in Electric Power Systems

    Directory of Open Access Journals (Sweden)

    Agustín Flores

    2014-01-01

    Full Text Available This work proposes a new method for fault diagnosis in electric power systems based on neural modules. With this method, the diagnosis is performed by assigning a neural module to each type of component in the electric power system, whether it is a transmission line, bus, or transformer. The neural modules for buses and transformers comprise two diagnostic levels, which take into consideration the logic states of switches and relays, both internal and back-up. The neural module for transmission lines has a third diagnostic level, which takes into account the oscillograms of fault voltages and currents, as well as their frequency spectra, in order to verify whether the transmission line has in fact been subjected to a fault. One important advantage of the proposed diagnostic system is that its implementation does not require a network configurator; it does not depend on the size of the power network, nor does it require retraining of the neural modules if the power network grows, making it possible to apply the system to a single component, a specific area, or the whole power system.

  9. Artificial neural network applied to ONB in vertical narrow annulus experiment

    International Nuclear Information System (INIS)

    Yun Guo; Guanghui Su; Dounan Jia; Jiaqiang Wang

    2005-01-01

    Full text of publication follows: It is very important to study the onset of nucleate boiling (ONB) in narrow channels. Engineering applications of narrow channels are increasingly widespread: they are used in microelectronics, and narrow annular channels are also adopted in the design of new types of heat exchanger. The ONB is usually regarded as the point of demarcation between single-phase and two-phase flow, so it is significant for the judgment of the flow pattern and for engineering design. Although previous research showed that ONB in narrow channels differs from that in ordinary pipes, most studies did not examine the effect of bilateral heating on the ONB. Here, ONB was investigated for water flowing in an annular channel with a 1.2 mm gap over the pressure range from 0.10 to 5.0 MPa. The effects of several parameters on the ONB, such as mass flux, pressure, inlet subcooling, and bilateral heating, were analyzed. However, the experiment did not cover a sufficiently wide range of pressure and mass flux, so artificial neural networks were used to predict the ONB over a wider parameter range. Recently, artificial neural networks (ANNs) have been used widely in the field of reactor thermal-hydraulics because they can solve very complex multivariable and highly non-linear problems; researchers can focus on the output results without needing to know the internal characteristics of the networks. Most such ANNs are used to predict the critical heat flux and other accident-related problems, but small-scale ANNs can also be used in thermal-hydraulic experiments easily. Based on the ONB experimental data, a back-propagation (BP) artificial neural network was built to specify the ONB. Based on a large amount of experimental data, another medium-scale ANN was built to predict the ONB of narrow-gap annular channels. The results are compared with other correlations. It was concluded that the power density of ONB in the

  10. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands

    Science.gov (United States)

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed us to perform several tests in order to evaluate the effects of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied to the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate if larger networks can increase sEMG classification accuracy too. PMID:27656140
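The convolution-pool-classify pipeline the abstract tests can be sketched end to end in a few lines; this toy net, with hand-picked kernels and weights on a six-sample "sEMG window", is only a hedged illustration of the layer structure, not the paper's architecture:

```python
def conv1d(signal, kernel):
    """Valid 1-D convolution (cross-correlation, as in CNN layers)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def mean_pool(xs):
    """Global average pooling over one feature map."""
    return sum(xs) / len(xs)

def tiny_semg_net(window, kernels, weights, bias):
    """One conv layer -> ReLU -> global average pooling -> linear scores.
    The class with the highest score is the predicted movement."""
    feats = [mean_pool(relu(conv1d(window, k))) for k in kernels]
    scores = [sum(w * f for w, f in zip(ws, feats)) + b
              for ws, b in zip(weights, bias)]
    return scores.index(max(scores))

kernels = [[1.0, -1.0], [1.0, 1.0]]   # an edge detector and a smoother
weights = [[0.0, 1.0], [5.0, 0.0]]    # class 0 = steady, class 1 = bursty
bias = [0.0, 0.0]
flat  = tiny_semg_net([1.0, 1.0, 1.0, 1.0, 1.0, 1.0], kernels, weights, bias)
burst = tiny_semg_net([0.0, 1.0, 0.0, 1.0, 0.0, 1.0], kernels, weights, bias)
```

The flat window activates the smoothing kernel and is assigned class 0; the oscillating window activates the edge kernel and is assigned class 1, illustrating how conv features separate signal shapes before the linear classifier.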

  11. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands.

    Science.gov (United States)

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed us to perform several tests in order to evaluate the effects of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied to the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate if larger networks can increase sEMG classification accuracy too.

  12. Hybrid methodology for tuberculosis incidence time-series forecasting based on ARIMA and a NAR neural network.

    Science.gov (United States)

    Wang, K W; Deng, C; Li, J P; Zhang, Y Y; Li, X Y; Wu, M C

    2017-04-01

    Tuberculosis (TB) affects people globally and is being reconsidered as a serious public health problem in China. Reliable forecasting is useful for the prevention and control of TB. This study proposes a hybrid model combining autoregressive integrated moving average (ARIMA) with a nonlinear autoregressive (NAR) neural network for forecasting the incidence of TB from January 2007 to March 2016. Prediction performance was compared between the hybrid model and the ARIMA model. The best-fit hybrid model combined an ARIMA (3,1,0) × (0,1,1)12 model with a NAR neural network with four delays and 12 neurons in the hidden layer. The ARIMA-NAR hybrid model, which exhibited lower mean square error, mean absolute error, and mean absolute percentage error (0.2209, 0.1373, and 0.0406, respectively) in the modelling performance, could produce more accurate forecasts of TB incidence than the ARIMA model. This study shows that developing and applying the ARIMA-NAR hybrid model is an effective method to fit the linear and nonlinear patterns of time-series data, and this model could be helpful in the prevention and control of TB.
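The hybrid idea is that ARIMA captures the linear structure L_t, a NAR net models the residuals e_t = y_t - L_t, and the forecast is their sum; the three error metrics above then compare models. The sketch below uses invented toy numbers (the fitting of each component is omitted):

```python
def mse(y, yhat):
    """Mean square error."""
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def mape(y, yhat):
    """Mean absolute percentage error (as a fraction)."""
    return sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

def hybrid_forecast(linear_part, residual_part):
    """ARIMA models the linear structure; the NAR net models the
    residuals e_t = y_t - L_t. The hybrid forecast is their sum."""
    return [l + r for l, r in zip(linear_part, residual_part)]

y       = [100.0, 110.0, 105.0]   # observed incidence (toy values)
arima   = [ 98.0, 112.0, 101.0]   # linear component L_t
nar_res = [  1.5,  -1.0,   3.0]   # NAR prediction of the residual
hybrid  = hybrid_forecast(arima, nar_res)
# hybrid = [99.5, 111.0, 104.0]; its MAE is lower than ARIMA's alone
```

When the residual model captures real nonlinear structure, the hybrid's error metrics improve over the linear model alone, which is the effect the study reports.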

  13. Applying Bayesian neural networks to event reconstruction in reactor neutrino experiments

    International Nuclear Information System (INIS)

    Xu Ye; Xu Weiwei; Meng Yixiong; Zhu Kaien; Xu Wei

    2008-01-01

    A toy detector has been designed in this paper to simulate the central detectors in reactor neutrino experiments. Electron samples from the Monte-Carlo simulation of the toy detector were reconstructed with the method of Bayesian neural networks (BNNs) and with the standard algorithm, a maximum likelihood method (MLD), respectively. The result of the event reconstruction using the BNN was compared with the one using the MLD. Compared to the MLD, the uncertainties of the electron vertex are not improved, but the energy resolutions are significantly improved using the BNN, and the improvement is more pronounced for high-energy electrons than for low-energy ones.

  14. The Technology of Suppressing Harmonics with Complex Neural Network is Applied to Microgrid

    Science.gov (United States)

    Zhang, Jing; Li, Zhan-Ying; Wang, Yan-ping; Li, Yang; Zong, Ke-yong

    2018-03-01

    According to the characteristics of harmonics in a microgrid, a new CANN controller combining BP and RBF neural networks is proposed to control an APF to detect and suppress harmonics. This controller has a current-prediction function. Simulation in Matlab/Simulink shows that this design can shorten the delay time by nearly 0.02 s (one supply-current cycle) in comparison with the traditional controller based on the ip-iq method. The new controller also has higher compensation accuracy and better dynamic tracking characteristics; it can greatly suppress harmonics and improve power quality.

  15. Connectivity strategies for higher-order neural networks applied to pattern recognition

    Science.gov (United States)

    Spirkovska, Lilly; Reid, Max B.

    1990-01-01

    Different strategies for non-fully connected HONNs (higher-order neural networks) are discussed, showing that by using such strategies an input field of 128 x 128 pixels can be attained while still achieving in-plane rotation and translation-invariant recognition. These techniques allow HONNs to be used with the larger input scenes required for practical pattern-recognition applications. The number of interconnections that must be stored has been reduced by a factor of approximately 200,000 in a T/C case and about 2000 in a Space Shuttle/F-18 case by using regional connectivity. Third-order networks have been simulated using several connection strategies. The method found to work best is regional connectivity. The main advantages of this strategy are the following: (1) it considers features of various scales within the image and thus gets a better sample of what the image looks like; (2) it is invariant to shape-preserving geometric transformations, such as translation and rotation; (3) the connections are predetermined so that no extra computations are necessary during run time; and (4) it does not require any extra storage for recording which connections were formed.
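The storage savings of regional connectivity come down to counting triples: a fully connected third-order network stores one weight per 3-pixel combination, while a regional scheme only forms triples within regions. The region count below is an invented example, not the paper's configuration, so the resulting factor differs from the reported ~200,000:

```python
import math

def full_triples(n_pixels):
    """Third-order interconnections in a fully connected HONN: C(n, 3)."""
    return math.comb(n_pixels, 3)

def regional_triples(n_pixels, n_regions):
    """Hypothetical regional scheme: pixels split into equal regions,
    triples formed only within a region, cutting storage sharply."""
    per = n_pixels // n_regions
    return n_regions * math.comb(per, 3)

n = 128 * 128                        # 16384-pixel input field
full = full_triples(n)
regional = regional_triples(n, 64)   # 64 regions of 256 pixels each
reduction = full / regional
```

Even this crude partition reduces the weight count by three to four orders of magnitude, which is why a 128 x 128 input field becomes tractable.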

  16. Modulation of Cortical-subcortical Networks in Parkinson’s Disease by Applied Field Effects

    Directory of Open Access Journals (Sweden)

    Christopher William Hess

    2013-09-01

    Full Text Available Studies suggest that endogenous field effects may play a role in neuronal oscillations and communication. Non-invasive transcranial electrical stimulation with low-intensity currents can also have direct effects on the underlying cortex as well as distant network effects. While Parkinson's disease (PD) is amenable to invasive neuromodulation in the basal ganglia by deep brain stimulation, techniques of non-invasive neuromodulation like transcranial direct current stimulation (tDCS) and transcranial alternating current stimulation (tACS) are being investigated as possible therapies. tDCS and tACS have the potential to influence the abnormal cortical-subcortical network activity that occurs in PD through sub-threshold changes in cortical excitability or through entrainment or disruption of ongoing rhythmic cortical activity. This may allow for the targeting of specific features of the disease involving abnormal oscillatory activity, as well as the enhancement of potential cortical compensation for basal ganglia dysfunction and modulation of cortical plasticity in neurorehabilitation. However, little is currently known about how cortical stimulation will affect subcortical structures, the size of any effect, and the factors of stimulation that will influence these effects.

  17. Comparing trainers’ reports of clicker use to the use of clickers in applied research studies: methodological differences may explain conflicting results

    Directory of Open Access Journals (Sweden)

    Lynna C Feng

    2017-02-01

    Full Text Available Clicker training refers to an animal training technique, derived from laboratory-based studies of animal learning and behaviour, in which a reward-predicting signal is delivered immediately following performance of a desired behaviour, and is subsequently followed by a reward. While clicker training is popular amongst dog training practitioners, scientific evaluation in applied settings has been largely unsuccessful in replicating the benefits of reward-predicting signals seen in laboratory animal studies. Here we present an analysis of dog trainers' advice and perceptions, conducted to better understand clicker training as it occurs in the dog training industry. Twenty-five sources (13 interviews with dog trainers, 5 websites, and 7 books) were analysed using a deductive content analysis procedure. We found that, for many sources, "clicker training" referred not only to the technique, but also to a philosophy of training that emphasises positive reinforcement and the deliberate application of Learning Theory principles. Many sources reported that clicker training was fun, for both dog and handler, but that it could be frustrating for handlers to learn and sometimes cumbersome to juggle the extra equipment. In addition, while most sources recommended clicker training particularly when training new behaviours, many stated that it was no longer needed once the dog had learned the desired behaviour. When comparing industry recommendations to methods used in applied studies, different criteria were used for predictor signal conditioning. Inadequate conditioning of the predictor signal in empirical evaluations could partly explain the lack of learning benefits in applied studies. While future research is needed to verify the practitioner beliefs in a wider population, these results provide an in-depth description of what clicker training is, at least for the sources analysed, and a potential starting point for understanding methodological

  18. Applying a Cerebellar Model Articulation Controller Neural Network to a Photovoltaic Power Generation System Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Kuei-Hsiang Chao

    2013-01-01

    Full Text Available This study employed a cerebellar model articulation controller (CMAC) neural network to conduct fault diagnoses on photovoltaic power generation systems. We composed a module array using 9 series and 2 parallel connections of SHARP NT-R5E3E 175 W photovoltaic modules. In addition, we used data outputted under various fault conditions as the training samples for the CMAC, and used this model to conduct the module-array fault diagnosis after completing the training. The results of the training process and simulations indicate that the method proposed in this study requires fewer training iterations than other methods. In addition to significantly increasing the accuracy rate of the fault diagnosis, this model features a short training duration because the training process only tunes the weights of the excited memory addresses. Therefore, the fault diagnosis is rapid, and the detection tolerance of the diagnosis system is enhanced.
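The fast training the abstract credits to CMAC comes from its addressing scheme: each input excites a handful of memory addresses via overlapping tilings, and a training step only adjusts those weights. A minimal one-dimensional sketch (tiling sizes, learning rate, and the "fault signature" inputs are invented for illustration):

```python
def active_addresses(x, n_tilings=4, tile_width=4):
    """Map a scalar input to one memory address per overlapping tiling."""
    return [(t, (int(x) + t * tile_width // n_tilings) // tile_width)
            for t in range(n_tilings)]

class CMAC:
    def __init__(self, n_tilings=4, tile_width=4, lr=0.25):
        self.w = {}                       # sparse weight memory
        self.n, self.width, self.lr = n_tilings, tile_width, lr

    def predict(self, x):
        return sum(self.w.get(a, 0.0)
                   for a in active_addresses(x, self.n, self.width))

    def train(self, x, target):
        # only the excited addresses are tuned -> very fast training
        err = target - self.predict(x)
        for a in active_addresses(x, self.n, self.width):
            self.w[a] = self.w.get(a, 0.0) + self.lr * err / self.n

net = CMAC()
for _ in range(50):
    net.train(3, 1.0)    # e.g. a "fault class" signature
    net.train(20, 0.0)   # a healthy signature
```

After training, inputs near 3 still score high because they share most of their excited addresses with 3 (local generalization), while 20 stays near zero, which is the behavior a CMAC-based fault classifier relies on.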

  19. Neural networks applied to characterize blends containing refined and extra virgin olive oils.

    Science.gov (United States)

    Aroca-Santos, Regina; Cancilla, John C; Pariente, Enrique S; Torrecilla, José S

    2016-12-01

    The identification and quantification of binary blends of refined olive oil with four different extra virgin olive oil (EVOO) varietals (Picual, Cornicabra, Hojiblanca and Arbequina) was carried out with a simple method based on combining visible spectroscopy and non-linear artificial neural networks (ANNs). The data obtained from the spectroscopic analysis was treated and prepared to be used as independent variables for a multilayer perceptron (MLP) model. The model was able to perfectly classify the EVOO varietal (100% identification rate), whereas the error for the quantification of EVOO in the mixtures containing between 0% and 20% of refined olive oil, in terms of the mean prediction error (MPE), was 2.14%. These results turn visible spectroscopy and MLP models into a trustworthy, user-friendly, low-cost technique which can be implemented on-line to characterize olive oil mixtures containing refined olive oil and EVOOs. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. African American Social Networking Online: Applying a Digital Practice Approach to Understanding Digital Inequalities

    Directory of Open Access Journals (Sweden)

    Danielle Taana Smith

    2013-06-01

    Full Text Available This study develops a framework for systematic examination of information and communication technologies (ICTs usage differences within a group. This framework situates the digital divide and digital inequalities model within a broader conceptual model of digital practice, exemplified by how groups of people use ICTs. I use nationally representative data to examine online activities on social networking sites (SNS for African Americans and other ethnoracial groups. The data for this research comes from the Pew Internet and American Life’s “Spring Tracking Survey 2008”. The results from regression analyses support the digital practice framework which moves discussions of ICT usage beyond social and economic advantages or disadvantages, and addresses individual and group needs in using these technologies.

  1. Neural networks applied to determine the thermophysical properties of amino acid based ionic liquids.

    Science.gov (United States)

    Cancilla, John C; Perez, Ana; Wierzchoś, Kacper; Torrecilla, José S

    2016-03-14

    A series of models based on artificial neural networks (ANNs) have been designed to estimate the thermophysical properties of different amino acid-based ionic liquids (AAILs). Three different databases of AAILs were modeled using these algorithms with the goal set to estimate the density, viscosity, refractive index, ionic conductivity, and thermal expansion coefficient, and requiring only data regarding temperature and electronic polarizability of the chemicals. Additionally, a global model was designed combining all of the databases to determine the robustness of the method. In general, the results were successful, reaching mean prediction errors below 1% in many cases, as well as a statistically reliable and accurate global model. Attaining these successful models is a relevant fact as AAILs are novel biodegradable and biocompatible compounds which may soon make their way into the health sector forming a part of useful biomedical applications. Therefore, understanding the behavior and being able to estimate their thermophysical properties becomes crucial.

  2. Actor-Network Theory as a sociotechnical lens to explore the relationship of nurses and technology in practice: methodological considerations for nursing research.

    Science.gov (United States)

    Booth, Richard G; Andrusyszyn, Mary-Anne; Iwasiw, Carroll; Donelle, Lorie; Compeau, Deborah

    2016-06-01

    Actor-Network Theory is a research lens that has gained popularity in the nursing and health sciences domains. The perspective allows a researcher to describe the interaction of actors (both human and non-human) within networked sociomaterial contexts, including complex practice environments where nurses and health technology operate. This study will describe Actor-Network Theory and provide methodological considerations for researchers who are interested in using this sociotechnical lens within nursing and informatics-related research. Considerations related to technology conceptualization, levels of analysis, and sampling procedures in Actor-Network Theory based research are addressed. Finally, implications for future nursing research within complex environments are highlighted. © 2015 John Wiley & Sons Ltd.

  3. Methodological framework for economical and controllable design of heat exchanger networks: Steady-state analysis, dynamic simulation, and optimization

    International Nuclear Information System (INIS)

    Masoud, Ibrahim T.; Abdel-Jabbar, Nabil; Qasim, Muhammad; Chebbi, Rachid

    2016-01-01

    Highlights: • HEN total annualized cost, heat recovery, and controllability are considered in the framework. • Steady-state and dynamic simulations are performed. • Effect of bypass on total annualized cost and controllability is reported. • Optimum bypass fractions are found from closed and open-loop efforts. - Abstract: The problem of interaction between economic design and control system design of heat exchanger networks (HENs) is addressed in this work. The controllability issues are incorporated in the classical design of HENs. A new methodological framework is proposed to account for both economics and controllability of HENs. Two classical design methods are employed, namely, Pinch and superstructure designs. Controllability measures such as relative gain array (RGA) and singular value decomposition (SVD) are used. The proposed framework also presents a bypass placement strategy for optimal control of the designed network. A case study is used to test the applicability of the framework and to assess both economics and controllability. The results indicate that the superstructure design is more economical and controllable compared to the Pinch design. The controllability of the designed HEN is evaluated using Aspen-HYSYS closed-loop dynamic simulator. In addition, a sensitivity analysis is performed to study the effect of bypass fractions on the total annualized cost and controllability of the designed HEN. The analysis shows that increasing any bypass fraction increases the total annualized cost. However, the trend with the total annualized cost was not observed with respect to the control effort manifested by minimizing the integral of the squared errors (ISE) between the controlled stream temperatures and their targets (set-points). An optimal ISE point is found at a certain bypass fraction, which does not correspond to the minimal total annualized cost. The bypass fractions are validated via open-loop simulation and the additional cooling and
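The relative gain array used above as a controllability measure is Λ = G ⊗ (G⁻¹)ᵀ, the element-wise product of the steady-state gain matrix with the transpose of its inverse; elements near 1 suggest good input-output pairings. A pure-Python sketch for the 2x2 case (the gain values are invented, not from the case study):

```python
def rga_2x2(g):
    """Relative gain array Λ = G ⊗ (G^{-1})^T for a 2x2 steady-state
    gain matrix; Λ[i][j] near 1 suggests pairing input j with output i."""
    (a, b), (c, d) = g
    det = a * d - b * c
    inv_t = [[d / det, -c / det], [-b / det, a / det]]  # (G^-1)^T
    return [[g[i][j] * inv_t[i][j] for j in range(2)] for i in range(2)]

G = [[2.0, 0.5], [0.5, 1.0]]   # hypothetical stream-temperature gains
L = rga_2x2(G)
# each row and each column of an RGA sums to 1
```

Diagonal entries close to 1 (here 8/7) indicate weak loop interaction, so the diagonal pairing of manipulated and controlled variables is reasonable for this G.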

  4. A k-mer-based barcode DNA classification methodology based on spectral representation and a neural gas network.

    Science.gov (United States)

    Fiannaca, Antonino; La Rosa, Massimo; Rizzo, Riccardo; Urso, Alfonso

    2015-07-01

    In this paper, an alignment-free method for DNA barcode classification that is based on both a spectral representation and a neural gas network for unsupervised clustering is proposed. In the proposed methodology, distinctive words are identified from a spectral representation of DNA sequences. A taxonomic classification of the DNA sequence is then performed using the sequence signature, i.e., the smallest set of k-mers that can assign a DNA sequence to its proper taxonomic category. Experiments were then performed to compare our method with other supervised machine learning classification algorithms, such as support vector machine, random forest, ripper, naïve Bayes, ridor, and classification tree, which also consider short DNA sequence fragments of 200 and 300 base pairs (bp). The experimental tests were conducted over 10 real barcode datasets belonging to different animal species, which were provided by the on-line resource "Barcode of Life Database". The experimental results showed that our k-mer-based approach is directly comparable, in terms of accuracy, recall and precision metrics, with the other classifiers when considering full-length sequences. In addition, we demonstrate the robustness of our method when a classification task is performed with a set of short DNA sequences that were randomly extracted from the original data. For example, the proposed method can reach an accuracy of 64.8% at the species level with 200-bp fragments. Under the same conditions, the best other classifier (random forest) reaches an accuracy of 20.9%. Our results indicate that we obtained a clear improvement over the other classifiers for the study of short DNA barcode sequence fragments. Copyright © 2015 Elsevier B.V. All rights reserved.
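The spectral representation underlying the method is just the vector of overlapping k-mer counts; a crude stand-in for the sequence signature is then a small set of the most distinctive k-mers. A minimal sketch (the `signature` heuristic here is an illustrative simplification, not the paper's minimal-set construction):

```python
from collections import Counter

def kmer_spectrum(seq, k):
    """Spectral representation: counts of every overlapping k-mer."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def signature(spectrum, top=3):
    """Crude stand-in for the paper's sequence signature: the most
    frequent k-mers, used to index the taxonomic category."""
    return {kmer for kmer, _ in spectrum.most_common(top)}

spec = kmer_spectrum("ATGATGCCATG", 3)
# "ATG" occurs 3 times and dominates the spectrum
```

Because the spectrum needs no alignment, it applies unchanged to the randomly extracted 200-bp and 300-bp fragments tested in the paper.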

  5. Artificial neural network analysis applied to simplifying bioeffect radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Purnomo, A.B.

    2001-01-01

    Full text: A bioeffect planning system has been developed by Wigg and Nicholls in the Departments of Clinical Radiobiology and Medical Physics at the Royal Adelaide Hospital. The system has been developed as an experimental tool by means of which bioeffect plans may be compared with conventional isodose plans in radiotherapy. Limitations of isodose planning in many common clinical circumstances have been apparent for some time (Wigg and Wilson, Australasian Radiology, 1981, 25: 205-212). There are many reasons why bioeffect planning has been slow to develop. These include concerns about the clinical application of theoretical radiobiology models, the uncertainty of normal tissue and tumour parameter values, and the non-availability of suitable computer systems capable of performing bioeffect planning. These concerns are fully justified, and isodose planning must remain, for the foreseeable future, the gold standard for clinical treatment. However, they must be judged against the certainty that isodose planning, in which the only variable usually considered is the total dose, can be substantially misleading. Unfortunately, a typical Tumour Control Probability (TCP) equation for bioeffect planning is complex, with 12 parameters, and is consequently difficult to implement in practice. Can the equation be simplified by ignoring the variability of some of the parameters? To test this possibility, we attempted a neural network analysis of the problem. The capability of artificial neural network (ANN) analysis to solve classification problems was explored, and a weight-space analysis was conducted that led to a reduction in the number of parameters. The training data for the ANN analysis were generated using the above equation and practical data from many publications. The performance of the optimized ANN and the reduced-parameter ANN was tested using other treatment data. The optimized ANN results closely matched those of the
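For orientation, a common skeleton of a TCP equation is the Poisson model with linear-quadratic cell kill; the full 12-parameter planning equation adds terms (repopulation, heterogeneity, etc.) on top of this core. The sketch below keeps only five parameters, all with invented illustrative values, and is not the Wigg-Nicholls equation itself:

```python
import math

def tcp_lq_poisson(n0, alpha, beta, dose_per_fraction, n_fractions):
    """Poisson TCP with linear-quadratic cell kill (a minimal sketch):
       surviving = N0 * exp(-n * (alpha*d + beta*d^2)),
       TCP       = exp(-surviving)."""
    d, n = dose_per_fraction, n_fractions
    surviving = n0 * math.exp(-n * (alpha * d + beta * d * d))
    return math.exp(-surviving)

# Illustrative parameters: 1e9 clonogens, alpha = 0.3 /Gy, beta = 0.03 /Gy^2
low  = tcp_lq_poisson(1e9, 0.3, 0.03, 2.0, 20)   # 40 Gy in 2 Gy fractions
high = tcp_lq_poisson(1e9, 0.3, 0.03, 2.0, 35)   # 70 Gy in 2 Gy fractions
```

The steep dose response (near zero at 40 Gy, near one at 70 Gy for these parameters) is exactly the behavior total-dose-only isodose planning cannot express, which motivates bioeffect planning.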

  6. Experience of a Neural Network Imitator Applied to Diagnosis of Pre-pathological Conditions in Humans

    International Nuclear Information System (INIS)

    Belyashov, D.N.; Emelyanova, I.V.; Tichshenko, A.V.; Makarenko, N.G.; Sultanova, B.G.

    1998-01-01

    The Governmental Resolution of the RK 'Program of Medical Rehabilitation for People Influenced by Nuclear Tests at STS in 1949-1990' was published in March 1997. Implementation of the program requires, first of all, the creation of effective methods for rapid diagnostics of the population of arid zones. In our view, systems analysis with elements of neural network classification is most effective for this purpose. We demonstrate such an approach using the example of a modern diagnostics system created to detect pre-pathological states in the population by express analysis and personal particulars. The following considerations underlie the training set: 1) any formalism must be based on a wealth of phenomenology (experience, intuition, the presence of symptoms); 2) typical attributes of disease can be divided into two groups, subjective and objective. The first group characterizes the general state of the patient and may have no direct connection with the disease. The second is obtained by laboratory examination and is not connected with the patient's sensations. Each of the objective attributes can be an attribute of several illnesses at once, so both the subjective and objective features must be used together; 3) the acceptability of any scheme can be substantiated only statistically. The question of the justifiability and sufficiency of the training set always demands separate discussion. Personal particulars are more readily available for creating the training set. The set must be professionally oriented in order to reduce selection effects. For our experiment the fully connected neural network 'Multi Neuron' (computer software imitating the work of a neural computer) was chosen. The feature space used for the network was created from the 206 personal particulars. The research aimed to determine pre-pathological states of the urinary-system organs among industrial, office and professional workers in the mining industry connected with phosphorus

  7. Artificial neural networks applied to DNBR calculation in digital core protection systems

    International Nuclear Information System (INIS)

    Lee, H. C.; Chang, S. H.

    2003-01-01

    The nuclear power plant has to be operated with sufficient margin from the specified DNBR limit to assure its safety. The digital core protection system calculates on-line real-time DNBR using a complex subchannel analysis program, and triggers a reliable reactor shutdown if the calculated DNBR approaches the specified limit. However, the calculation takes a relatively long time even for a steady-state condition, which may have an adverse effect on operational flexibility. To overcome this drawback, a method using artificial neural networks is studied in this paper. A nonparametric training approach is utilized, which shows a dramatic reduction of the training time, no tedious heuristic process for optimizing parameters, and no local-minima problem during training. The test results show that the predicted DNBR is within about ±2% deviation from the target DNBR for the fixed axial flux shape case. For the variable axial flux case, including severely skewed shapes appearing during accidents, the deviation is about ±10-15%. The suggested method could be an alternative that calculates DNBR very quickly while increasing plant availability

  8. Radial basis function networks applied to DNBR calculation in digital core protection systems

    International Nuclear Information System (INIS)

    Lee, Gyu-Cheon; Heung Chang, Soon

    2003-01-01

    The nuclear power plant has to be operated with sufficient margin from the specified DNBR limit to assure its safety. The digital core protection system calculates on-line real-time DNBR using a complex subchannel analysis program, and triggers a reliable reactor shutdown if the calculated DNBR approaches the specified limit. However, the calculation takes a relatively long time even for a steady-state condition, which may have an adverse effect on operational flexibility. To overcome this drawback, a new method using a radial basis function network is presented in this paper. A nonparametric training approach is utilized, which shows a dramatic reduction of the training time, no tedious heuristic process for optimizing parameters, and no local-minima problem during training. The test results show that the predicted DNBR is within about ±2% deviation from the target DNBR for the fixed axial flux shape case. For the variable axial flux case, including severely skewed shapes that appeared during accidents, the deviation is within about ±10%. The suggested method could be an alternative that calculates DNBR very quickly while guaranteeing plant safety
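
    A nonparametric RBF network of this kind (Gaussian hidden units with output weights obtained by a single least-squares solve, so there is no iterative training and no local-minima problem) can be sketched with NumPy on synthetic data. The toy two-input function, the number of centers and the kernel width below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the DNBR mapping: a smooth nonlinear function of two
# normalized core-state inputs (purely illustrative, not plant data).
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2

# RBF centers picked from the training points, with a shared width.
centers = X[rng.choice(len(X), 20, replace=False)]
width = 0.5

def design_matrix(X, centers, width):
    """Gaussian RBF activations for every (sample, center) pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

# "Nonparametric" training: the output weights come from one linear
# least-squares solve, so there is no iterative descent and no local minima.
Phi = design_matrix(X, centers, width)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Prediction on new inputs is a single matrix product, which is why an
# RBF surrogate can respond much faster than a full subchannel code.
X_new = rng.uniform(-1, 1, size=(50, 2))
y_pred = design_matrix(X_new, centers, width) @ w
print(float(np.abs(Phi @ w - y).mean()))  # training fit error
```

Once trained offline against the subchannel code, evaluating such a surrogate is a few matrix products, consistent with the real-time requirement described in the abstract.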

  9. A hybrid ARIMA and neural network model applied to forecast catch volumes of Selar crumenophthalmus

    Science.gov (United States)

    Aquino, Ronald L.; Alcantara, Nialle Loui Mar T.; Addawe, Rizavel C.

    2017-11-01

    The Selar crumenophthalmus, known in English as the big-eyed scad and locally as matang-baka, is one of the fishes commonly caught along the waters of La Union, Philippines. The study deals with forecasting the catch volumes of big-eyed scad for commercial consumption. The data used are quarterly catch volumes of big-eyed scad from 2002 to the first quarter of 2017. These data are available from the OpenSTAT database published by the Philippine Statistics Authority (PSA), whose task is to collect, compile, analyze and publish information concerning different aspects of the Philippine setting. Autoregressive Integrated Moving Average (ARIMA) models, an Artificial Neural Network (ANN) model and a hybrid model combining ARIMA and ANN were developed to forecast the catch volumes of big-eyed scad. Statistical errors such as the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) were computed and compared to choose the most suitable model for forecasting the catch volume for the next few quarters. A comparison of the results of each model and the corresponding statistical errors reveals that the hybrid model, ARIMA-ANN (2,1,2)(6:3:1), is the most suitable for forecasting the catch volumes of big-eyed scad for the next few quarters.
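
    The hybrid idea (a linear time-series model captures the linear structure, and a neural network is then fitted to its residuals) can be sketched in NumPy. The sketch below substitutes a plain AR(p) fit for the ARIMA stage and a network with random hidden weights and a least-squares readout for the ANN stage, on a synthetic series; the series, lag order and layer size are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic quarterly series with a nonlinear lag dependence that a
# purely linear model cannot capture (illustrative, not PSA data).
n = 120
series = np.zeros(n)
series[:2] = 50.0
noise = rng.normal(0, 1.0, n)
for i in range(2, n):
    series[i] = (10 + 0.6 * series[i - 1] - 0.2 * series[i - 2]
                 + 3 * np.tanh(series[i - 1] - series[i - 2]) + noise[i])

p = 4  # lag order used by both stages

def lag_matrix(x, p):
    """Rows of p consecutive lags, aligned with the value that follows them."""
    return np.stack([x[i:i + p] for i in range(len(x) - p)]), x[p:]

# Stage 1: linear AR(p) fit, standing in for ARIMA's linear component.
X, y = lag_matrix(series, p)
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
linear_pred = A @ coef
resid = y - linear_pred

# Stage 2: a single-hidden-layer network models the residuals; random
# hidden weights plus a least-squares readout keep the sketch simple.
Xr, yr = lag_matrix(resid, p)
H = np.tanh(Xr @ rng.normal(0, 1.0, (p, 16)) + rng.normal(0, 1.0, 16))
Hb = np.hstack([H, np.ones((len(H), 1))])
w2, *_ = np.linalg.lstsq(Hb, yr, rcond=None)

# Hybrid forecast = linear forecast + the network's residual correction.
hybrid = linear_pred[p:] + Hb @ w2
rmse_linear = np.sqrt(np.mean((linear_pred[p:] - y[p:]) ** 2))
rmse_hybrid = np.sqrt(np.mean((hybrid - y[p:]) ** 2))
print(rmse_hybrid <= rmse_linear)  # prints True: the correction cannot hurt the in-sample fit
```

In the paper's setup the linear stage is the fitted ARIMA(2,1,2) model and the residual stage is the 6:3:1 network; out-of-sample forecasts would apply both fitted stages recursively to the most recent lags.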

  10. Artificial neural networks applied to the prediction of spot prices in the market of electric energy

    International Nuclear Information System (INIS)

    Rodrigues, Alcantaro Lemes; Grimoni, Jose Aquiles Baesso

    2010-01-01

    The commercialization of electricity in Brazil, as in the rest of the world, has undergone several changes over the past 20 years. In order to achieve an economic balance between supply and demand of the good called electricity, stakeholders in this market follow both rules set by society (government, companies and consumers) and rules set by the laws of nature (hydrology). To deal with such complex issues, various studies have been conducted in the area of computational heuristics. This work aims to develop software to forecast spot market prices using artificial neural networks (ANNs). ANNs are widely used in various applications, especially in computational heuristics, where non-linear systems pose computational challenges that are difficult to overcome because of the effect named the 'curse of dimensionality'. This effect is due to the fact that current computational power is not enough to handle problems with such a high combination of variables. The challenge of forecasting prices depends on factors such as: (a) the forecast of demand evolution (electric load); (b) the forecast of supply (reservoirs, hydrology and climate) and the capacity factor; and (c) the balance of the economy (pricing, auctions, foreign market influence, economic policy, government budget and government policy). These factors are considered in the forecasting model for spot market prices, and the results of its effectiveness are tested and presented. (author)

  11. Classification by a neural network approach applied to non destructive testing

    International Nuclear Information System (INIS)

    Lefevre, M.; Preteux, F.; Lavayssiere, B.

    1995-01-01

    Radiography is used by EDF for pipe inspection in nuclear power plants in order to detect defects. The radiographs obtained are then digitized following a well-defined protocol. EDF's aim is to develop a non destructive testing system for recognizing defects. In this paper, we describe the recognition procedure for areas with defects. We first present the digitization protocol, characterize the poor quality of the images under study and propose a procedure to enhance defects. We then examine the problem raised by the choice of good features for classification. After proving that statistical or standard textural features such as homogeneity, entropy or contrast are not relevant, we develop a geometrical-statistical approach based on the cooperation between a study of signal correlations and an analysis of regional extrema. The principle consists of analysing and comparing, for areas with and without defects, the evolution of conditional probability matrices for increasing neighborhood sizes, the shape of variograms and the location of regional minima. We demonstrate that the anisotropy and surface of the series of 'comet tails' associated with probability matrices, variogram slopes and statistical indices, and the location of regional extrema are features able to discriminate areas with defects from areas without any. The classification is then realized by a neural network, whose structure, properties and learning mechanisms are detailed. Finally we discuss the results. (authors). 21 refs., 5 figs

  12. The sociocultural perspective applied to mobility and road safety: a case study through social networks

    Directory of Open Access Journals (Sweden)

    Pilar Parra Contreras

    2015-02-01

    Full Text Available This article explores the sociocultural paradigm as a theoretical framework for addressing mobility and road safety from the social sciences. This approach includes the analysis of issues such as the uses and attributes of the car, the cultural and social values associated with it, and its implications for processes of social structuring and exclusion. To this end, we present a case study on alcohol, drugs and driving in which we show the demographic, economic and occupational characteristics that mediate people's different relations with the car, but also their cultural characteristics, lifestyles and leisure. The research design combines data from a brief online survey with qualitative data, such as tastes and preferences, from the social network Facebook. The analysis shows that there are groups of drivers who differ in their patterns of non-dissociation between alcohol/drug consumption and driving, in terms of both classical structural variables and lifestyles that are reflected in their Facebook likes. The discussion and conclusions examine the need to analyze the social context in which road accidents occur and its usefulness in the design of awareness campaigns and road safety interventions.

  13. A facile electrochemical intercalation and microwave assisted exfoliation methodology applied to screen-printed electrochemical-based sensing platforms to impart improved electroanalytical outputs.

    Science.gov (United States)

    Pierini, Gastón D; Foster, Christopher W; Rowley-Neale, Samuel J; Fernández, Héctor; Banks, Craig E

    2018-06-12

    Screen-printed electrodes (SPEs) are ubiquitous within the field of electrochemistry, allowing researchers to translate sensors from the laboratory to the field. In this paper, we report an electrochemically driven intercalation process in which an electrochemical reaction uses an electrolyte as both the conductive medium and the intercalation source, followed by exfoliation and heating/drying via microwave irradiation, applied to the working electrode of screen-printed electrodes/sensors (termed EDI-SPEs) for the first time. This novel methodology results in an increase of up to 85% in the sensor area (electrochemically active surface area, as evaluated using an outer-sphere redox probe). Upon further investigation, it is found that the increase in the electroactive area of the EDI screen-printed electrochemical sensing platforms depends critically upon the analyte and its associated electrochemical mechanism (i.e. adsorption vs. diffusion). Proof-of-concept for the electrochemical sensing of capsaicin, a measure of the hotness of chillies and chilli sauce, within both model aqueous solutions and a real sample (Tabasco sauce) is demonstrated, in which the electroanalytical sensitivity (the slope of a plot of signal vs. concentration) is doubled when utilising EDI-SPEs over SPEs.
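
    Electroanalytical sensitivity, as used above, is simply the slope of the calibration line of signal versus concentration. The NumPy sketch below uses made-up calibration numbers whose only purpose is to reproduce the reported twofold slope ratio between EDI-SPEs and plain SPEs:

```python
import numpy as np

# Hypothetical calibration data: peak current (uA) vs capsaicin
# concentration (uM). The doubled slope for the EDI-SPE mirrors the
# twofold sensitivity gain reported above; the numbers are illustrative.
conc = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # uM
i_spe = 0.12 * conc + 0.05                      # uA, standard SPE
i_edi = 0.24 * conc + 0.05                      # uA, EDI-SPE

# Sensitivity = slope of the least-squares calibration line.
slope_spe, _ = np.polyfit(conc, i_spe, 1)
slope_edi, _ = np.polyfit(conc, i_edi, 1)

print(round(slope_edi / slope_spe, 2))  # prints 2.0
```

With real voltammetric data the same slope comparison would be made over the linear range of each electrode, with the intercept reflecting background current.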

  14. A methodology based on dynamic artificial neural network for short-term forecasting of the power output of a PV generator

    International Nuclear Information System (INIS)

    Almonacid, F.; Pérez-Higueras, P.J.; Fernández, Eduardo F.; Hontoria, L.

    2014-01-01

    Highlights: • The output of the majority of renewable energies depends on the variability of weather conditions. • Short-term forecasting is going to be essential for effectively integrating solar energy sources. • A new method based on an artificial neural network to predict the power output of a PV generator one hour ahead is proposed. • The method uses a dynamic artificial neural network to predict global solar irradiance and air temperature. • The methodology developed can estimate the power output of a PV generator with a satisfactory margin of error. - Abstract: One problem with some renewable energies is that the output of these systems is non-dispatchable, depending on the variability of weather conditions, which cannot be predicted and controlled. From this point of view, short-term forecasting is going to be essential for effectively integrating solar energy sources, being a very useful tool for the reliability and stability of the grid and for ensuring an adequate supply. In this paper a new methodology for forecasting the output of a PV generator one hour ahead, based on a dynamic artificial neural network, is presented. The results of this study show that the proposed methodology can be used to forecast the power output of PV systems one hour ahead with an acceptable degree of accuracy

  15. Auditing organizational communication: evaluating the methodological strengths and weaknesses of the critical incident technique, network analysis, and the communication satisfaction questionnaire

    NARCIS (Netherlands)

    Koning, K.H.

    2016-01-01

    This dissertation focuses on the methodology of communication audits. In the context of three Dutch high schools, we evaluated several audit instruments. The first study in this dissertation focuses on the question of whether the rationale of the critical incident technique (CIT) still applies when it

  16. Methodologies for assessing the use-phase power consumption and greenhouse gas emissions of telecommunications network services.

    Science.gov (United States)

    Chan, Chien A; Gygax, André F; Wong, Elaine; Leckie, Christopher A; Nirmalathas, Ampalavanapillai; Kilper, Daniel C

    2013-01-02

    Internet traffic has grown rapidly in recent years and is expected to continue to expand significantly over the next decade. Consequently, the resulting greenhouse gas (GHG) emissions of telecommunications service-supporting infrastructures have become an important issue. In this study, we develop a set of models for assessing the use-phase power consumption and carbon dioxide emissions of telecom network services, to help telecom providers gain a better understanding of the GHG emissions associated with the energy required for their networks and services. Because measuring the power consumption and traffic in a telecom network is a challenging task, these models utilize different granularities of available network information. As the granularity of the network measurement information decreases, the corresponding models have the potential to produce larger estimation errors. Therefore, we examine the accuracy of these models under various network scenarios using two approaches: (i) a sensitivity analysis through simulations and (ii) a case study of a deployed network. Both approaches show that the accuracy of the models depends on the network size, the total amount of network service traffic (i.e., for the service under assessment), and the number of network nodes used to process the service.
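
    A minimal top-down version of such a model allocates each network element's power to a service in proportion to the service's share of that element's traffic, then converts annual energy into emissions with a grid emission factor. Every figure below (node powers, traffic volumes, emission factor) is an illustrative assumption, not a value from the study:

```python
# Hypothetical top-down allocation model: a service's share of each
# node's power is proportional to its share of that node's traffic.

nodes = [  # (total node power in W, total traffic in Gb/s)
    (2000.0, 400.0),   # core router
    (800.0, 120.0),    # edge router
    (300.0, 40.0),     # access aggregation switch
]

service_traffic = 2.0   # Gb/s carried for the service under assessment
emission_factor = 0.5   # kg CO2 per kWh of grid electricity (assumed)
hours_per_year = 8760

# Power attributed to the service at each node = node power scaled by
# the service's fraction of that node's traffic, summed over the path.
service_power_w = sum(p * (service_traffic / t) for p, t in nodes)

energy_kwh = service_power_w * hours_per_year / 1000.0
emissions_kg = energy_kwh * emission_factor

print(round(service_power_w, 2))   # prints 38.33 (W attributed to the service)
print(round(emissions_kg, 1))      # prints 167.9 (kg CO2 per year)
```

Finer-grained variants would split each node's power into an idle part and a traffic-dependent part before allocating it, which is where the different measurement granularities discussed in the abstract come into play.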

  17. Improvement in post test accident analysis results prediction for the test no. 2 in PSB test facility by applying UMAE methodology

    International Nuclear Information System (INIS)

    Dubey, S.K.; Petruzzi, A.; Giannotti, W.; D'Auria, F.

    2006-01-01

    This paper deals mainly with the improvement in post-test accident analysis predictions for test no. 2, 'Total loss of feed water with failure of HPIS pumps and operator actions on primary and secondary circuit depressurization', carried out on the PSB integral test facility in May 2005. This is one of the most complicated tests conducted in the PSB test facility. The prime objective of the test is to support the verification of accident management strategies for NPPs and to verify the correct operation of safety systems that operate only during accidents. The objective of this analysis is to assess the capability to reproduce the phenomena occurring during the selected tests and to quantify the accuracy of the calculation, qualitatively and quantitatively, for the best-estimate code Relap5/mod3.3 by systematically applying all the procedures prescribed by the Uncertainty Methodology based on Accuracy Extrapolation (UMAE), developed at the University of Pisa. In order to achieve these objectives, qualification of the test facility nodalisation at both the 'steady state' and 'on transient' levels is demonstrated. For the 'steady state' qualification, compliance with the acceptance criteria established in the UMAE has been checked for geometrical details and thermal-hydraulic parameters. The following steps have been performed for the qualitative evaluation at the 'on transient' level: visual comparison of experimental and calculated time trends of relevant parameters; comparison of the experimental and calculated time sequences of significant events; identification/verification against the CSNI phenomena validation matrix; and use of Phenomenological Windows (PhW), with identification of Key Phenomena and Relevant Thermal-hydraulic Aspects (RTA). A successful application of the qualitative process is a prerequisite for the quantitative analysis. For the quantitative accuracy of the code prediction, the Fast Fourier Transform Based

  18. Methodology for modeling the disinfection efficiency of fresh-cut leafy vegetables wash water applied on peracetic acid combined with lactic acid.

    Science.gov (United States)

    Van Haute, S; López-Gálvez, F; Gómez-López, V M; Eriksson, Markus; Devlieghere, F; Allende, Ana; Sampers, I

    2015-09-02

    A methodology to (i) assess the feasibility of water disinfection for fresh-cut leafy greens wash water and (ii) compare the efficiency of water disinfectants was defined and applied to a combination of peracetic acid (PAA) and lactic acid (LA), with a comparison against free chlorine. Standardized process water, a watery suspension of iceberg lettuce, was used for the experiments. First, the combination of PAA+LA was evaluated for water recycling; in this case the disinfectant was added to standardized process water inoculated with Escherichia coli (E. coli) O157 (6 log CFU/mL). Regression models were constructed from the batch inactivation data and validated in industrial process water obtained from fresh-cut leafy green processing plants. The UV254(F) was the best indicator for PAA decay, and hence for E. coli O157 inactivation with PAA+LA. The disinfection efficiency of PAA+LA increased with decreasing pH. Furthermore, the efficacy of PAA+LA as a process water disinfectant within the washing tank was assessed using a dynamic washing process with a continuous influx of E. coli O157 and organic matter into the washing tank. The process water contamination in the dynamic process was adequately estimated by the developed model, which assumed that knowledge of the disinfectant residual is sufficient to estimate the microbial contamination, regardless of the physicochemical load. Based on the results obtained, PAA+LA seems better suited than chlorine for disinfecting process wash water with a high organic load, but a higher disinfectant residual is necessary because of the slower E. coli O157 inactivation kinetics compared to chlorine.
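
    The kinetics comparison in the last sentence can be made concrete with a first-order (Chick-style) inactivation model, log10(N/N0) = -k*t at a fixed disinfectant residual, fitted by least squares. The data points below are invented to mimic the qualitative finding (chlorine faster than PAA+LA), not measurements from the paper:

```python
import numpy as np

# Hypothetical E. coli O157 inactivation data: log10(N/N0) vs contact
# time at a fixed residual. Values are illustrative only.
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0])                # minutes
log_red_paa_la = np.array([0.0, -0.6, -1.1, -2.3, -4.4])
log_red_cl = np.array([0.0, -1.5, -3.1, -6.0, -6.5])   # last point hits the detection limit

def rate_constant(t, log_red):
    """Least-squares slope through the origin: k in log10 reductions/min."""
    return -(t @ log_red) / (t @ t)

k_paa_la = rate_constant(t, log_red_paa_la)
k_cl = rate_constant(t[:4], log_red_cl[:4])  # drop the censored last point

print(k_cl > k_paa_la)  # prints True: chlorine kinetics are faster
```

The ratio k_cl / k_paa_la then indicates how much higher a PAA+LA residual (or how much longer a contact time) would be needed to match a given log reduction achieved by chlorine.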

  19. Methodological Proposal for Identification and Evaluation of Environmental Aspects and Impacts of IPEN Nuclear Facilities: A Case Study Applied to the Nuclear Fuel Center

    International Nuclear Information System (INIS)

    Mattos, Luis A. Terribile de; Filho, Tufic Madi; Meldonian, Nelson Leon

    2013-06-01

    This work presents an application of Failure Mode and Effects Analysis (FMEA) to the identification of environmental aspects and impacts as part of the implementation and maintenance of an Environmental Management System (EMS) in accordance with the ISO 14001 standard. It can also contribute, as a complement, to evaluating and improving the safety of the facility in question. The study was applied to the Nuclear Fuel Center (CCN) of the Nuclear and Energy Research Institute (IPEN), situated on the campus of the University of Sao Paulo, Brazil. The CCN facility has the objective of promoting scientific research and producing nuclear fuel elements for the IEA-R1 Research Reactor. To identify the environmental aspects of the facility's activities, products and services, systematic data collection was carried out through personal interviews and consultation of documents, reports and operational data records. The processes and their interactions were identified, along with failure modes and their causes and effects on the environment. As a result of a careful evaluation of these causes, it was possible to identify and classify the major potential environmental impacts, in order to set up and put into practice an Environmental Control Plan for the installation under study. The results demonstrate the validity of applying FMEA to nuclear facility processes, identifying environmental aspects and impacts whose controls are critical to achieving compliance with the environmental requirements of IPEN's Integrated Management System. The methodology used in this work was shown to be a powerful management tool for resolving issues related to compliance with the applicable regulatory and legal requirements of the Brazilian Nuclear Energy Commission (CNEN) and the Brazilian Institute of Environment (IBAMA). (authors)

  20. Evaluation of water resources monitoring networks: study applied to surface waters in the Macaé River Basin

    Directory of Open Access Journals (Sweden)

    Carolina Cloris Lopes Benassuly

    2012-04-01

    Full Text Available Knowledge of hydrological phenomena is required in water resources monitoring in order to structure water management, with a focus on ensuring multiple uses of water while allowing control and conservation of the resource. The effectiveness of monitoring depends on adequate information system design and proper operating conditions. Data acquisition, treatment and analysis are vital for establishing management strategies; monitoring systems and networks should therefore be conceived according to their main objectives and optimized in terms of the location of data stations. The generated data should also capture the hydrological behavior of the studied basin, so that data interpolation can be applied to the whole basin. The present work aimed to bring together concepts and methods that guide the structuring of hydrologic monitoring networks for surface waters. The entropy method was used to evaluate the characteristics of the historical series as well as the redundancy of the stations. The importance of the Macaé River Basin is related to the public and industrial uses of water in a region responsible for more than 80% of Brazilian oil and gas production, which justifies the relevance of this research. The study concluded that, despite its relatively small extent, the Macaé River Basin should have a denser monitoring network in order to provide more reliable management data. It also showed the high relevance of the stations located in its upper course.
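
    The entropy method mentioned above scores a station as redundant when its record shares high transinformation, T(X;Y) = H(X) + H(Y) - H(X,Y), with another station's record. A NumPy sketch with invented gauge data (one near-duplicate station and one independent one) illustrates the idea; the distributions and bin count are assumptions, not Macaé data:

```python
import numpy as np

rng = np.random.default_rng(42)

def entropy(counts):
    """Shannon entropy (bits) of a histogram of counts."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def transinformation(x, y, bins=8):
    """T(X;Y) = H(X) + H(Y) - H(X,Y), estimated from a 2-D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    return (entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0))
            - entropy(joint.ravel()))

# Hypothetical monthly streamflow records at three gauges: B largely
# repeats A (redundant), while C carries independent information.
a = rng.gamma(2.0, 50.0, 600)
b = a + rng.normal(0.0, 5.0, 600)   # near-duplicate of A
c = rng.gamma(2.0, 50.0, 600)       # independent record

# High transinformation marks the second gauge as mostly redundant.
print(transinformation(a, b) > transinformation(a, c))  # prints True
```

In a network evaluation, pairwise (or multivariate) transinformation is computed for all stations, and stations whose records add little information beyond the rest become candidates for relocation, while poorly covered sub-basins argue for densification.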