such networking systems are modelled in the process calculus LySa. On top of this programming-language-based formalism an analysis is developed, which relies on techniques from data and control flow analysis. These are techniques that can be fully automated, which makes them an ideal basis for tools targeted at non......It has for a long time been a challenge to build secure networking systems. One way to counter this problem is to provide developers of software applications for networking systems with easy-to-use tools that can check security properties before the applications ever reach the market. These tools...... will both help raise the general level of awareness of the problems and prevent the most basic flaws from occurring. This thesis contributes to the development of such tools. Networking systems typically try to attain secure communication by applying standard cryptographic techniques. In this thesis...
Rücker, Gerta; Schwarzer, Guido
In systematic reviews based on network meta-analysis, the network structure should be visualized. Network plots often have been drawn by hand using generic graphical software. A typical way of drawing networks, also implemented in statistical software for network meta-analysis, is a circular representation, often with many crossing lines. We use methods from graph theory in order to generate network plots in an automated way. We give a number of requirements for graph drawing and present an algorithm that fits prespecified ideal distances between the nodes representing the treatments. The method was implemented in the function netgraph of the R package netmeta and applied to a number of networks from the literature. We show that graph representations with a small number of crossing lines are often preferable to circular representations. PMID:26060934
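As a concrete illustration of fitting prespecified ideal distances between treatment nodes (a sketch of the general stress-minimisation technique, not the netgraph/netmeta implementation, which is written in R), the same idea can be expressed with networkx; the example network and the choice of shortest-path distances as ideals are assumptions:

```python
# Sketch: lay out a hypothetical treatment network by fitting
# prespecified ideal distances (Kamada-Kawai stress minimisation).
# Illustrates the general technique, not the netgraph code itself.
import networkx as nx

# hypothetical network meta-analysis: nodes = treatments,
# edges = direct comparisons found in trials
G = nx.Graph()
G.add_edges_from([
    ("placebo", "drug A"), ("placebo", "drug B"),
    ("drug A", "drug B"), ("drug B", "drug C"),
])

# ideal distance between every pair of treatments, here taken as
# shortest-path distance in the comparison graph (an assumption)
dist = dict(nx.all_pairs_shortest_path_length(G))

# fit node coordinates so Euclidean distances match the ideal ones
pos = nx.kamada_kawai_layout(G, dist=dist)
for node, (x, y) in pos.items():
    print(f"{node}: ({x:.2f}, {y:.2f})")
```

Layouts of this kind tend to spread the nodes so that edges cross rarely, which is the property the abstract contrasts with circular representations.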
Shuhrat Ehgamberdiev; Alexander Serebryanskiy; Antonio Jimenez; Li-Han Wang; Ming-Tsung Sun; Javier Fernandez Fernandez; Dean-Yi Chou
A global network of small automated telescopes, the Taiwan Automated Telescope (TAT) network, dedicated to photometric measurements of stellar pulsations, is under construction. Two telescopes have been installed in Teide Observatory, Tenerife, Spain and Maidanak Observatory, Uzbekistan. The third telescope will be installed at Mauna Loa Observatory, Hawaii, USA. Each system uses a 9-cm Maksutov-type telescope. The effective focal length is 225 cm, corresponding to an f-ratio of 25. The field...
We present an algorithm based on artificial neural networks able to determine optimized experimental conditions for Rutherford backscattering measurements of Ge-implanted Si. The algorithm can be implemented for any other element implanted into a lighter substrate, and it is foreseeable that the method developed in this work can be applied to many other systems. The algorithm presented is a push-button black box that does not require any human intervention. It is thus suited for automated control of an experimental setup, given an interface to the relevant hardware. Once the experimental conditions are optimized, the algorithm analyzes the final data obtained and determines the desired parameters. The method is thus also suited for automated analysis of the data. The algorithm presented can be easily extended to other ion beam analysis techniques. Finally, it is suggested how the artificial neural networks required for automated control and analysis of experiments could themselves be automatically generated, which would amount to automated generation of the required computer code. RBS could thus be done without experimentalists, data analysts, or programmers, with only technicians to keep the machines running.
Lin, Yih-Hwang; Wu, Hsien-Chang; Wu, Chung-Yung
The purpose of this study is to develop an automated system for condition classification of a reciprocating compressor. Various time-frequency analysis techniques will be examined for decomposition of the vibration signals. Because a time-frequency distribution is a 3D data map, data reduction is indispensable for subsequent analysis. The extraction of the system characteristics using three indices, namely the time index, frequency index, and amplitude index, will be presented and examined for their applicability. The probability neural network is applied for automated condition classification using a combination of the three indices. The study reveals that a proper choice of the index combination and the time-frequency band can provide excellent classification accuracy for the machinery conditions examined in this work.
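A minimal sketch of this index-based pipeline, assuming energy-weighted means as the time and frequency indices and the spectrogram peak as the amplitude index (the paper's exact index definitions are not reproduced here), with a Parzen-window probabilistic neural network for classification:

```python
# Sketch: reduce a time-frequency map to (time, frequency, amplitude)
# indices and classify with a probabilistic neural network.
# The feature definitions are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram

def tf_indices(signal, fs):
    f, t, S = spectrogram(signal, fs=fs)
    S = S / S.sum()                       # normalise total energy to 1
    time_idx = (S.sum(axis=0) * t).sum()  # energy-weighted mean time
    freq_idx = (S.sum(axis=1) * f).sum()  # energy-weighted mean frequency
    amp_idx = S.max()                     # peak normalised amplitude
    return np.array([time_idx, freq_idx, amp_idx])

def pnn_classify(x, train_X, train_y, sigma=0.1):
    # Parzen-window PNN: average a Gaussian kernel per class,
    # pick the class with the highest density estimate
    scores = {}
    for c in np.unique(train_y):
        d = train_X[train_y == c] - x
        scores[c] = np.exp(-(d ** 2).sum(axis=1) / (2 * sigma ** 2)).mean()
    return max(scores, key=scores.get)
```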
Shahabeddini Parizi, Mohammad; Radziwon, Agnieszka
The implementation of appropriate automation concepts which increase productivity in Small and Medium Sized Enterprises (SMEs) requires a lot of effort, due to their limited resources. Therefore, it is strongly recommended for small firms to open up to external sources of knowledge, which...... could be obtained through network interaction. Based on two extreme cases of SMEs representing low-tech industry and an in-depth analysis of their manufacturing facilities, this paper presents how collaboration between firms embedded in a regional ecosystem could result in implementation of new...... other members of the same regional ecosystem. The findings highlight two main automation-related areas where manufacturing SMEs could leverage external sources of knowledge – these are assistance in defining the automation problem as well as appropriate solution and provider selection. Consequently, this...
Full Text Available In this study, a Profibus-based industrial automation system has been designed and used for networked speed control of a three-phase induction machine. The delay occurring on the network was examined during speed control via the network. It was determined that the network delay varies depending on the data traffic on the network. The delay observed was found to be within the acceptable limits of maximum network-induced delay for motion control systems.
The security of a network protocol crucially relies on the scenario in which the protocol is deployed. This paper describes syntactic constructs for modelling network scenarios and presents an automated analysis tool, which can guarantee that security properties hold in all of the (infinitely many...
Madsen, Kaj; Schjær-Jacobsen, Hans; Voldby, J
A new gradient algorithm for the solution of nonlinear minimax problems has been developed. The algorithm is well suited for automated minimax design of networks and it is very simple to use. It compares favorably with recent minimax and least-pth algorithms. General convergence problems related...... to minimax design of networks are discussed. Finally, minimax design of equalization networks for reflection-type microwave amplifiers is carried out by means of the proposed algorithm....
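The abstract does not give the algorithm itself; as a hedged illustration of what a gradient-based minimax fit looks like, the sketch below minimises a smooth log-sum-exp surrogate of the maximum error with a numerical gradient (the `errors_fn` interface, step size and smoothing factor are all assumptions, not the paper's method):

```python
# Sketch: minimax design by gradient descent on a smooth surrogate
# of the maximum error (log-sum-exp). Illustrative only -- not the
# specialised minimax algorithm developed in the paper.
import numpy as np

def smooth_max(e, beta=50.0):
    # smooth, differentiable approximation of max(e), shifted for stability
    m = e.max()
    return m + np.log(np.exp(beta * (e - m)).sum()) / beta

def minimax_fit(errors_fn, x0, lr=1e-2, steps=2000, eps=1e-6):
    # errors_fn(x): vector of absolute response errors at sampled
    # frequencies for design parameters x (hypothetical interface)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        f0 = smooth_max(errors_fn(x))
        g = np.zeros_like(x)
        for i in range(len(x)):           # forward-difference gradient
            xp = x.copy()
            xp[i] += eps
            g[i] = (smooth_max(errors_fn(xp)) - f0) / eps
        x -= lr * g
    return x
```

For an equalization network, `errors_fn` would return the deviation of the network's frequency response from the design target at each sample frequency.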
can be operated either interactively or fully automatically. In the interactive mode, it can be controlled through the Internet. In the fully automatic mode, the telescope operates with preset parameters without any human care, including taking dark frames and flat frames. The network can also be used for studies that require continuous observations for selected objects.
Beksaç, M S; Eskiizmirliler, S; Cakar, A N; Erkmen, A M; Dağdeviren, A; Lundsteen, C
In this study, we introduce an expert system for intelligent chromosome recognition and classification based on artificial neural networks (ANN) and features obtained by automated image analysis techniques. A microscope equipped with a CCTV camera, integrated with an IBM-PC compatible computer environment including a frame grabber, is used for image data acquisition. Features of the chromosomes are obtained directly from the digital chromosome images. Two new algorithms for automated object detection and object skeletonizing constitute the basis of the feature extraction phase, which constructs the components of the input vector to the ANN part of the system. This first version of our intelligent diagnostic system uses a trained unsupervised neural network structure and an original rule-based classification algorithm to find a karyotyped form of randomly distributed chromosomes over a complete metaphase. We investigate the effects of network parameters on the classification performance and discuss the adaptability and flexibility of the neural system in order to reach a structure giving an output including information about both structural and numerical abnormalities. Moreover, the classification performances of the neural and rule-based systems are compared for each class of chromosome. PMID:8705397
Húsek, Dušan; Frolov, A. A.; Polyakov, P.Y.; Řezanková, H.
Amman: Applied Science Private University, 2006 - (Issa, G.; El-Qawasmeh, E.; Raho, G.), s. 321-327 ISBN 9957-8592-0-X. [CSIT 2006. International Multiconference on Computer Science and Information Technology /4./. Amman (JO), 05.04.2006-07.04.2006] R&D Projects: GA AV ČR 1ET100300419 Institutional research plan: CEZ:AV0Z10300504 Keywords : Boolean factor analysis * neural networks * associative memory * clustering * web searching * semantic web * information retrieval * document indexing * document classification * document processing * data mining * machine learning Subject RIV: BB - Applied Statistics, Operational Research
Motivic analysis provides very detailed understanding of musical compositions, but is also particularly difficult to formalize and systematize. A computational automation of the discovery of motivic patterns cannot be reduced to a mere extraction of all possible sequences of descriptions....... The systematic approach inexorably leads to a proliferation of redundant structures that needs to be addressed properly. Global filtering techniques cause a drastic elimination of interesting structures that damages the quality of the analysis. On the other hand, a selection of closed patterns allows...
Elleithy, Khaled; Iskander, Magued; Kapila, Vikram; Karim, Mohammad A; Mahmood, Ausif
"Technological Developments in Networking, Education and Automation" includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the following areas: Computer Networks: Access Technologies, Medium Access Control, Network architectures and Equipment, Optical Networks and Switching, Telecommunication Technology, and Ultra Wideband Communications. Engineering Education and Online Learning: including development of courses and systems for engineering, technical and liberal studies programs; online laboratories; intelligent
Holm, Jens Åge; Pedersen, Jens Myrup
to consistency and long-term characteristics. The developed method gives significant improvements on these parameters. The case study was conducted as a comparison between an existing network where the traffic was known and a proposed network designed by the developed method. It turned out that the proposed......This paper describes a case study conducted to evaluate the viability of a systematic, automated network planning method. The motivation for developing the network planning method was that many data networks are planned in an ad-hoc manner with no assurance of quality of the solution with respect...... network performed better than the existing network with regard to the performance measurements used, which reflected how well the traffic was routed in the networks and the cost of establishing the networks. Challenges that need to be solved before the developed method can be used to design network...
de Feijter, R.
This thesis presents a framework for the control of automated guided vehicles (AGVs). The framework implements the transport system as a community of cooperating agents. Besides the architecture and elements of the framework a wide range of infrastructure scene templates is described. These scene templates, ranging from terminal infrastructure to freeways, can be used as building blocks to create a control system for an automated transport network.
Black, Jeffrey D.; Dietzel, Robert; Hartnett, David
A software application has been developed to aid law enforcement and government intelligence gathering organizations in the translation and analysis of foreign language documents with potential intelligence content. The Automated Document Analysis System (ADAS) provides the capability to search (data or text mine) documents in English and the most commonly encountered foreign languages, including Arabic. Hardcopy documents are scanned by a high-speed scanner and processed with optical character recognition (OCR). Documents obtained in an electronic format bypass the OCR and are copied directly to a working directory. For translation and analysis, the script and the language of the documents are first determined. If the document is not in English, the document is machine translated to English. The documents are searched for keywords and key features in either the native language or translated English. The user can quickly review the document to determine if it has any intelligence content and whether detailed, verbatim human translation is required. The documents and document content are cataloged for potential future analysis. The system allows non-linguists to evaluate foreign language documents and allows for the quick analysis of a large quantity of documents. All document processing can be performed manually or automatically on a single document or a batch of documents.
E. M. Farhadzade
Full Text Available Breakers belong to the electric power system equipment whose reliability influences, to a great extent, the reliability of power plants. In particular, breakers determine the structural reliability of the switchgear circuits of power stations and network substations. Failure of a breaker to switch off a short circuit, followed by failure of the backup unit or the long-distance protection system, quite often leads to a system emergency. The problem of improving breaker reliability and reducing maintenance expenses is becoming ever more urgent as the maintenance and repair costs of oil and air-break circuit breakers systematically increase. The main direction for solving this problem is improving diagnostic control methods and organizing on-condition maintenance. But this demands a great amount of statistical information about the nameplate data of breakers and their operating conditions, their failures, testing and repairs, as well as advanced software and a specific automated information system (AIS). A new AIS, named AISV, was developed at the "Reliability of Power Equipment" department of AzRDSI of Energy. The main features of AISV are:
- to provide the security and accuracy of the database;
- to carry out systematic control of breakers' conformity with operating conditions;
- to estimate the value of individual reliability and the characteristics of its change for a given combination of characteristics;
- to provide the personnel responsible for technical maintenance of breakers not only with information but also with methodological support, including recommendations for solving the given problem and advanced methods for its realization.
Johnston, Mark D.; Jet Propulsion Laboratory, California Institute of Technology; Tran, Daniel; Jet Propulsion Laboratory, California Institute of Technology; Arroyo, Belinda; Jet Propulsion Laboratory, California Institute of Technology; Sorensen, Sugi; Jet Propulsion Laboratory, California Institute of Technology; Tay, Peter; Jet Propulsion Laboratory, California Institute of Technology; Carruth, Butch; Innovative Productivity Solutions, Inc.; Coffman, Adam; Innovative Productivity Solutions, Inc.; Wallace, Mike; Innovative Productivity Solutions, Inc.
This article describes the DSN scheduling engine (DSE) component of a new scheduling system being deployed for NASA's Deep Space Network. The DSE provides core automation functionality for scheduling the network, including the interpretation of scheduling requirements expressed by users, their elaboration into tracking passes, and the resolution of conflicts and constraint violations. The DSE incorporates both systematic search and repair-based algorithms, used for different phases and purpos...
In today's growing and ever-changing world of computer networks, management systems need to have the abilities of intellectual reasoning, dynamic real-time decision making, and experience-based self-adaptation and improvement. Furthermore, the ever increasing size and complexity of computer networks require automation for their management systems. Automation minimizes human involvement, which produces effective and time-saving solutions for proper and dynamic supervision of these large and heterogeneous netwo...
Lurgi, Miguel; Robertson, David
Background In ecological networks, natural communities are studied from a complex systems perspective by representing interactions among species within them in the form of a graph, which is in turn analysed using mathematical tools. Topological features encountered in complex networks have been proved to provide the systems they represent with interesting attributes such as robustness and stability, which in ecological systems translates into the ability of communities to resist perturbations...
Ziegenfuss, Paul C.
The Joint Staff established the Tactical Network Analysis and Planning System Plus (TNAPS+) as the interim joint communications planning and management system. The Marines' Command and Control Systems Course and the Army's Joint Task Force System Planning Course both utilize TNAPS+ to conduct tactical C4I network planning in their course requirements. This thesis is a Naval Postgraduate School C4I curriculum practical application of TNAPS+ in an expeditionary Joint Task Force environment, focu...
Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper reviews the literature on road extraction and describes a study of an optimization-based method for automated road network extraction.
An automated delayed neutron counting and instrumental neutron activation analysis system has been developed at Los Alamos National Laboratory's Omega West Reactor (OWR) to analyze samples for uranium and 31 additional elements with a maximum throughput of 400 samples per day. The system and its mode of operation for a large reconnaissance survey are described.
Stegmann, Mikkel Bille; Davies, Rhodri H.
This report describes and evaluates the steps needed to perform modern model-based interpretation of the corpus callosum in MRI. The process is discussed from the initial landmark-free contours to full-fledged statistical models based on the Active Appearance Models framework. Topics treated...... include landmark placement, background modelling and multi-resolution analysis. Preliminary quantitative and qualitative validation in a cross-sectional study show that fully automated analysis and segmentation of the corpus callosum are feasible....
sophisticated energy consumers, it has been possible to improve the DR 'state of the art' with a manageable commitment of technical resources on both the utility and consumer side. Although numerous C & I DR applications of a DRAS infrastructure are still in either prototype or early production phases, these early attempts at automating DR have been notably successful for both utilities and C & I customers. Several factors have strongly contributed to this success and will be discussed below. These successes have motivated utilities and regulators to look closely at how DR programs can be expanded to encompass the remaining (roughly) half of the state's energy load - the light commercial and, in numerical terms, the more important residential customer market. This survey examines technical issues facing the implementation of automated DR in the residential environment. In particular, we will look at the potential role of home automation networks in implementing wide-scale DR systems that communicate directly to individual residences.
Miller, Jerry; Borne, Kirk; Thomas, Brian; Huang, Zhenping; Chi, Yuechen
We have tested and deployed Artificial Neural Network (ANN) data mining techniques to analyze remotely sensed multi-channel imaging data from MODIS, GOES, and AVHRR. The goal is to train the ANN to learn the signatures of wildfires in remotely sensed data in order to automate the detection process. We train the ANN using the set of human-detected wildfires in the U.S., which are provided by the Hazard Mapping System (HMS) wildfire detection group at NOAA/NESDIS. The ANN is trained to mimic the behavior of fire detection algorithms and the subjective decision-making by NOAA HMS Fire Analysts. We use a local extremum search in order to isolate fire pixels, and then we extract a 7x7 pixel array around that location in 3 spectral channels. The corresponding 147 pixel values are used to populate a 147-dimensional input vector that is fed into the ANN. The ANN accuracy is tested and overfitting is avoided by using a subset of the training data that is set aside as a test data set. We have achieved an automated fire detection accuracy of 80-92%, depending on a variety of ANN parameters and for different instrument channels among the 3 satellites. We believe that this system can be deployed worldwide or for any region to detect wildfires automatically in satellite imagery of those regions. These detections can ultimately be used to provide thermal inputs to climate models.
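A minimal sketch of the described feature pipeline: a 7x7 window in 3 channels, flattened into a 147-element vector and fed to a neural network. The scikit-learn classifier and its settings are stand-ins, not the trained ANN from the study:

```python
# Sketch of the fire-detection feature pipeline: a local-maximum
# pixel anchors a 7x7 window in 3 spectral channels, giving a
# 3 * 7 * 7 = 147-element input vector. Classifier settings are
# illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

def patch_vector(image, row, col):
    # image: array of shape (channels=3, height, width);
    # assumes (row, col) is at least 3 pixels from every border
    win = image[:, row - 3:row + 4, col - 3:col + 4]
    return win.reshape(-1)            # 147 features

# train_X: stacked 147-d vectors; train_y: 1 = fire, 0 = no fire
def train_detector(train_X, train_y):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    clf.fit(train_X, train_y)
    return clf
```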
Birgitta E. Ebert
Full Text Available Quantitative knowledge of intracellular fluxes in metabolic networks is invaluable for inferring metabolic system behavior and the design principles of biological systems. However, intracellular reaction rates often cannot be calculated directly but have to be estimated; for instance, via 13C-based metabolic flux analysis, a model-based interpretation of stable carbon isotope patterns in intermediates of metabolism. Existing software such as FiatFlux, OpenFLUX or 13CFLUX supports experts in this complex analysis, but requires several steps that have to be carried out manually, hence restricting the use of this software for data interpretation to a rather small number of experiments. In this paper, we present Flux-P as an approach to automate and standardize 13C-based metabolic flux analysis, using the Bio-jETI workflow framework. Exemplarily based on the FiatFlux software, it demonstrates how services can be created that carry out the different analysis steps autonomously and how these can subsequently be assembled into software workflows that perform automated, high-throughput intracellular flux analysis of high quality and reproducibility. Besides significant acceleration and standardization of the data analysis, the agile workflow-based realization supports flexible changes of the analysis workflows on the user level, making it easy to perform custom analyses.
Hayes, James C.; Doctor, Pam G.; Heimbigner, Tom R.; Hubbard, Charles W.; Kangas, Lars J.; Keller, Paul E.; McIntyre, Justin I.; Schrom, Brian T.; Suarez, Reynold
The Automated Radioxenon Analyzer/Sampler (ARSA) is a radioxenon gas collection and analysis system operating autonomously under computer control. The ARSA systems are deployed as part of an international network of sensors, with individual stations feeding radioxenon concentration data to a central data center. Because the ARSA instrument is complex and is often deployed in remote areas, it requires constant self-monitoring to verify that it is operating according to specifications. System performance monitoring is accomplished by over 200 internal sensors, with some values reported to the data center. Several sensors are designated as safety sensors that can automatically shut down the ARSA when unsafe conditions arise. In this case, the data center is advised of the shutdown and the cause, so that repairs may be initiated. The other sensors, called state of health (SOH) sensors, also provide valuable information on the functioning of the ARSA and are particularly useful for detecting impending malfunctions before they occur to avoid unscheduled shutdowns. Any of the sensor readings can be displayed by an ARSA Data Viewer, but interpretation of the data is difficult without specialized technical knowledge not routinely available at the data center. Therefore it would be advantageous to have sensor data automatically evaluated for the precursors of malfunctions and the results transmitted to the data center. Artificial Neural Networks (ANN) are a class of data analysis methods that have shown wide application to monitoring systems with large numbers of information inputs, such as the ARSA. In this work supervised and unsupervised ANN methods were applied to ARSA SOH data recorded during normal operation of the instrument, and the ability of ANN methods to predict system state is presented.
Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P
The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance. PMID:25136688
Performing core physics calculations for the sake of reload safety analysis is a very demanding and time consuming process. This process generally begins with the preparation of libraries for the core physics code using a lattice code. The next step involves creating a very large set of calculations with the core physics code. Lastly, the results of the calculations must be interpreted, correctly applying uncertainties and checking whether applicable limits are satisfied. Such a procedure requires three specialized experts. One must understand the lattice code in order to correctly calculate and interpret its results. The next expert must have a good understanding of the physics code in order to create libraries from the lattice code results and to correctly define all the calculations involved. The third expert must have a deep knowledge of the power plant and the reload safety analysis procedure in order to verify, that all the necessary calculations were performed. Such a procedure involves many steps and is very time consuming. At ÚJV Řež, a.s., we have developed a set of tools which can be used to automate and simplify the whole process of performing reload safety analysis. Our application QUADRIGA automates lattice code calculations for library preparation. It removes user interaction with the lattice code and reduces his task to defining fuel pin types, enrichments, assembly maps and operational parameters all through a very nice and user-friendly GUI. The second part in reload safety analysis calculations is done by CycleKit, a code which is linked with our core physics code ANDREA. Through CycleKit large sets of calculations with complicated interdependencies can be performed using simple and convenient notation. CycleKit automates the interaction with ANDREA, organizes all the calculations, collects the results, performs limit verification and displays the output in clickable html format. Using this set of tools for reload safety analysis simplifies
Cost analysis of an automated network system and a manual system of cataloging and book processing indicates a 20 percent savings using automation. Per unit costs based on the average monthly automation rate are used for comparison. Higher manual system costs are attributed to staff costs. (RAA)
Kondo, Hiroshi; Zhao, Bin; Mino, Masako
Automated quantitative analysis for pneumoconiosis is presented. In this paper Japanese standard radiographs of pneumoconiosis are categorized by measuring the area density and the number density of small rounded opacities. And furthermore the classification of the size and shape of the opacities is made from the measuring of the equivalent radiuses of each opacity. The proposed method includes a bi-level unsharp masking filter with a 1D uniform impulse response in order to eliminate the undesired parts such as the images of blood vessels and ribs in the chest x-ray photo. The fuzzy contrast enhancement is also introduced in this method for easy and exact detection of small rounded opacities. Many simulation examples show that the proposed method is more reliable than the former method.
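A rough sketch of the filtering step, assuming "bi-level" means applying the 1D uniform-response unsharp mask along both image axes (an interpretation, not necessarily the authors' exact definition); window size and threshold are illustrative:

```python
# Sketch: 1D uniform-response unsharp masking to suppress elongated
# structures (ribs, vessels) so small rounded opacities stand out,
# then counting connected candidate regions. Parameters are
# illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter1d, label

def unsharp_1d(img, size=31, axis=0):
    # subtract a 1D moving-average background: high-pass residue
    background = uniform_filter1d(img.astype(float), size=size, axis=axis)
    return img - background

def count_opacities(img, threshold=20.0):
    # apply the mask along both axes (assumed "bi-level" reading)
    detail = unsharp_1d(unsharp_1d(img, axis=0), axis=1)
    mask = detail > threshold
    labelled, n = label(mask)         # connected candidate opacities
    return n
```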
Williams, Galen S.; Raper, Kellie Curry
This article describes an automated data analysis tool that allows Oklahoma Cooperative Extension Service educators to disseminate results in a timely manner. Primary data collected at Oklahoma Quality Beef Network (OQBN) certified calf auctions across the state results in a large amount of data per sale site. Sale summaries for an individual sale…
Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul
This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.
Riaz, Muhammad Tahir; Pedersen, Jens Myrup; Madsen, Ole Brun
In this paper a method for automated planning of Fiber to the Home (FTTH) access networks is proposed. We introduce a systematic approach for planning access network infrastructure. GIS data and a set of algorithms were employed to make the planning process more automatic. The method explains...
Jackson, K.A.; Hochberg, J.G.; Wilhelmy, S.K.; McClary, J.F.; Christoph, G.G.
This paper discusses management issues associated with the design and implementation of an automated audit analysis system that we use to detect security events. It gives the viewpoint of a team directly responsible for developing and managing such a system. We use Los Alamos National Laboratory's Network Anomaly Detection and Intrusion Reporter (NADIR) as a case in point. We examine issues encountered at Los Alamos, detail our solutions to them, and where appropriate suggest general solutions. After providing an introduction to NADIR, we explore four general management issues: cost-benefit questions, privacy considerations, legal issues, and system integrity. Our experiences are of general interest both to security professionals and to anyone who may wish to implement a similar system. While NADIR investigates security events, the methods used and the management issues are potentially applicable to a broad range of complex systems. These include those used to audit credit card transactions, medical care payments, and procurement systems.
Prieto, Carlos Allende
The Gaia mission will have a profound impact on our understanding of the structure and dynamics of the Milky Way. Gaia is providing an exhaustive census of stellar parallaxes, proper motions, positions, colors and radial velocities, but also leaves some glaring holes in an otherwise complete data set. The radial velocities measured with the on-board high-resolution spectrograph will only reach some 10% of the full sample of stars with astrometry and photometry from the mission, and detailed chemical information will be obtained for less than 1%. Teams all over the world are organizing large-scale projects to provide complementary radial velocities and chemistry, since this can now be done very efficiently from the ground thanks to large and mid-size telescopes with a wide field-of-view and multi-object spectrographs. As a result, automated data processing is taking on an ever increasing relevance, and the concept now applies to many more areas, from targeting to analysis. In this paper, I provide a quick overvie...
Jamali, A.; Rahman, A. A.; Boguslawski, P.; Gold, C. M.
Indoor navigation is important for various applications such as disaster management and safety analysis. In the last decade, the indoor environment has been a focus of extensive research, which includes developing techniques for acquiring indoor data (e.g. terrestrial laser scanning), 3D indoor modelling and 3D indoor navigation models. In this paper, an automated 3D topological indoor network generated from inaccurate 3D building models is proposed. In a normal scenario, 3D indoor navigation network derivation needs accurate 3D models with no errors (e.g. gaps, intersections), and two cells (e.g. rooms, corridors) should touch each other for their connection to be built. The presented 3D modelling of the indoor navigation network is based on surveying control points and is less dependent on the 3D geometrical building model. To reduce the time and cost of the indoor building data acquisition process, a Trimble LaserAce 1000 was used as the surveying instrument. The modelling results were validated against an accurate geometry of the indoor building environment acquired using a Trimble M3 total station.
In a complex control process, instrument calibration is periodically performed to maintain the instruments within the calibration range, which assures proper control and minimizes down time. Instruments are usually calibrated under out-of-service conditions using manual calibration methods, which may cause incorrect calibration or equipment damage. Continuous in-service calibration monitoring of sensors and instruments will reduce unnecessary instrument calibrations, give operators more confidence in instrument measurements, increase plant efficiency or product quality, and minimize the possibility of equipment damage during unnecessary manual calibrations. In this dissertation, an artificial neural network (ANN)-based instrument calibration verification system is designed to achieve the on-line monitoring and verification goal for scheduling maintenance. Since an ANN is a data-driven model, it can learn the relationships among signals without prior knowledge of the physical model or process, which is usually difficult to establish for the complex non-linear systems. Furthermore, the ANNs provide a noise-reduced estimate of the signal measurement. More importantly, since a neural network learns the relationships among signals, it can give an unfaulted estimate of a faulty signal based on information provided by other unfaulted signals; that is, provide a correct estimate of a faulty signal. This ANN-based instrument verification system is capable of detecting small degradations or drifts occurring in instrumentation, and preclude false control actions or system damage caused by instrument degradation. In this dissertation, an automated scheme of neural network construction is developed. Previously, the neural network structure design required extensive knowledge of neural networks. An automated design methodology was developed so that a network structure can be created without expert interaction. This validation system was designed to monitor process sensors plant
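A minimal sketch of the verification idea under stated assumptions: each sensor is estimated from the remaining redundant sensors, and drift is flagged when the residual exceeds a calibration tolerance. The scikit-learn model and threshold logic are illustrative, not the dissertation's system:

```python
# Sketch: ANN-based calibration verification. Learn a sensor's value
# from the other correlated sensors, then flag calibration drift when
# the measurement departs from the network's estimate. Model choice
# and tolerance are assumptions for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_estimator(history, target_idx):
    # history: (n_samples, n_sensors) of normal-operation data
    X = np.delete(history, target_idx, axis=1)   # the other sensors
    y = history[:, target_idx]                   # sensor to verify
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=1000)
    model.fit(X, y)
    return model

def drift_detected(model, sample, target_idx, tol):
    x = np.delete(sample, target_idx).reshape(1, -1)
    residual = sample[target_idx] - model.predict(x)[0]
    return abs(residual) > tol       # exceeds calibration tolerance
```

Because the estimate is built from unfaulted channels, it remains valid when the monitored sensor drifts, which is what allows on-line detection of small degradations.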
Full Text Available The radio-based wireless data communication has made the realization of new technical solutions possible in many fields of automation technology (AT). For about ten years, a constant disproportionate growth of wireless technologies can be observed in the automation technology. However, it shows that especially for the AT, conventional technologies of office automation are unsuitable and/or not manageable. The employment of mobile services in the industrial automation technology has the potential of significant cost and time savings. This leads to an increased productivity in various fields of the AT, for example in the factory and process automation or in production logistics. In this paper technologies and solutions for an automation-suited supply of mobile wireless services will be introduced under the criteria of real-time suitability, IT-security and service orientation. Emphasis will be put on the investigation and development of wireless convergence layers for different radio technologies, on the central provision of support services for an easy-to-use, central, backup-enabled management of combined wired/wireless networks, and on the study of integrability in a Profinet real-time Ethernet network.
A comprehensive guide to techniques that allow engineers to simulate, analyse and optimise power distribution systems which combined with automation, underpin the emerging concept of the "smart grid". This book is supported by theoretical concepts with real-world applications and MATLAB exercises.
DONG Ming-xiao; MEI Xue-song; JIANG Ge-dong; ZHANG Gui-qing
This paper summarizes the modeling methods, open-loop control and closed-loop control techniques of various forms of cranes, worldwide, and discusses their feasibilities and limitations in engineering. Then the dynamic behaviors of cranes are analyzed. Finally, we propose applied modeling methods and feasible control techniques and demonstrate the feasibilities of crane automation.
Chady, T.; Caryk, M.; Piekarczyk, B.
The automated defect classification algorithm based on an artificial neural network with a multilayer backpropagation structure was utilized. Selected features of flaws were used as input data. In order to train the neural network it is necessary to prepare learning data, which is a representative database of defects. Database preparation requires the following steps: image acquisition and pre-processing, image enhancement, defect detection and feature extraction. Real digital radiographs of welded parts of a ship were used for this purpose.
The Contaminant Analysis Automation (CAA) project defines the automated laboratory as a series of standard laboratory modules (SLM) serviced by a robotic standard support module (SSM). These SLMs are designed to allow plug-and-play integration into automated systems that perform standard analysis methods (SAM). While the SLMs are autonomous in the execution of their particular chemical processing task, the SAM concept relies on a high-level task sequence controller (TSC) to coordinate the robotic delivery of materials requisite for SLM operations, initiate an SLM operation with the chemical-method-dependent operating parameters, and coordinate the robotic removal of materials from the SLM when its operation completes. The Supervisor and Subsystems (GENISAS) software governs events from the SLMs and robot. The Intelligent System Operating Environment (ISOE) enables the inter-process communications used by GENISAS. CAA selected the Hewlett-Packard Optimized Robot for Chemical Analysis (ORCA) and its associated Windows-based Methods Development Software (MDS) as the robot SSM. The MDS software is used to teach the robot each SLM position and the required material port motions. To allow the TSC to command these SLM motions, a hardware and software implementation was required that allowed message passing between different operating systems. This implementation involved the use of a Virtual Memory Extended (VME) rack with a Force CPU-30 computer running VxWorks, a real-time multitasking operating system, and a RadiSys PC-compatible VME computer running MDS. A GENISAS server on the Force computer accepts a transport command from the TSC, a GENISAS supervisor, over Ethernet and notifies software on the RadiSys PC of the pending command through VMEbus shared memory. The command is then delivered to the MDS robot control software using a Windows Dynamic Data Exchange conversation.
H. Afsarmanesh; M. Sargolzaei; M. Shadi
This paper proposes a novel framework for automated software service composition that can significantly support and enhance collaboration among enterprises in the service provision industry, such as in tourism, insurance and e-commerce collaborative networks (CNs). Our proposed framework is founded on se
Peizhong Yi; Abiodun Iwayemi; Chi Zhou
Full Text Available Smart grid, as an intelligent power generation, distribution, and control system, needs various communication systems to meet its requirements. The ability to communicate seamlessly across multiple networks and domains is an open issue which is yet to be adequately addressed in smart grid architectures. In this paper, we present a framework for end-to-end interoperability in home and building area networks within smart grids. 6LoWPAN and the compact application protocol are utilized to facilitate the use of IPv6 and Zigbee application profiles such as Zigbee smart energy for network and application layer interoperability, respectively. A differential service medium access control scheme enables end-to-end connectivity between 802.15.4 and IP networks while providing quality of service guarantees for Zigbee traffic over Wi-Fi. We also address several issues including interference mitigation, load scheduling, and security and propose solutions to them.
Dixon, Lucas; Ristenpart, Thomas; Shrimpton, Thomas
Internet censors seek ways to identify and block internet access to information they deem objectionable. Increasingly, censors deploy advanced networking tools such as deep-packet inspection (DPI) to identify such connections. In response, activists and academic researchers have developed and deployed network traffic obfuscation mechanisms. These apply specialized cryptographic tools to attempt to hide from DPI the true nature and content of connections. In this survey, we give an overview of...
Stralen, Marijn van
In this thesis we aim at automating the analysis of 3D echocardiography, mainly targeting the functional analysis of the left ventricle. Manual analysis of these data is cumbersome, time-consuming and is associated with inter-observer and inter-institutional variability. Methods for reconstruction o
Enrique de la Hoz
Full Text Available Due to the low cost of CMOS IP-based cameras, wireless surveillance sensor networks have emerged as a new application of sensor networks able to monitor public or private areas or even country borders. Since these networks are bandwidth intensive and the radioelectric spectrum is limited, especially in unlicensed bands, it is mandatory to assign frequency channels in a smart manner. In this work, we propose the application of automated negotiation techniques for frequency assignment. Results show that these techniques are very suitable for the problem, being able to obtain the best solutions among the techniques with which we have compared them.
In order to evaluate the operation of high performance routers, CERN has developed the NetBench software to run benchmarking tests by injecting various traffic patterns and observing the network devices' behaviour in real time. The tool features a modular design with a Python-based console used to inject traffic and collect the results in a database, and a web user
In the present report we propose the automation of least-squares fitting of Moessbauer spectra, the identification of the substance, its crystal structure and the access to references, with the help of a genetic algorithm, fuzzy logic, and an artificial neural network associated with a databank of Moessbauer parameters and references. This system could be useful for specialists and non-specialists, in industry as well as in research laboratories.
Castejón, C.; Lara, O.; García-Prada, J. C.
Any industry needs an efficient predictive plan in order to optimize the management of resources and improve the economy of the plant by reducing unnecessary costs and increasing the level of safety. A great percentage of breakdowns in productive processes are caused by bearings. They begin to deteriorate from early stages of their functional life, also called the incipient level. This manuscript develops an automated diagnosis of rolling bearings based on the analysis and classification of signature vibrations. The novelty of this work is the application of the methodology proposed for data collected from a quasi-real industrial machine, where rolling bearings support the radial and axial loads the bearings are designed for. Multiresolution analysis (MRA) is used in a first stage in order to extract the most interesting features from signals. Features will be used in a second stage as inputs of a supervised neural network (NN) for classification purposes. Experimental results carried out in a real system show the soundness of the method which detects four bearing conditions (normal, inner race fault, outer race fault and ball fault) in a very incipient stage.
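A compact sketch of the two-stage scheme, assuming wavelet-based MRA with per-band energies as the extracted features (the concrete wavelet, decomposition depth and network size are placeholders, not the paper's settings):

```python
# Sketch: stage 1, multiresolution analysis (MRA) of the vibration
# signal via a discrete wavelet transform, reduced to per-band
# energies; stage 2, a supervised neural network classifier.
# Wavelet, depth and layer sizes are illustrative assumptions.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def mra_features(vibration, wavelet="db4", level=4):
    coeffs = pywt.wavedec(vibration, wavelet, level=level)
    # energy of each resolution band as the feature vector
    return np.array([np.sum(c ** 2) for c in coeffs])

# labels: 0 normal, 1 inner race fault, 2 outer race fault, 3 ball fault
def train_bearing_classifier(signals, labels):
    X = np.vstack([mra_features(s) for s in signals])
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000)
    clf.fit(X, labels)
    return clf
```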
This paper describes a new inverse analysis system using hierarchical (multilayer) neural networks and computational mechanics. The present inverse analysis basically consists of the following three subprocesses. First, by parametrically varying the system parameters, their corresponding system responses are calculated through computational mechanics simulations, each of which is an ordinary direct analysis; each data pair of system parameters and system responses is called a training pattern. Second, the back-propagation neural network is iteratively trained using a number of training patterns: the system responses are given to the input units of the network, while the system parameters to be identified are given to its output units as the teacher signal. Third, measured system responses are given to the well-trained network, which immediately outputs appropriate system parameters even for untrained patterns. This is the inverse analysis. To demonstrate its practical performance, the present system is applied to identify the locations and shapes of two adjacent dissimilar surface cracks hidden in a pipe with the electric potential drop method. The results clearly show that the present system is very efficient and accurate. (author). 7 refs., 10 figs
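A short sketch of the three subprocesses as code, with `simulate` standing in as a hypothetical placeholder for the direct computational-mechanics analysis:

```python
# Sketch of the inverse-analysis loop: run direct simulations over
# sampled system parameters, then train a network mapping responses
# back to parameters. `simulate` is a placeholder for the direct
# analysis; the regressor settings are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_inverse_model(simulate, param_samples):
    # subprocess 1: forward runs, parameters -> responses
    responses = np.vstack([simulate(p) for p in param_samples])
    # subprocess 2: responses as inputs, parameters as teacher signals
    inv = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
    inv.fit(responses, np.vstack(param_samples))
    return inv

# subprocess 3: measured responses -> estimated parameters, e.g.
# params_hat = build_inverse_model(simulate, samples).predict(measured)
```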
Full Text Available Since network analysis models based on system state face issues of network survivability, safety, fault tolerance and the ability to adapt dynamically to environmental changes, in this paper a network system model based on finite state automata is given a reconfigurable quality. The model first puts forward the concept of reconfigurable network systems and reveals robustness, evolution and survivability as their basic attributes. By establishing a hierarchical model of system state, the robust behavior, evolution behavior and survival behavior of the system are described. Secondly, taking the measurement of network topology reconfigurability as an example, quantitative reconfigurability metrics are put forward. Finally, an example verification is given. Experiments show that the proposed quantitative reconfigurability indicators of [1.391, 1.140, 1.591] demonstrate that the network topology is efficiently reconfigurable and can effectively adapt to dynamic changes in the environment.
Apollonio, F I; Ballabeni, A.; M. Gaiani; F. Remondino
Every day new tools and algorithms for automated image processing and 3D reconstruction purposes become available, giving the possibility to process large networks of unoriented and markerless images, delivering sparse 3D point clouds at reasonable processing time. In this paper we evaluate some feature-based methods used to automatically extract the tie points necessary for calibration and orientation procedures, in order to better understand their performances for 3D reconstruction...
The increased complexity and interconnectivity of flight deck automation has made the prediction of human–automation interaction (HAI) difficult and has resulted in a number of accidents and incidents. There is a need to develop objective and robust methods by which the changes in HAI brought about by the introduction of new automation into the flight deck could be predicted and assessed prior to implementation and without use of extensive simulation. This paper presents a method to model a parametrization of flight deck automation known as HART and link it to HAI consequences using a backpropagation neural network approach. The transformation of the HART into a computational model suitable for modeling as a neural network is described. To test and train the network, data were collected from 40 airline pilots for six HAI consequences based on one scenario family consisting of a baseline and four variants. For a binary classification of HAI consequences, the neural network successfully classified 62–78.5% depending on the consequence. The results were verified using a decision tree analysis.
This volume contains the papers presented at the 7th International Symposium on Automated Technology for Verification and Analysis held during October 13-16 in Macao SAR, China. The primary objective of the ATVA conferences remains the same: to exchange and promote the latest advances of state-of...
An automated safeguards system must be able to detect an anomalous event, identify the nature of the event, and recommend a corrective action. Neural networks represent a new way of thinking about basic computational mechanisms for intelligent information processing. In this paper, we discuss the issues involved in applying a neural network model to the first step of this process: anomaly detection in materials accounting systems. We extend our previous model to a 3-tank problem and compare different neural network architectures and algorithms. We evaluate the computational difficulties in training neural networks and explore how certain design principles affect the problems. The issues involved in building a neural network architecture include how the information flows, how the network is trained, how the neurons in a network are connected, how the neurons process information, and how the connections between neurons are modified. Our approach is based on the demonstrated ability of neural networks to model complex, nonlinear, real-time processes. By modeling the normal behavior of the processes, we can predict how a system should be behaving and, therefore, detect when an abnormality occurs
Fath, B.D.; Scharler, U.M.; Ulanowicz, R.E.; Hannon, B.
Ecological network analysis (ENA) is a systems-oriented methodology to analyze within system interactions used to identify holistic properties that are otherwise not evident from the direct observations. Like any analysis technique, the accuracy of the results is as good as the data available, but t
Praha : Katedra matematiky, FSv ČVUT v Praze, 2012, s. 19-20. [Aplikovaná matematika – Rektorysova soutěž. Praha (CZ), 07.12.2012] Institutional support: RVO:67985556 Keywords : Factor Analysis * Dynamic Sequence * Scintigraphy Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2012/AS/tichy-automated functional analysis in dynamic medical imaging.pdf
Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja
This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models. PMID:24308716
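A rough illustration of how two of the three statistical approaches might be compared on such data: the sketch below fits a linear regression and a Gaussian process to hypothetical (task load, message quality, WM capacity) predictors and shows the GP's uncertainty estimate, which is what supports robust prediction beyond the experimental conditions. All variables and the response surface are invented for illustration.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)
    # Hypothetical predictors: task load, message quality, WM-capacity score.
    X = rng.uniform(0, 1, size=(40, 3))
    # Hypothetical performance, with the load penalty modulated by WM capacity.
    y = (0.6 * X[:, 2] - 0.4 * X[:, 0] * (1 - X[:, 2])
         + 0.3 * X[:, 1] + rng.normal(0, 0.05, 40))

    lin = LinearRegression().fit(X, y)
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)

    x_new = np.array([[0.8, 0.5, 0.3]])   # heavy load, low WM capacity
    mean, std = gp.predict(x_new, return_std=True)
    print("linear:", lin.predict(x_new)[0])
    print("GP    : %.3f +/- %.3f" % (mean[0], std[0]))  # uncertainty estimate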
The paper describes a methodology for the automated creation of neural models of microwave structures. During the creation process, artificial neural networks are trained using a combination of particle swarm optimization and the quasi-Newton method to avoid critical training problems of conventional neural nets. In the paper, neural networks are used to approximate the behavior of a planar microwave filter (moment method, Zeland IE3D). In order to evaluate the efficiency of neural modeling, global optimizations are performed using both numerical models and neural ones, and the two approaches are compared in terms of CPU-time demands and accuracy. Based on the conclusions, methodological recommendations for including neural networks in microwave design are formulated.
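The two-phase training strategy (global particle swarm search followed by local quasi-Newton refinement) can be sketched as follows; here a simple multimodal test function stands in for the neural-model training error, and the PSO coefficients are generic textbook values rather than those of the paper.

    import numpy as np
    from scipy.optimize import minimize

    def loss(w):                      # stand-in for a neural-model training error
        return np.sum((w - 1.5) ** 2) + 0.3 * np.sum(np.sin(5 * w) ** 2)

    rng = np.random.default_rng(2)
    dim, n = 6, 20
    pos = rng.uniform(-3, 3, (n, dim)); vel = np.zeros((n, dim))
    pbest = pos.copy(); pbest_f = np.apply_along_axis(loss, 1, pos)
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(200):              # global PSO phase escapes local minima
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        f = np.apply_along_axis(loss, 1, pos)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()

    # Local quasi-Newton phase polishes the PSO result.
    result = minimize(loss, gbest, method="BFGS")
    print("PSO loss %.4f -> BFGS loss %.4f" % (loss(gbest), result.fun))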
Glycosylation is among the most common and complex post-translational modifications identified to date. It proceeds through the catalytic action of multiple enzyme families that include the glycosyltransferases, which add monosaccharides to growing glycans, and the glycosidases, which remove sugar residues to trim glycans. The expression level and specificity of these enzymes, in part, regulate the glycan distribution or glycome of specific cell/tissue systems. Currently, there is no systematic method to describe the enzymes and cellular reaction networks that catalyze glycosylation. To address this limitation, we present a streamlined machine-readable definition for the glycosylating enzymes and additional methodologies to construct and analyze glycosylation reaction networks. In this computational framework, the enzyme class is systematically designed to store detailed specificity data such as enzymatic functional group, linkage and substrate specificity. The new classes and their associated functions enable both single-reaction inference and automated full network reconstruction, when given a list of reactants and/or products along with the enzymes present in the system. In addition, graph theory is used to support functions that map the connectivity between two or more species in a network, and that generate subset models to identify rate-limiting steps regulating glycan biosynthesis. Finally, this framework allows the synthesis of biochemical reaction networks using mass spectrometry (MS) data. The features described above are illustrated using three case studies that examine: (i) O-linked glycan biosynthesis during the construction of functional selectin-ligands; (ii) automated N-linked glycosylation pathway construction; and (iii) the handling and analysis of glycomics-based MS data. Overall, the new computational framework enables automated glycosylation network model construction and analysis by integrating knowledge of glycan structure and enzyme
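As a toy illustration of single-reaction inference and automated network reconstruction, the sketch below encodes a few invented enzyme specificity rules, expands a reaction network from a seed glycan, and answers a graph-theoretic connectivity query with networkx. The enzyme names and rules are simplified stand-ins, not the paper's machine-readable definitions.

    import networkx as nx

    # Toy enzyme rules: each adds one monosaccharide when its substrate
    # requirement (the current terminal residue) is met. Illustrative only.
    enzymes = {
        "GalT": {"requires": "GlcNAc", "adds": "Gal"},
        "SiaT": {"requires": "Gal",    "adds": "NeuAc"},
        "FucT": {"requires": "GlcNAc", "adds": "Fuc"},
    }

    def extend(glycan):
        """Single-reaction inference: which products can each enzyme form?"""
        for name, rule in enzymes.items():
            if glycan[-1] == rule["requires"]:
                yield name, glycan + (rule["adds"],)

    # Automated network reconstruction from a seed structure.
    G = nx.DiGraph()
    frontier = [("GlcNAc",)]
    while frontier:
        glycan = frontier.pop()
        for enzyme, product in extend(glycan):
            if product not in G:
                frontier.append(product)
            G.add_edge(glycan, product, enzyme=enzyme)

    # Graph-theoretic query: connectivity between two species.
    print(nx.has_path(G, ("GlcNAc",), ("GlcNAc", "Gal", "NeuAc")))  # True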
Marcus, Ryan C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
The performance of a particular HPC code depends on a multitude of variables, including compiler selection, optimization flags, OpenMP pool size, file system load, memory usage, MPI configuration, etc. As a result of this complexity, current predictive models have limited applicability, especially at scale. We present a formulation of scientific codes, nodes, and clusters that reduces complex performance analysis to well-known mathematical techniques. Building accurate predictive models and enhancing our understanding of scientific codes at scale is an important step towards exascale computing.
Wireless Sensor Networks (WSNs) are gradually being adopted in the industrial world due to their advantages over wired networks. In addition to saving cabling costs, WSNs widen the realm of environments feasible for monitoring. They thus add sensing and acting capabilities to objects in the physical world and allow for communication among these objects or with services in the future Internet. However, the acceptance of WSNs by the industrial automation community is impeded by open issues, such as security guarantees and provision of Quality of Service (QoS). To examine both of these perspectives, we select and survey relevant WSN technologies dedicated to industrial automation. We determine QoS requirements and carry out a threat analysis, which act as the basis of our evaluation of the current state of the art. According to the results of this evaluation, we identify and discuss open research issues.
Schmidt, Michael D.; Vallabhajosyula, Ravishankar R.; Jenkins, Jerry W.; Hood, Jonathan E.; Soni, Abhishek S.; Wikswo, John P.; Lipson, Hod
The reverse engineering of metabolic networks from experimental data is traditionally a labor-intensive task requiring a priori systems knowledge. Using a proven model as a test system, we demonstrate an automated method to simplify this process by modifying an existing or related model (suggesting nonlinear terms and structural modifications) or even constructing a new model that agrees with the system's time series observations. In certain cases, this method can identify the full dynamical model from scratch without prior knowledge or structural assumptions. The algorithm selects between multiple candidate models by designing experiments to make their predictions disagree. We performed computational experiments to analyze a nonlinear seven-dimensional model of yeast glycolytic oscillations. This approach corrected mistakes reliably in both approximated and overspecified models. The method performed well up to high levels of noise for most states, could identify the correct model de novo, and made better predictions than ordinary parametric regression and neural network models. We identified an invariant quantity in the model and accurately derived the kinetics and the numerical sensitivity coefficients of the system. Finally, we compared the system to dynamic flux estimation and discussed the scaling and application of this methodology to automated experiment design and control in biological systems in real time.
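The model-discrimination step, selecting between candidate models by designing experiments that make their predictions disagree, can be illustrated in a few lines. In this sketch, two invented candidate kinetics for one observed species are integrated and the next measurement is placed where their predictions diverge most; the rate laws and constants are illustrative only, not those of the glycolysis model.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Two candidate models for the same observed species (illustrative kinetics).
    def model_a(t, y): return [-0.8 * y[0]]                  # first-order decay
    def model_b(t, y): return [-1.2 * y[0] / (0.5 + y[0])]   # Michaelis-Menten

    t_eval = np.linspace(0.0, 5.0, 101)
    ya = solve_ivp(model_a, (0, 5), [1.0], t_eval=t_eval).y[0]
    yb = solve_ivp(model_b, (0, 5), [1.0], t_eval=t_eval).y[0]

    # Design the next experiment where the candidates disagree the most:
    # sampling there gives maximal power to reject one of them.
    k = np.argmax(np.abs(ya - yb))
    print("measure at t = %.2f (predicted gap %.3f)" % (t_eval[k], abs(ya[k] - yb[k])))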
Muhammad, Syed Agha; Van Laerhoven, Kristof
Generally, social network analysis has often focused on the topology of the network without considering the characteristics of the individuals involved. Less attention has been given to studying the behavior of individuals, even though they are the basic entities of a graph. Given a mobile social network graph, what are good features for extracting key information from the nodes? How many distinct neighborhood patterns exist for ego nodes? What clues does such information provide to study nodes over a lo...
Davis, George; Cooter, Miranda; Updike, Clark; Carey, Everett; Mackey, Jennifer; Rykowski, Timothy; Powers, Edward I. (Technical Monitor)
Spacecraft trend analysis is a vital mission operations function performed by satellite controllers and engineers, who perform detailed analyses of engineering telemetry data to diagnose subsystem faults and to detect trends that may potentially lead to degraded subsystem performance or failure in the future. It is this latter function that is of greatest importance, for careful trending can often predict or detect events that may lead to a spacecraft's entry into safe-hold. Early prediction and detection of such events could result in the avoidance of, or rapid return to service from, spacecraft safing, which not only results in reduced recovery costs but also in a higher overall level of service for the satellite system. Contemporary spacecraft trending activities are manually intensive and are primarily performed diagnostically after a fault occurs, rather than proactively to predict its occurrence. They also tend to rely on information systems and software that are outdated when compared to current technologies. When coupled with the fact that flight operations teams often have limited resources, proactive trending opportunities are limited, and detailed trend analysis is often reserved for critical responses to safe holds or other on-orbit events such as maneuvers. While the contemporary trend analysis approach has sufficed for current single-spacecraft operations, it will be unfeasible for NASA's planned and proposed space science constellations. Missions such as the Dynamics, Reconnection and Configuration Observatory (DRACO), for example, are planning to launch as many as 100 'nanospacecraft' to form a homogeneous constellation. A simple extrapolation of resources and manpower based on single-spacecraft operations suggests that trending for such a large spacecraft fleet will be unmanageable, unwieldy, and cost-prohibitive. It is therefore imperative that an approach to automating the spacecraft trend analysis function be studied, developed, and applied to
Chapter 14 for the 2nd edition of the Handbook of Radioactivity Analysis. The techniques and examples described in this chapter demonstrate that modern fluidic techniques and instrumentation can be used to develop automated radiochemical separation workstations. In many applications, these can be mechanically simple and key parameters can be controlled from software. If desired, many of the fluidic components and solutions can be located remotely from the radioactive samples and other hot sample processing zones. There are many issues to address in developing automated radiochemical separations that perform reliably time after time in unattended operation. These are associated primarily with the separation and analytical chemistry aspects of the process. The relevant issues include the selectivity of the separation, decontamination factors, matrix effects, and recoveries from the separation column. In addition, flow rate effects, column lifetimes, carryover from one sample to another, and sample throughput must be considered. Nevertheless, successful approaches for addressing these issues have been developed. Radiochemical analysis is required not only for processing nuclear waste samples in the laboratory, but also for at-site or in situ applications. Monitors for nuclear waste processing operations represent an at-site application where continuous unattended monitoring is required to assure effective process radiochemical separations that produce waste streams that qualify for conversion to stable waste forms. Radionuclide sensors for water monitoring and long term stewardship represent an application where at-site or in situ measurements will be most effective. Automated radiochemical analyzers and sensors have been developed that demonstrate that radiochemical analysis beyond the analytical laboratory is both possible and practical.
M. Istrate
Automated analysis of natural language texts is one of the most important knowledge discovery tasks for any organization. According to Gartner Group, almost 90% of the knowledge available in an organization today is dispersed throughout piles of documents buried within unstructured text. Analyzing huge volumes of textual information is often required to make informed and correct business decisions. Traditional analysis methods based on statistics fail to help in processing unstructured texts, and the field is in search of new technologies for text analysis. There exist a variety of approaches to the analysis of natural language texts, but most of them do not provide results that could be successfully applied in practice. This article concentrates on recent ideas and practical implementations in this area.
Poppe, Lawrence J.; Eliason, A.H.; Fredericks, J.J.
The Automated Particle Size Analysis System integrates a settling tube and an electroresistance multichannel particle-size analyzer (Coulter Counter) with a Pro-Comp/gg microcomputer and a Hewlett Packard 2100 MX (HP 2100 MX) minicomputer. This system and its associated software digitize the raw sediment grain-size data, combine the coarse- and fine-fraction data into complete grain-size distributions, perform method of moments and inclusive graphics statistics, verbally classify the sediment, generate histogram and cumulative frequency plots, and transfer the results into a data-retrieval system. This system saves time and labor and affords greater reliability, resolution, and reproducibility than conventional methods do.
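For illustration, the method-of-moments statistics mentioned above can be computed from a combined grain-size distribution as follows; the phi classes and weight percentages are invented example data, not output of the described system.

    import numpy as np

    # Grain sizes in phi units with their weight percentages (illustrative data).
    phi = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
    weight = np.array([5.0, 20.0, 35.0, 25.0, 10.0, 5.0])
    w = weight / weight.sum()

    # Method-of-moments statistics over the combined distribution.
    mean = np.sum(w * phi)
    sorting = np.sqrt(np.sum(w * (phi - mean) ** 2))      # standard deviation
    skewness = np.sum(w * (phi - mean) ** 3) / sorting ** 3
    kurtosis = np.sum(w * (phi - mean) ** 4) / sorting ** 4
    print("mean %.2f phi, sorting %.2f, skewness %.2f, kurtosis %.2f"
          % (mean, sorting, skewness, kurtosis))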
Kwan, Chiman; Xu, Roger; Mayhew, David; Zhang, Frank; Zide, Alan; Bonggren, Jeff
A computer program partly automates the analysis, classification, and display of waveforms represented by digital samples. In the original application for which the program was developed, the raw waveform data to be analyzed by the program are acquired from space-shuttle auxiliary power units (APUs) at a sampling rate of 100 Hz. The program could also be modified for application to other waveforms, for example, electrocardiograms. The program begins by performing principal-component analysis (PCA) of 50 normal-mode APU waveforms. Each waveform is segmented. A covariance matrix is formed by use of the segmented waveforms. Three eigenvectors corresponding to three principal components are calculated. To generate features, each waveform is then projected onto the eigenvectors. These features are displayed on a three-dimensional diagram, facilitating the visualization of the trend of APU operations.
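The PCA-based feature generation can be sketched compactly. The code below builds a covariance matrix from synthetic stand-ins for the 50 normal-mode waveforms (skipping the segmentation step for brevity), extracts the three principal eigenvectors, and projects each waveform onto them to obtain the 3-D features described above.

    import numpy as np

    rng = np.random.default_rng(3)
    # 50 synthetic "normal-mode" waveforms, 200 samples each (stand-ins for APU data).
    t = np.linspace(0, 2 * np.pi, 200)
    waves = np.sin(t) + 0.1 * rng.normal(size=(50, 200))

    # Covariance of the mean-centered waveforms and its top three eigenvectors.
    centered = waves - waves.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    principal = eigvecs[:, -3:]            # three largest eigenvalues come last

    # Project each waveform onto the eigenvectors to get 3-D features
    # suitable for the kind of trend display described above.
    features = centered @ principal
    print(features.shape)                  # (50, 3)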
August Betzler; Carles Gomez; Ilker Demirkol; Josep Paradells
Wireless home automation networks are gaining importance for smart homes. In this ambit, ZigBee networks play an important role. The ZigBee specification defines a default set of protocol stack parameters and mechanisms that is further refined by the ZigBee Home Automation application profile. In a holistic approach, we analyze how network performance is affected by the tuning of parameters and mechanisms across multiple layers of the ZigBee protocol stack and investigate possible perfo...
The primary purpose of the current research was to develop an integrated approach combining information compression methods and artificial neural networks for the monitoring of plant components using nondestructive examination data. Specifically, data from eddy current inspection of heat exchanger tubing were utilized to evaluate this technology. The focus of the research was to develop and test various data compression methods (for eddy current data) and to evaluate the performance of different neural network paradigms for defect classification and defect parameter estimation. Feedforward, fully connected neural networks that use the back-propagation algorithm for network training were implemented for defect classification and defect parameter estimation using a modular network architecture. A large eddy current tube inspection database was acquired from the Metals and Ceramics Division of ORNL. These data were used to study the performance of artificial neural networks for defect type classification and for estimating defect parameters. A PC-based data preprocessing and display program was also developed as part of an expert system for data management and decision making. The results of the analysis showed that for effective (low-error) defect classification and estimation of parameters, it is necessary to identify proper feature vectors using different data representation methods. The integration of data compression and artificial neural networks for information processing was established as an effective technique for automation of diagnostics using nondestructive examination methods.
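A minimal sketch of the two-stage structure just described (data compression followed by neural network classification), applied to synthetic stand-ins for eddy current signatures; the feature dimensions, network size, and class separation are invented for illustration and do not reflect the ORNL data.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(4)
    # Synthetic stand-ins for eddy current signatures of two defect types.
    X = np.vstack([rng.normal(0.0, 1.0, (100, 64)),
                   rng.normal(0.6, 1.0, (100, 64))])
    y = np.repeat([0, 1], 100)

    # Compress the raw signal into a small feature vector, then classify:
    # the same compression -> neural network structure as described above.
    clf = make_pipeline(PCA(n_components=8),
                        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                      random_state=0))
    clf.fit(X, y)
    print("training accuracy: %.2f" % clf.score(X, y))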
Perren, Gabriel I; Piatti, Andrés E
We present ASteCA (Automated Stellar Cluster Analysis), a suite of tools designed to fully automate the standard tests applied to stellar clusters in order to determine their basic parameters. The set of functions included in the code makes use of positional and photometric data to obtain precise and objective values for a given cluster's center coordinates, radius, luminosity function and integrated color magnitude, as well as characterizing, through a statistical estimator, its probability of being a true physical cluster rather than a random overdensity of field stars. ASteCA incorporates a Bayesian field star decontamination algorithm capable of assigning membership probabilities using photometric data alone. An isochrone fitting process based on the generation of synthetic clusters from theoretical isochrones and selection of the best fit through a genetic algorithm is also present, which allows ASteCA to provide accurate estimates for a cluster's metallicity, age, extinction and distance values along with its unce...
Ling Bai; Ping Guo; Zhan-Yi Hu
An automated classification technique for large stellar surveys is proposed. It uses the extended Kalman filter as a feature selector and pre-classifier of the data, and radial basis function neural networks for the classification. Experiments with real data have shown that the correct classification rate can reach as high as 93%, which is quite satisfactory. When different system models are selected for the extended Kalman filter, the classification results are relatively stable. It is shown that, for this particular case, the result using the extended Kalman filter is better than that using principal component analysis.
王峥; 胡敏强; 郑建勇
This article first presents a basic analysis of the data generated by field measuring and controlling units (FMCUs), using elementary probability theory and considering the particularities of small and medium integrated substation automation systems. Detailed discussions of each program segment running in the FMCU are then given. The limitation that the FMCU's CPU resources place on the polling cycle is studied. After each part of the polling cycle is determined, expressions for the steady-state performance of the field-level communication network, such as channel utilization and average delay, are given, from which the optimum number of FMCUs in such a network is obtained.
Farace, Richard V.; Mabee, Timothy
This paper reviews a variety of analytic procedures that can be applied to network data, discussing the assumptions and usefulness of each procedure when applied to the complexity of human communication. Special attention is paid to the network properties measured or implied by each procedure. Factor analysis and multidimensional scaling are among…
The methods developed for automatic processing of the different stages of chromosome analysis are described in this study, which is divided into three parts. Part 1 covers the automated selection of metaphase spreads, which operates a decision process in order to reject all non-pertinent images and keep the good ones. This approach was achieved by writing a simulation program that allowed the proper selection algorithms to be established, in order to design a kit of electronic logical units. Part 2 deals with the automatic processing of the morphological study of the chromosome complements in a metaphase: the metaphase photographs are processed by an optical-to-digital converter which extracts the image information and writes it out as a digital data set on a magnetic tape. For one metaphase image this data set includes some 200,000 grey values, encoded according to a 16-, 32- or 64-grey-level scale, and is processed by a pattern recognition program isolating the chromosomes and investigating their characteristic features (arm tips, centromere areas) in order to get measurements equivalent to the lengths of the four arms. Part 3 studies a program of automated karyotyping by optimized pairing of human chromosomes. The data are derived from direct digitizing of the arm lengths by means of a BENSON digital reader. The program supplies: (1) a list of the pairs; (2) a graphic representation of the pairs so constituted, according to their respective lengths and centromeric indexes; and (3) another BENSON graphic drawing according to the author's own representation of the chromosomes, i.e. crosses with orthogonal arms, each branch being the accurate measurement of the corresponding chromosome arm. This conventionalized karyotype indicates on the last line the really abnormal or non-standard images left unpaired by the program, which are of special interest for the biologist. (author)
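The optimized-pairing idea in Part 3 can be caricatured with a greedy sketch: order the chromosomes by total length (with the centromeric index as a tiebreaker), pair neighbors, and flag pairs whose lengths disagree. Real karyotyping uses more careful matching, and the measurements here are simulated rather than digitized arm lengths.

    import numpy as np

    rng = np.random.default_rng(5)
    # Hypothetical arm-length measurements: 46 chromosomes, two arm lengths each
    # (columns standing in for short and long arms).
    true_pairs = rng.uniform(1.0, 10.0, (23, 2))
    arms = np.repeat(true_pairs, 2, axis=0) + rng.normal(0, 0.05, (46, 2))

    total = arms.sum(axis=1)
    centromeric_index = arms[:, 0] / total     # short arm / total length

    # Greedy pairing sketch: sort by length, then pair neighbors, mirroring
    # the optimized pairing on lengths and centromeric indexes described above.
    order = np.lexsort((centromeric_index, total))
    pairs = order.reshape(23, 2)
    residual = np.abs(total[pairs[:, 0]] - total[pairs[:, 1]])
    print("pairs with large length mismatch:", int((residual > 0.5).sum()))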
In high-risk domains like aviation, medicine and nuclear power plant control, automation has enabled new capabilities, increased the economy of operation and has greatly contributed to safety. However, automation increases the number of couplings in a system, which can inadvertently lead to more com
Smith, D.; Dieterly, D. L.
Current thought and research positions which may allow for an improved capability to understand the impact of introducing automation to an existing system are established. The orientation was toward the type of studies which may provide some general insight into automation; specifically, the impact of automation on human performance and the resulting system performance. While an extensive number of articles were reviewed, only those that addressed the issue of automation and human performance were selected for discussion. The literature is organized along two dimensions: time (pre-1970, post-1970) and type of approach (engineering or behavioral science). The conclusions reached are not definitive, but they do provide initial stepping stones in an attempt to begin to bridge the concept of automation in a systematic progression.
The INETEC Institute for Nuclear Technology developed a software package called Eddy One, which offers automated analysis of bobbin coil eddy current data. During its development and on-site use, many valuable lessons were learned, which are described in this article. Accordingly, the following topics are covered: general requirements for automated analysis of bobbin coil eddy current data; main approaches to automated analysis; multi-rule algorithms for data screening; landmark detection algorithms as a prerequisite for automated analysis (threshold algorithms and algorithms based on neural network principles); field experience with the Eddy One software; development directions (use of artificial intelligence with self-learning abilities for indication detection and sizing); automated analysis software qualification; and conclusions. Special emphasis is given to results obtained on different types of steam generators, condensers and heat exchangers. Such results are then compared with results obtained by other automated software vendors, giving a clear advantage to the INETEC approach. It has to be pointed out that INETEC field experience was also collected on WWER steam generators, which is so far a unique experience. (author)
Saur, Drew D.; Tan, Yap-Peng; Kulkarni, Sanjeev R.; Ramadge, Peter J.
Automated analysis and annotation of video sequences are important for digital video libraries, content-based video browsing and data mining projects. A successful video annotation system should provide users with a useful video content summary in a reasonable processing time. Given the wide variety of video genres available today, automatically extracting meaningful video content for annotation still remains hard using currently available techniques. However, a wide range of video has inherent structure such that some prior knowledge about the video content can be exploited to improve our understanding of the high-level video semantic content. In this paper, we develop tools and techniques for analyzing structured video by using the low-level information available directly from MPEG compressed video. Being able to work directly in the video compressed domain can greatly reduce the processing time and enhance storage efficiency. As a testbed, we have developed a basketball annotation system which combines the low-level information extracted from the MPEG stream with prior knowledge of basketball video structure to provide high-level content analysis, annotation and browsing for events such as wide-angle and close-up views, fast breaks, steals, potential shots, number of possessions and possession times. We expect our approach can also be extended to structured video in other domains.
Reddick, W E; Glass, J O; Cook, E N; Elkin, T D; Deaton, R J
We present a fully automated process for segmentation and classification of multispectral magnetic resonance (MR) images. This hybrid neural network method uses a Kohonen self-organizing neural network for segmentation and a multilayer backpropagation neural network for classification. To separate different tissue types, this process uses the standard T1-, T2-, and PD-weighted MR images acquired in clinical examinations. Volumetric measurements of brain structures, relative to intracranial volume, were calculated for an index transverse section in 14 normal subjects (median age 25 years; seven male, seven female). This index slice was at the level of the basal ganglia, included both genu and splenium of the corpus callosum, and generally, showed the putamen and lateral ventricle. An intraclass correlation of this automated segmentation and classification of tissues with the accepted standard of radiologist identification for the index slice in the 14 volunteers demonstrated coefficients (ri) of 0.91, 0.95, and 0.98 for white matter, gray matter, and ventricular cerebrospinal fluid (CSF), respectively. An analysis of variance for estimates of brain parenchyma volumes in five volunteers imaged five times each demonstrated high intrasubject reproducibility with a significance of at least p < 0.05 for white matter, gray matter, and white/gray partial volumes. The population variation, across 14 volunteers, demonstrated little deviation from the averages for gray and white matter, while partial volume classes exhibited a slightly higher degree of variability. This fully automated technique produces reliable and reproducible MR image segmentation and classification while eliminating intra- and interobserver variability. PMID:9533591
Abhinav Talgeri; Abheesh Kumar B A
This paper describes an investigation into the potential for remote-controlled operation of home automation (also called domotics) systems. It considers problems with their implementation, discusses possible solutions through various network technologies and indicates how to optimize the use of such systems. The paper emphasizes the design and prototype implementation of a new home automation system that uses WiFi technology as a network infrastructure connecting its part...
Burcin Ozcan; Pooran Negi; Fernanda Laezza; Manos Papadakis; Demetrio Labate
Automated identification of the primary components of a neuron and extraction of its sub-cellular features are essential steps in many quantitative studies of neuronal networks. The focus of this paper is the development of an algorithm for the automated detection of the location and morphology of somas in confocal images of neuronal network cultures. This problem is motivated by applications in high-content screenings (HCS), where the extraction of multiple morphological features of neurons ...
Murthy, Chaitanya R.; Gao, Bo; Tao, Andrea R.; Arya, Gaurav
The ability to characterize higher-order structures formed by nanoparticle (NP) assembly is critical for predicting and engineering the properties of advanced nanocomposite materials. Here we develop a quantitative image analysis software to characterize key structural properties of NP clusters from experimental images of nanocomposites. This analysis can be carried out on images captured at intermittent times during assembly to monitor the time evolution of NP clusters in a highly automated manner. The software outputs averages and distributions in the size, radius of gyration, fractal dimension, backbone length, end-to-end distance, anisotropic ratio, and aspect ratio of NP clusters as a function of time along with bootstrapped error bounds for all calculated properties. The polydispersity in the NP building blocks and biases in the sampling of NP clusters are accounted for through the use of probabilistic weights. This software, named Particle Image Characterization Tool (PICT), has been made publicly available and could be an invaluable resource for researchers studying NP assembly. To demonstrate its practical utility, we used PICT to analyze scanning electron microscopy images taken during the assembly of surface-functionalized metal NPs of differing shapes and sizes within a polymer matrix. PICT is used to characterize and analyze the morphology of NP clusters, providing quantitative information that can be used to elucidate the physical mechanisms governing NP assembly.
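A few of the cluster properties PICT reports can be illustrated with elementary NumPy on the pixel coordinates of a single segmented cluster. The coordinates below are a random walk standing in for real image data, and the formulas (radius of gyration, end-to-end distance, gyration-tensor anisotropy) are standard definitions rather than PICT's internals.

    import numpy as np

    rng = np.random.default_rng(6)
    # Pixel coordinates of one segmented NP cluster (illustrative random walk).
    steps = rng.normal(0, 1, (200, 2)).cumsum(axis=0)

    centroid = steps.mean(axis=0)
    rg = np.sqrt(((steps - centroid) ** 2).sum(axis=1).mean())  # radius of gyration
    end_to_end = np.linalg.norm(steps[-1] - steps[0])

    # Anisotropy from the gyration tensor's eigenvalues.
    gyration = np.cov(steps, rowvar=False, bias=True)
    lam = np.sort(np.linalg.eigvalsh(gyration))
    print("Rg %.2f, end-to-end %.2f, aspect ratio %.2f"
          % (rg, end_to_end, np.sqrt(lam[1] / lam[0])))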
Johnston, Mark D.; Tran, Daniel; Arroyo, Belinda; Call, Jared; Mercado, Marisol
The DSN Scheduling Engine (DSE) has been developed to increase the level of automated scheduling support available to users of NASA's Deep Space Network (DSN). We have adopted a request-driven approach to DSN scheduling, in contrast to the activity-oriented approach used up to now. Scheduling requests allow users to declaratively specify patterns and conditions on their DSN service allocations, including timing, resource requirements, gaps, overlaps, time linkages among services, repetition, priorities, and a wide range of additional factors and preferences. The DSE incorporates a model of the key constraints and preferences of the DSN scheduling domain, along with algorithms to expand scheduling requests into valid resource allocations, to resolve schedule conflicts, and to repair unsatisfied requests. We use time-bounded systematic search with constraint relaxation to return nearby solutions if exact ones cannot be found, where the relaxation options and order are under user control. To explore the usability aspects of our approach we have developed a graphical user interface incorporating some crucial features to make it easier to work with complex scheduling requests. Among these are: progressive revelation of relevant detail, immediate propagation and visual feedback from a user's decisions, and a meeting calendar metaphor for repeated patterns of requests. Even as a prototype, the DSE has been deployed and adopted as the initial step in building the operational DSN schedule, thus representing an important initial validation of our overall approach. The DSE is a core element of the DSN Service Scheduling Software (S^3), a web-based collaborative scheduling system now under development for deployment to all DSN users.
Oke Alice O
The making of an effective fuel measuring system has been a great challenge in the Nigerian industry, as various oil organizations are running into different problems ranging from fire outbreaks and oil pilfering to oil spillage and other negative effects. The use of a meter rule or long rod at most petrol filling stations for assessing the quantity of fuel in a tank is inefficient, stressful, dangerous and almost impossible in a networked environment. This archaic method does not provide a good reorder date and does not give a good inventory. As such, there is a need to automate the system by providing real-time measurement of fuel storage to meet the demand of the customers. In this paper, a system was designed to sense the level of fuel in networked tanks using a capacitive sensor controlled by an ATMEGA 328 Arduino microcontroller. The result was transmitted in both digital and analogue form through radio frequency transmission using XBee and interfaced to a computer system for notification of fuel level and refill operations. This enables consumption control, cost analysis and tax accounting for fuel purchases.
Piette, Mary Ann; Brown, Richard; Price, Phillip; Page, Janie; Granderson, Jessica; Riess, David; Czarnecki, Stephen; Ghatikar, Girish; Lanzisera, Steven
The Transactional Network Project is a multi-lab activity funded by the US Department of Energy's Building Technologies Office. The project team included staff from Lawrence Berkeley National Laboratory, Pacific Northwest National Laboratory and Oak Ridge National Laboratory. The team designed, prototyped and tested a transactional network (TN) platform to support energy, operational and financial transactions between any networked entities (equipment, organizations, buildings, grid, etc.). PNNL was responsible for the development of the TN platform, with agents for this platform developed by each of the three labs. LBNL contributed applications to measure the whole-building electric load response to various changes in building operations, particularly energy efficiency improvements and demand response events. We also provide a demand response signaling agent and an agent for cost savings analysis. LBNL and PNNL demonstrated actual transactions between packaged rooftop units and the electric grid using the platform and selected agents. This document describes the agents and applications developed by the LBNL team, and associated tests of the applications.
This paper introduces an Area γ Radiation Monitoring Network System based on Totally Integrated Automation. It features simple and safe process control; easy integration of the information network, field bus and field instrumentation; modular design; and powerful system expansion. It implements management and control integration and is of positive importance for the localization of radiation monitoring systems. (authors)
32 CFR 2001.50 (National Defense): Telecommunications automated information systems and network security. Each agency head shall ensure that classified information electronically...
Russell, J.A.G.; Alexoff, D.L.; Wolf, A.P.
This presentation describes an evolving distributed microprocessor network for automating the routine production synthesis of radiotracers used in Positron Emission Tomography. We first present a brief overview of the PET method for measuring biological function, and then outline the general procedure for producing a radiotracer. The paper identifies several reasons for our automating the syntheses of these compounds. There is a description of the distributed microprocessor network architecture chosen and the rationale for that choice. Finally, we speculate about how this network may be exploited to extend the power of the PET method from the large university or National Laboratory to the biomedical research and clinical community at large. 20 refs. (DT)
Darvann, T. A.; Hermann, N. V.; Larsen, P.; Ólafsdóttir, Hildur; Hansen, I. V.; Hove, H. D.; Christensen, L.; Rueckert, D.; Kreiborg, S.
We present an automated method of spatially detailed 3D asymmetry quantification in mandibles extracted from CT and apply it to a population of infants with unilateral coronal synostosis (UCS). An atlas-based method employing non-rigid registration of surfaces is used for determining deformation...
Gary Casuccio (RJ Lee Group); Michael Potter (RJ Lee Group); Fred Schwerer (RJ Lee Group); Dr. Richard J. Fruehan (Carnegie Mellon University); Dr. Scott Story (US Steel)
The objective of this study was to develop the Automated Steel Cleanliness Analysis Tool (ASCAT) to permit steelmakers to evaluate the quality of the steel through the analysis of individual inclusions. By characterizing individual inclusions, determinations can be made as to the cleanliness of the steel. Understanding the complicating effects of inclusions in the steelmaking process and on the resulting properties of steel allows the steel producer to increase throughput, better control the process, reduce remelts, and improve the quality of the product. The ASCAT (Figure 1) is a steel-smart inclusion analysis tool developed around a customized next-generation computer controlled scanning electron microscopy (NG-CCSEM) hardware platform that permits acquisition of inclusion size and composition data at a rate never before possible in SEM-based instruments. With built-in customized "intelligent" software, the inclusion data is automatically sorted into clusters representing different inclusion types to define the characteristics of a particular heat (Figure 2). The ASCAT represents an innovative new tool for the collection of statistically meaningful data on inclusions, and provides a means of understanding the complicated effects of inclusions in the steelmaking process and on the resulting properties of steel. Research conducted by RJLG with AISI (American Iron and Steel Institute) and SMA (Steel Manufacturers of America) members indicates that the ASCAT has application in high-grade bar, sheet, plate, tin products, pipes, SBQ, tire cord, welding rod, and specialty steels and alloys where control of inclusions, whether natural or engineered, is crucial to their specification for a given end-use. Example applications include castability of calcium treated steel; interstitial free (IF) degasser grade slag conditioning practice; tundish clogging and erosion minimization; degasser circulation and optimization; quality assessment/steel cleanliness; slab, billet
Keller, Paul E.; Kangas, Lars J.; Hayes, James C.; Schrom, Brian T.; Suarez, Reynold; Hubbard, Charles W.; Heimbigner, Tom R.; McIntyre, Justin I.
Artificial neural networks (ANNs) are used to determine the state-of-health (SOH) of the Automated Radioxenon Analyzer/Sampler (ARSA). ARSA is a gas collection and analysis system used for non-proliferation monitoring in detecting radioxenon released during nuclear tests. SOH diagnostics are important for automated, unmanned sensing systems so that remote detection and identification of problems can be made without onsite staff. Both recurrent and feed-forward ANNs are presented. The recurrent ANN is trained to predict sensor values based on current valve states, which control air flow, so that with only valve states the normal SOH sensor values can be predicted. Deviation between modeled value and actual is an indication of a potential problem. The feed-forward ANN acts as a nonlinear version of principal components analysis (PCA) and is trained to replicate the normal SOH sensor values. Because of ARSA's complexity, this nonlinear PCA is better able to capture the relationships among the sensors than standard linear PCA and is applicable to both sensor validation and recognizing off-normal operating conditions. Both models provide valuable information to detect impending malfunctions before they occur to avoid unscheduled shutdown. Finally, the ability of ANN methods to predict the system state is presented.
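The feed-forward "nonlinear PCA" idea can be sketched with a small autoencoder-style regressor trained to replicate normal sensor vectors; reconstruction error then separates normal from off-normal inputs. The sensor model, network size, and threshold below are illustrative assumptions, not ARSA parameters.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(7)
    # Synthetic "normal" SOH sensor vectors driven by two hidden factors,
    # standing in for flow/pressure/temperature channels.
    factors = rng.normal(size=(1000, 2))
    normal = (np.tanh(factors @ rng.normal(size=(2, 8)))
              + 0.02 * rng.normal(size=(1000, 8)))

    # Feed-forward net trained to replicate its input through a narrow
    # hidden layer: a nonlinear analogue of PCA, as described above.
    auto = MLPRegressor(hidden_layer_sizes=(3,), max_iter=3000, random_state=0)
    auto.fit(normal, normal)

    def off_normal(x, threshold=0.1):
        """Large reconstruction error suggests a sensor fault or new regime."""
        return np.mean((auto.predict(x) - x) ** 2, axis=1) > threshold

    print(off_normal(normal[:5]))                                 # mostly False
    print(off_normal(normal[:5] + np.array([0, 0, 2.0, 0, 0, 0, 0, 0])))  # True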
The book presents some key mathematical tools for the performance analysis of communication networks and computer systems. Communication networks and computer systems have become extremely complex. The statistical resource sharing induced by the random behavior of users and the underlying protocols and algorithms may affect Quality of Service. This book introduces the main results of queuing theory that are useful for analyzing the performance of these systems. These mathematical tools are key to the development of robust dimensioning rules and engineering methods. A number of examples i
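As a flavor of the queuing results such a text develops, the basic M/M/1 formulas can be evaluated directly; the arrival and service rates below are arbitrary example values.

    # M/M/1 queue: the simplest of the queuing models covered by such texts.
    # lam = arrival rate, mu = service rate (requests per second, illustrative).
    lam, mu = 80.0, 100.0
    rho = lam / mu                      # server utilization, must be < 1
    n_avg = rho / (1 - rho)             # mean number of jobs in the system
    t_avg = 1 / (mu - lam)              # mean time in system (sojourn time)
    w_avg = rho / (mu - lam)            # mean waiting time in the queue
    print("utilization %.2f, mean jobs %.1f, sojourn %.3f s, wait %.3f s"
          % (rho, n_avg, t_avg, w_avg))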
Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters
Chittenden Thomas W
Background: In this paper, we present and validate a way to measure automatically the extent of cell migration based on automated examination of a series of digital photographs. It was designed specifically to identify the impact of Second Hand Smoke (SHS) on endothelial cell migration but has broader applications. The analysis has two stages: (1) preprocessing of image texture, and (2) migration analysis. Results: The output is a graphic overlay that indicates the front lines of cell migration superimposed on each original image, with automated reporting of the distance traversed vs. time. Expert comparison to manual placement of the leading edge shows complete equivalence of automated vs. manual leading edge definition for cell migration measurement. Conclusion: Our method is indistinguishable from careful manual determination of cell front lines, with the advantages of full automation, objectivity, and speed.
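The leading-edge extraction can be illustrated with a toy version: given a binary mask of cell pixels advancing from one side of the frame, the front line is the furthest occupied row in each column, and the migration distance follows by differencing fronts across time points. The mask below is synthetic, and the published method's texture preprocessing is not reproduced.

    import numpy as np

    rng = np.random.default_rng(8)
    # Binary mask of a synthetic image: cells (True) advancing from the top.
    front_truth = 40 + (5 * np.sin(np.linspace(0, 3, 320))).astype(int)
    mask = np.arange(240)[:, None] < front_truth[None, :]
    mask &= rng.random((240, 320)) > 0.1          # ragged, noisy edge

    # Front line per column: the lowest row still occupied by cells.
    rows = np.arange(240)[:, None]
    front = np.where(mask, rows, -1).max(axis=0)

    # Distance traversed vs. time = difference of front positions between frames.
    print("mean front position: %.1f px" % front.mean())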
Automated identification of the primary components of a neuron and extraction of its sub-cellular features are essential steps in many quantitative studies of neuronal networks. The focus of this paper is the development of an algorithm for the automated detection of the location and morphology of somas in confocal images of neuronal network cultures. This problem is motivated by applications in high-content screenings (HCS), where the extraction of multiple morphological features of neurons on large data sets is required. Existing algorithms are not very efficient when applied to the analysis of confocal image stacks of neuronal cultures. In addition to the usual difficulties associated with the processing of fluorescent images, these types of stacks contain a small number of images, so that only a small number of pixels are available along the z-direction and it is challenging to apply conventional 3D filters. The algorithm we present in this paper applies a number of innovative ideas from the theory of directional multiscale representations and involves the following steps: (i) image segmentation based on support vector machines with specially designed multiscale filters; (ii) soma extraction and separation of contiguous somas, using a combination of the level set method and directional multiscale filters. We also present an approach to extract the soma's surface morphology using the 3D shearlet transform. Extensive numerical experiments show that our algorithms are computationally efficient and highly accurate in segmenting the somas and separating contiguous ones. The algorithms presented in this paper will facilitate the development of a high-throughput quantitative platform for the study of neuronal networks for HCS applications.
Network systems security analysis has utmost importance in today's world. Many companies, like banks which give priority to data management, test their own data security systems with "penetration tests" from time to time. In this context, companies must also test their own network/server systems and take precautions, as data security draws attention. Based on this idea, in this study cyber-attacks are researched thoroughly and penetration test techniques are examined. With this information, cyber-attacks are classified and network systems' security is then tested systematically. After the testing period, all data are reported and filed for future reference. Consequently, it is found that human beings are the weakest link in the chain and simple mistakes may unintentionally cause huge problems. Thus, it is clear that some precautions must be taken to avoid such threats, such as keeping the security software updated.
Xu Hong-lin; Yan Han-bing; Gao Cui-fang; Zhu Ping
Based on community structure characteristics and the theory and methods of frequent subgraph mining, network motif finding is first introduced into social network analysis; a tendentiousness evaluation function and an importance evaluation function are proposed for effectiveness assessment. Compared with the traditional approach based on node centrality degree, the new approach can be used to analyze the properties of a social network more fully and judge the roles of the nodes effectively. I...
JangMook Kang; Woojin Lee; Juil Kim
In sensor networks, nodes must often operate in a demanding environment facing restrictions such as limited computing resources, unreliable wireless communication and power shortages. Such factors make the development of ubiquitous sensor network (USN) applications challenging. To help developers construct a large amount of node software for sensor network applications easily and rapidly, this paper proposes an approach to the automated construction of node software for USN applications us...
Castro Lechtaler, Antonio; Liporace, Julio César; Cipriano, Marcelo; García, Edith; Maiorano, Ariel; Malvacio, Eduardo; Tapia, Néstor
An updated version of a tool for automated analysis of source code patches and branch differences is presented. The upgrade involves the use of machine learning techniques on source code, comments, and messages. It aims to help analysts, code reviewers, or auditors perform repetitive tasks continuously. The designed environment encourages collaborative work. It systematizes certain tasks pertaining to reviewing or auditing processes. Currently, the scope of the automated test is limited. C...
Xianhui Yi; Dongbo Yan; Huanbin Liu; Jigeng Li
This paper presents an automated testing network system for a paper laboratory based on the CAN bus. The overall architecture, hardware interface and software functions are discussed in detail. Experiments indicate that the system can collect, analyze and store the test results from the various measuring instruments in the paper lab automatically. This simple, reliable, low-cost measurement automation system should find broad application in the paper industry.
Kingdon, J. C.
This thesis presents an automated system for financial time series modelling. Formal and applied methods are investigated for combining feed-forward Neural Networks and Genetic Algorithms (GAs) into a single adaptive/learning system for automated time series forecasting. Four important research contributions arise from this investigation: i) novel forms of GAs are introduced which are designed to counter the representational bias associated with the conventional Holland GA, ii) an...
Vivek Kumar Yadav
Network security checking is a vital process for assessing and identifying weaknesses in a network for security management. Insecure entry points of a network give attackers an easy target to access and compromise. Open ports of network components such as firewalls, gateways, and end systems are analogous to open gates of a building through which anyone can get in. Network scanning is performed to identify insecure entry points in the network components, and vulnerability assessment is performed to find the vulnerabilities at these points. Security checking thus consists of both activities: network scanning and vulnerability assessment. A single tool used for security checking may not give reliable results. This paper presents a framework for assessing the security of a network using multiple network scanning and vulnerability assessment tools. The proposed framework is an extension of the framework given by Jun Yoon and Wontae Sim, which performs vulnerability scanning only. The framework presented here adds a network scanning, alerting, and reporting system to their framework. Network scanning and vulnerability tools complement each other and make the process amenable to centralized control and management. The reporting system of the framework sends an email to the network administrator containing a detailed report (as an attachment) of the security checking process. The alerting system sends an SMS message as an alert to the network administrator in case severe threats are found in the network. Initial results of the framework are encouraging and further work is in progress.
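A minimal sketch of how such a framework might chain tools and report results is given below; the scanner commands, addresses, and SMTP host are hypothetical placeholders, not the framework's actual components.

# Run a network scanner, then a vulnerability scanner, and mail the report.
import subprocess, smtplib
from email.message import EmailMessage

def run_tool(cmd):
    """Run one external scanner and capture its text output."""
    return subprocess.run(cmd, capture_output=True, text=True).stdout

report = run_tool(["nmap", "-sV", "192.0.2.0/28"])   # network scanning
report += run_tool(["nikto", "-h", "192.0.2.5"])     # vulnerability assessment

msg = EmailMessage()
msg["Subject"], msg["From"], msg["To"] = "Security check", "scanner@example.org", "admin@example.org"
msg.set_content(report)
with smtplib.SMTP("mail.example.org") as s:          # reporting system
    s.send_message(msg)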
This textbook presents the mathematical theory and techniques necessary for analyzing and modeling high-performance global networks, such as the Internet. The three main building blocks of high-performance networks are links, switching equipment connecting the links together, and software employed at the end nodes and intermediate switches. This book provides the basic techniques for modeling and analyzing these last two components. Topics covered include, but are not limited to: Markov chains and queuing analysis, traffic modeling, interconnection networks and switch architectures and buffering strategies. · Provides techniques for modeling and analysis of network software and switching equipment; · Discusses design options used to build efficient switching equipment; · Includes many worked examples of the application of discrete-time Markov chains to communication systems; · Covers the mathematical theory and techniques necessary for ana...
This thesis describes the design and implementation of a system to extract meaning from natural language specifications of digital systems. This research is part of the ASPIN project which has the long-term goal of providing an automated system for digital system synthesis from informal specifications. This work makes several contributions, one being the application of artificial intelligence techniques to specifications writing. Also, the work deals with the subset of the Engl...
The simulation of large-scale networks is a challenging task, especially if the network to simulate is a Dynamic Multipoint Virtual Private Network (DMVPN), which requires expert knowledge to properly configure its component technologies. The study of these network architectures in a real environment is almost impossible because it requires a very large amount of equipment; the task is feasible, however, in a simulation environment such as OPNET Modeler, provided one masters both the tool and the different DMVPN architectures. Several research studies have been conducted to automate the generation and simulation of complex networks under various simulators; according to our research, no work has dealt with the DMVPN. In this paper we present a simulation model of the DMVPN in OPNET Modeler, and a web-based tool for project management on the same network.
An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Computational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies.
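The core mechanism that such computer-calculus compilers automate, propagating derivatives alongside values through every arithmetic operation, can be sketched with a toy forward-mode dual-number class in Python; this illustrates the idea only and is not GRESS itself.

# Forward-mode derivative propagation with dual numbers.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)  # product rule
    __rmul__ = __mul__

def model(k):
    """A tiny nonlinear response whose sensitivity to k we want."""
    return k * k + 3.0 * k

x = Dual(2.0, 1.0)     # seed dx/dk = 1
y = model(x)
print(y.val, y.der)    # value 10.0, sensitivity dy/dk = 2k + 3 = 7.0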
Morrell-Falvey, J. L.; Qi, H.; Doktycz, M. J.; Venkatraman, S.
The identification of protein interactions is important for elucidating biological networks. One obstacle in comprehensive interaction studies is the analysis of large datasets, particularly those containing images. Development of an automated system to analyze image-based protein interaction datasets is needed. Such an analysis system is described here, which automatically extracts features from fluorescence microscopy images obtained from a bacterial protein interaction assay. These features ...
Thimm, Heiko; Rasmussen, Karsten Boye
Successful collaboration in business networks calls for well-informed network participants. Members who know about the many aspects of the network are an effective vehicle to successfully resolve conflicts, build a prospering collaboration climate and promote trust within the network. The importance of well-informed network participants has led to the concept of network participant informedness, which is derived from existing theories and concepts for firm informedness. It is possible to support and develop well-informed network participants through a specialised IT-based active information provisioning service. This article presents a corresponding modelling framework and a rule-based approach for the required active system capabilities. Details of a prototype implementation building on concepts from the research area of active databases are also reported.
A program for automated task analysis is described. Called TAPS (task analysis profiling system), the program accepts normal English prose and outputs skills, knowledges, attitudes, and abilities (SKAAs) along with specific guidance and recommended ability measurement tests for nuclear power plant operators. A new method for defining SKAAs is presented along with a sample program output
This thesis is dedicated to the empirical study of image analysis in HT/HC screening studies. An HT/HC screening often produces extensive amounts of data that cannot be manually analyzed. Thus, an automated image analysis solution is a prerequisite for an objective understanding of the raw image data. Compared to general a
Michalenko, Ashley Christine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]
This is a presentation from Los Alamos National Laboratory (LANL) about the analysis of Trinity power metrics for automated monitoring. The following topics are covered: current monitoring efforts, motivation for the analysis, tools used, the methodology, work performed during the summer, and future work planned.
The efficient and effective monitoring of mobile networks is vital given the number of users who rely on such networks and the importance of those networks. The purpose of this paper is to present a monitoring scheme for mobile networks based on the use of rules and decision tree data mining classifiers to upgrade fault detection and handling. The goal is to have optimisation rules that improve anomaly detection. In addition, a monitoring scheme that relies on Bayesian classifiers was also implemented for the purpose of fault isolation and localisation. The data mining techniques described in this paper are intended to allow a system to be trained to actually learn network fault rules. The results of the tests that were conducted allowed for the conclusion that the rules were highly effective in improving network troubleshooting.
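The decision-tree part of such a scheme can be sketched as follows; the KPI feature names and synthetic fault labels are assumptions for illustration, not the paper's data.

# Train a decision tree on network KPIs, then inspect the learned fault rules.
from sklearn.tree import DecisionTreeClassifier, export_text
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((200, 3))                              # [drop_rate, latency, load]
y = ((X[:, 0] > 0.7) | (X[:, 1] > 0.9)).astype(int)   # synthetic fault labels

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["drop_rate", "latency", "load"]))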
MA Liwei; SUN Yihe
Network-on-chip (NoC) is a new design paradigm for system-on-chip interconnections in the billion-transistor era. Application-specific on-chip network design is essential for NoC success in this new era. This paper presents a class of source routing switches that can be used to efficiently form arbitrary network topologies and that can be optimized for various applications. Hardware description language versions of the networks can be generated automatically for simulations and for syntheses. A series of switches and networks has been configured, and their performance, including latency, delay, area, and power, has been analyzed theoretically and experimentally. The results show that this NoC architecture provides a large design space for application-specific on-chip network designs.
Johnson, D.; Xu, L.; Li, J.; Yuan, G.; Sun, X.; Zhu, Z.; Tang, X.; Velgersdyk, M.; Beaty, K.; Fratini, G.; Kathilankal, J. C.; Burba, G. G.
The significant increase in overall data generation and available computing power in the recent years has greatly improved spatial and temporal data coverage of evapotranspiration (ET) measurements on multiple scales, ranging from a single station to continental scale ET networks. With the increased number of ET stations and increased amount of data flowing from each station, modern tools are needed to effectively and efficiently handle the entire infrastructure (hardware, software and data management). These tools can automate key stages of ET network operation, remotely providing real-time ET rates and alerts for the health of the instruments. This can help maximize time dedicated to answering research questions, rather than to station management. This year, the Chinese Ecosystem Research Network (CERN) within the Chinese Academy of Sciences implemented a large-scale 27-station national ET network across China to measure and understand the water cycle from a variety of ecosystems. It includes automated eddy covariance systems, on-site flux computations, wireless communication, and a network server for system, data, and user management. This presentation will discuss the latest information on the CERN network, methods and hardware for ET measurements, tools for automated data collection, data processing and quality control, and data transport and management of the multiple stations. This system description is beneficial for individuals and institutions interested in setting up or modifying present ET networks consisting of single or multiple stations spread over geographic locations ranging from single field site or watershed to national or continental scale.
Florez, Hector; Sánchez, Mario; Villalobos, Jorge
Enterprise models are created for documenting and communicating the structure and state of the Business and Information Technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills, and due to the size and complexity of the models, this process can be complicated and omissions or miscalculations are very likely. This situation has fostered research on automated analysis methods for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels, so some analysis methods might not be applicable to all enterprise models. This paper presents a compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool. PMID:27047732
In this paper we carry out a comparative analysis of the word network, treated as a collaboration network, based on M. Bulgakov's novel 'Master and Margarita', the synonym network of the Russian language, and the Russian movie actor network. We have constructed one-mode projections of these networks, determined their degree distributions, and calculated their main characteristics. A generation algorithm for collaboration networks is also offered, which allows one to generate networks statistically equivalent to those studied; it lets us reveal a structural correlation between the word network, the synonym network, and the movie actor network. We show that the degree distributions of all analyzed networks are described by a distribution of q-type.
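A minimal sketch of the one-mode projection and degree-distribution step, using networkx on an invented bipartite actor-film graph:

# Project a bipartite collaboration graph onto one node set, then count degrees.
import networkx as nx
from collections import Counter

B = nx.Graph()
B.add_edges_from([("actor_a", "film_1"), ("actor_b", "film_1"),
                  ("actor_b", "film_2"), ("actor_c", "film_2")])

actors = {n for n in B if n.startswith("actor")}
P = nx.bipartite.projected_graph(B, actors)     # one-mode projection onto actors

degrees = Counter(d for _, d in P.degree())     # empirical degree distribution
print(dict(degrees))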
In recent years the demand for information about the distribution of elements at trace concentration levels in high purity materials and in biological, environmental and geological specimens has increased greatly. Neutron activation analysis can play an important role in obtaining the required information. Radiochemical separations are required in many of the applications mentioned. A critical review of the progress made over the last 15 years in the development and application of radiochemical separation schemes for multielement activation analysis and in their automation is presented. About 80 radiochemical separation schemes are reviewed. Advantages and disadvantages of the automation of radiochemical separations are critically analysed. The various machines developed are illustrated and technical suggestions for the development of automated machines are given.
Du Toit, Jan Valentine
In this thesis Generalized Additive Neural Networks (GANNs) are studied in the context of predictive Data Mining. A GANN is a novel neural network implementation of a Generalized Additive Model. Originally GANNs were constructed interactively by considering partial residual plots. This methodology involves subjective human judgment, is time consuming, and can yield suboptimal results. The newly developed automated construction algorithm solves these difficulties by performing mod...
Jules Teulade-Denantes; Adrien Maudet; Cécile Duchêne
This paper tackles the representation of routes carried by a physical network infrastructure on a map. In particular, the paper examines the case where each route is represented by a separate colored linear symbol offset from the physical network segments and from other routes—as on public transit maps with bus routes offset from roads. In this study, the objective is to automate the placement of such route symbols while maximizing their legibility, especially at junctions. The problem is mod...
The paper describes at a high level the network-form game framework (based on Bayesian networks and game theory), which can be used to model and analyze safety issues in large, distributed, mixed human-automation systems such as NextGen.
Evers, L.; Havinga, P.J.M.; Kuper, J.; Lijding, M.E.M.; Meratnia, N.
The supply chain management business can benefit greatly from automation, as recent developments with RFID technology show. The use of Wireless Sensor Network technology promises to bring the next leap in efficiency and quality of service. However, current WSN system software does not yet provide t
Wireless home automation networks are gaining importance for smart homes, and ZigBee networks play an important role in this area. The ZigBee specification defines a default set of protocol stack parameters and mechanisms that is further refined by the ZigBee Home Automation application profile. In a holistic approach, we analyze how network performance is affected by the tuning of parameters and mechanisms across multiple layers of the ZigBee protocol stack, and we investigate possible performance gains by implementing and testing alternative settings. The evaluations are carried out in a testbed of 57 TelosB motes. The results show that considerable performance improvements can be achieved by using alternative protocol stack configurations. From these results, we derive two improved protocol stack configurations for ZigBee wireless home automation networks that are validated in various network scenarios. In our experiments, these improved configurations yield a relative packet delivery ratio increase of up to 33.6%, a delay decrease of up to 66.6%, and an improvement in the energy efficiency of battery-powered devices of up to 48.7%, without incurring any overhead to the network.
The Department of Energy (DOE) has significant amounts of radioactive and hazardous wastes stored, buried, and still being generated at many sites within the United States. These wastes must be characterized to determine the elemental, isotopic, and compound content before remediation can begin. In this paper, the authors project that sampling requirements will necessitate generating more than 10 million samples by 1995, which will far exceed the capabilities of current manual chemical analysis laboratories. The Contaminant Analysis Automation (CAA) effort, with Los Alamos National Laboratory (LANL) as the coordinating laboratory, is designing and fabricating robotic systems that will standardize and automate both the hardware and the software of the most common environmental chemical methods. This will be accomplished by designing and producing several unique analysis systems called Standard Analysis Methods (SAMs). Each SAM will automate a specific chemical method, including sample preparation, the analytical measurement, and the data interpretation, by using a building block known as the Standard Laboratory Module (SLM). This concept allows the chemist to assemble an automated environmental method from standardized SLMs easily, without worrying about hardware compatibility or the necessity of writing complicated control programs.
A global vertically resolved aerosol data set covering more than 10 years of observations at more than 20 measurement sites distributed from 63° N to 52° S and 72° W to 124° E has been achieved within the Raman and polarization lidar network PollyNET. This network consists of portable, remote-controlled multiwavelength-polarization-Raman lidars (Polly) for automated and continuous 24/7 observations of clouds and aerosols. PollyNET is an independent, voluntary, and scientific network. All Polly lidars feature a standardized instrument design and apply unified calibration, quality control, and data analysis. The observations are processed in near-real time without manual intervention, and are presented online at http://polly.tropos.de. The paper gives an overview of the observations on four continents and two research vessels obtained with eight Polly systems. The specific aerosol types at these locations (mineral dust, smoke, dust-smoke and other dusty mixtures, urban haze, and volcanic ash) are identified by their Ångström exponent, lidar ratio, and depolarization ratio. The vertical aerosol distribution at the PollyNET locations is discussed on the basis of more than 55 000 automatically retrieved 30 min particle backscatter coefficient profiles at 532 nm. A seasonal analysis of measurements at selected sites revealed typical and extraordinary aerosol conditions as well as seasonal differences. These studies show the potential of PollyNET to support the establishment of a global aerosol climatology that covers the entire troposphere.
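For readers unfamiliar with the quantities used for aerosol typing: the Ångström exponent between two wavelengths is conventionally computed as below (this is the standard definition; PollyNET's exact retrieval details are in the paper):

\[ \alpha = -\,\frac{\ln\big(\beta(\lambda_1)/\beta(\lambda_2)\big)}{\ln(\lambda_1/\lambda_2)}, \]

where \(\beta(\lambda)\) is the backscatter coefficient at wavelength \(\lambda\).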
Nagare Vrushali M
Fresh water is the basic need of living organisms on earth, consumed by all living beings, including plants and animals. The amount of fresh water available is limited, while the population has grown relative to the available water and food resources. Agriculture consumes about 85% of the total available fresh water, so there is an urgent need for science- and technology-based strategies for sustainable water use, including technical, agronomic, managerial, and institutional improvements. Many systems use various techniques to achieve water savings in agricultural practices. This paper discusses a system using remote access and wireless communication: a network of wireless sensors and a wireless base station that processes the sensor data to automate the irrigation system. The sensors are a soil moisture sensor and a soil temperature sensor. The base station microcontroller is programmed such that if either the soil moisture or the temperature crosses a predefined threshold level, the irrigation system is actuated, i.e., the relay connected to the water pump switches ON, and otherwise OFF.
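The base station's threshold logic reduces to a few lines; in this sketch read_moisture, read_temperature, and pump_relay are hypothetical stand-ins for the wireless sensor readings and the relay driver described above.

# Threshold-based irrigation control loop (simulated sensors).
import random, time

def read_moisture():             # placeholder for a wireless sensor reading
    return random.uniform(0.0, 1.0)

def read_temperature():
    return random.uniform(10.0, 45.0)

def pump_relay(on):
    print("pump", "ON" if on else "OFF")

MOISTURE_MIN, TEMP_MAX = 0.3, 35.0   # assumed threshold levels

for _ in range(5):                   # one polling cycle per sensor report
    dry = read_moisture() < MOISTURE_MIN
    hot = read_temperature() > TEMP_MAX
    pump_relay(dry or hot)           # irrigate when either threshold is crossed
    time.sleep(0.1)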
Kim, Richard Y.; Drake, Keith C.; Kim, Tony Y.
A hierarchical recognition methodology using abductive networks at several levels of object recognition is presented. Abductive networks, an innovative numeric modeling technology using networks of polynomial nodes, result from nearly three decades of application research and development in areas including statistical modeling, uncertainty management, genetic algorithms, and traditional neural networks. The system uses pixel-registered multisensor target imagery provided by the Tri-Service Laser Radar sensor. Several levels of recognition are performed using detection, classification, and identification, each providing more detailed object information. Advanced feature extraction algorithms are applied at each recognition level for target characterization. Abductive polynomial networks process feature information and situational data at each recognition level, providing input for the next level of processing. An expert system coordinates the activities of individual recognition modules and enables the employment of heuristic knowledge to overcome the limitations of a purely numeric processing approach. The approach can potentially overcome limitations of current systems, such as catastrophic degradation during unanticipated operating conditions, while meeting strict processing requirements. These benefits result from the implementation of robust feature extraction algorithms that do not take explicit advantage of peculiar characteristics of the sensor imagery, and from the compact, real-time processing capability provided by abductive polynomial networks.
Cao, Jianfang; Chen, Lichao
With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance. PMID:25838818
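A minimal sketch of the learning stage under stated assumptions: synthetic image features, a triangular fuzzy membership in place of the paper's membership construction, and scikit-learn's AdaBoost and MLP (the latter standing in for the BP network).

# Fuzzy membership plus boosted/BP-style classifiers for emotion annotation.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier   # stands in for the BP network

def membership(x, center, width=0.5):
    """Triangular fuzzy membership degree of a feature value for one emotion."""
    return max(0.0, 1.0 - abs(x - center) / width)

rng = np.random.default_rng(2)
X = rng.random((300, 8))                       # per-image feature vectors
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)      # two toy emotional classes

ada = AdaBoostClassifier(n_estimators=50).fit(X, y)
bp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
print(ada.score(X, y), bp.score(X, y))
print("membership of 0.9 in 'warm' (center 1.0):", membership(0.9, 1.0))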
Blumenstein, Michael; Green, Steve; Fogelman, Shoshana; Nguyen, Ann; Muthukkumarasamy, Vallipuram
This paper describes the Generic Automated Marking Environment (GAME) and provides a detailed analysis of its performance in assessing student programming projects and exercises. GAME has been designed to automatically assess programming assignments written in a variety of languages based on the "structure" of the source code and the correctness…
Xu, Dongxin; Richards, Jeffrey A.; Gilkerson, Jill
Purpose: Conventional resource-intensive methods for child phonetic development studies are often impractical for sampling and analyzing child vocalizations in sufficient quantity. The purpose of this study was to provide new information on early language development by an automated analysis of child phonetic production using naturalistic…
Wieselquist, William A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Thompson, Adam B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bowman, Stephen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peterson, Joshua L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly’s life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.
In this work, Automated Scanning Electron Microscopy with Energy Dispersive X-ray Spectrometry (SEM-EDS) was used to characterize 7.65 and 9 mm cartridges of Turkish ammunition. All samples were analyzed in a JEOL JSM-5600LV SEM equipped with a BSE detector and a Link ISIS 300 EDS system. A working distance of 20 mm, an accelerating voltage of 20 kV, and gunshot residue software were used in all analyses. The automated search resulted in a high number of analyzed particles containing the elements unique to gunshot residue (GSR) (Pb, Ba, Sb). The obtained data on the definition of characteristic GSR particles were concordant with other studies on this topic.
Computers, the invisible backbone of nuclear safeguards, monitor and control plant operations and support many materials accounting systems. Our automated procedure to assess computer security effectiveness differs from traditional risk analysis methods. The system is modeled as an interactive questionnaire, fully automated on a portable microcomputer. A set of modular event trees links the questionnaire to the risk assessment. Qualitative scores are obtained for target vulnerability, and qualitative impact measures are evaluated for a spectrum of threat-target pairs. These are then combined by a linguistic algebra to provide an accurate and meaningful risk measure.
Juluru, Krishna; Kim, Woojin; Boonn, William; King, Tara; Siddiqui, Khan; Siegel, Eliot
Over the past decade, several computerized tools have been developed for the detection of lung nodules and for providing volumetric analysis. Incidentally detected lung nodules have traditionally been followed over time by measurements of their axial dimensions on CT scans to ensure stability or document progression. A recently published article by the Fleischner Society offers guidelines on the management of incidentally detected nodules based on size criteria. For this reason, differences in measurements obtained by automated tools from various vendors may have significant implications for management, yet the degree of variability in these measurements is not well understood. The goal of this study is to quantify the differences in nodule maximum diameter and volume among different automated analysis software packages. Using a dataset of lung scans obtained with both "ultra-low" and conventional doses, we identified a subset of nodules in each of five size-based categories. Using automated analysis tools provided by three different vendors, we obtained size and volumetric measurements of these nodules, and compared these data using descriptive statistics as well as ANOVA and t-test analyses. Results showed significant differences in nodule maximum diameter measurements among the various automated lung nodule analysis tools, but no significant differences in nodule volume measurements. These data suggest that when using automated commercial software, volume measurements may be a more reliable marker of tumor progression than maximum diameter. The data also suggest that volumetric nodule measurements may be relatively reproducible among various commercial workstations, in contrast to the variability documented when performing human mark-ups, as is seen in the LIDC (Lung Image Database Consortium) study.
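The statistical comparison can be reproduced in a few lines; the measurements below are invented, and scipy's one-way ANOVA and paired t-test stand in for the study's analysis.

# Compare the same nodules as measured by three vendors' tools.
from scipy import stats

diam_a = [5.1, 8.3, 12.0, 16.2, 24.8]   # max diameter (mm), tool A
diam_b = [5.6, 8.9, 12.9, 17.1, 26.0]   # tool B
diam_c = [4.8, 8.0, 11.6, 15.8, 24.1]   # tool C

f, p_anova = stats.f_oneway(diam_a, diam_b, diam_c)
t, p_pair = stats.ttest_rel(diam_a, diam_b)
print(f"ANOVA p={p_anova:.3f}, paired t-test A vs B p={p_pair:.3f}")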
Patro, S.; Kolarik, W.J. [Texas Tech Univ., Lubbock, TX (United States). Dept. of Industrial Engineering]
With increasing competition in the global market, more and more stringent quality standards and specifications are being demanded at lower costs. Manufacturing applications of computing power are becoming more common. The application of neural networks to the identification and control of dynamic processes is discussed. The limitations of using neural networks for control purposes are pointed out, and a different technique, evolutionary computation, is discussed. Results of identifying and controlling an unstable, dynamic process using evolutionary computation methods are presented. A framework for an integrated system, using both neural networks and evolutionary computation, is proposed to identify the process and then control product quality in a dynamic, multivariable system in real time.
Saeed M. Agbariah
As networks continue to grow in size, speed, and complexity, as well as in the diversification of their services, they require many ad-hoc configuration changes. Such changes may lead to potential configuration errors, policy violations, inefficiencies, and vulnerable states. The current network management landscape is in dire need of an automated process to prioritize and manage risk, audit configurations against internal policies or external best practices, and provide centralized reporting for monitoring and regulatory purposes in real time. This paper defines a framework for an automated configuration process with a policy compliance and change detection system, which performs automatic and intelligent network configuration audits by using pre-defined configuration templates and a library of rules that encompass industry standards for various routing- and security-related guidelines. System administrators and change initiators will have real-time feedback if any of their configuration changes violate any of the policies set for any given device.
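A minimal sketch of rule-based configuration auditing of this kind; the rules and the sample configuration are illustrative, not the paper's template library.

# Audit a device configuration against a small library of regex rules.
import re

RULES = [
    # (rule name, pattern, must_match)
    ("telnet disabled", r"^no ip telnet", True),       # pattern must appear
    ("plaintext passwords", r"password 7 ", False),    # pattern must NOT appear
]

config = """no ip telnet
password 7 topsecret
"""

for name, pattern, must_match in RULES:
    found = re.search(pattern, config, re.MULTILINE) is not None
    status = "OK" if found == must_match else "VIOLATION"
    print(f"{name}: {status}")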
This paper describes an investigation into the potential for remote-controlled operation of home automation (also called domotics) systems. It considers problems with their implementation, discusses possible solutions through various network technologies, and indicates how to optimize the use of such systems. The paper emphasizes the design and prototype implementation of a new home automation system that uses WiFi technology as the network infrastructure connecting its parts. The proposed system is twofold: the first part is the software (a web server), which is the system core that manages, controls, and monitors users' homes; users and the system administrator can manage the system locally (LAN) or remotely (Internet). The second part is the hardware interface module, which provides an appropriate interface to the sensors and actuators of the home automation system. Unlike most home automation systems available on the market, the proposed system is scalable: one server can manage many hardware interface modules as long as they are within WiFi network coverage.
Mohamed Najeh Lakhoua
The aim of this paper is, firstly, to recall the basic concepts of SCADA (Supervisory Control And Data Acquisition) systems and to present the project management phases of SCADA for real-time implementation, and then to show the need for automation of distribution networks at Electricity Distribution Companies (EDCs) and the importance of using computer-based systems for the sustainable development of their services. A proposed computer-based power distribution automation system is then discussed. Finally, some SCADA system implementation projects in electrical companies around the world are briefly presented.
An in-field sensor-based irrigation system benefits producers through efficient water management. A distributed wireless sensor network eliminates the difficulty of wiring sensor stations across the field and reduces maintenance cost. Implementing a wireless sensor-based irrigation system is challengin...
Ramazani, Roseanna B.; Harish R Krishnan; BERGESON, SUSAN E.; Atkinson, Nigel S.
Currently, measuring ethanol behaviors in flies depends on expensive image analysis software or time-intensive experimenter observation. We have designed an automated system for the collection and analysis of locomotor behavior data, using the IEEE 1394 acquisition program dvgrab, the image toolkit ImageMagick, and the programming language Perl. In the proposed method, flies are placed in a clear container and a computer-controlled camera takes pictures at regular intervals. Digital subtractio...
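The digital-subtraction core of such a system is compact; here numpy arrays stand in for captured frames, and the motion threshold is an assumed parameter (the original pipeline uses dvgrab, ImageMagick, and Perl).

# Frame differencing: count changed pixels as a locomotor activity score.
import numpy as np

rng = np.random.default_rng(3)
frames = [rng.integers(0, 256, (120, 160)).astype(np.int16) for _ in range(4)]

THRESH = 40                                    # assumed motion threshold
for prev, cur in zip(frames, frames[1:]):
    moved = np.abs(cur - prev) > THRESH        # pixels that changed
    print("activity score:", int(moved.sum()))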
Stegmann, Mikkel Bille; Skoglund, Karl
Corpus callosum analysis is influenced by many factors. The effort in controlling these has previously been incomplete and scattered. This paper sketches a complete pipeline for automated corpus callosum analysis from magnetic resonance images, with a focus on measurement standardisation. The presented pipeline deals with i) estimation of the mid-sagittal plane, ii) localisation and registration of the corpus callosum, iii) parameterisation and representation of its contour, and iv) means of standardising the traditional reference area measurements.
Young, Anthony; Kitts, Christopher; Neumann, Michael; Mas, Ignacio; Rasay, Mike
Beacon monitoring is an automated satellite health monitoring architecture that combines telemetry analysis, periodic low data rate message broadcasts by a spacecraft, and automated ground reception and data handling in order to implement a cost-effective anomaly detection and notification capability for spacecraft missions. Over the past two decades, this architecture has been explored and prototyped for a range of spacecraft mission classes, including use on NASA deep space probes, military...
Abar, Sameera; Kinoshita, Tetsuo
This paper presents a domain-ontology-driven, multi-agent-based scheme for representing the knowledge of a communication network management system. In the proposed knowledge-intensive framework, the static domain-related concepts are articulated as the domain knowledge ontology. The experiential knowledge for managing the network is represented as fault-case reasoning models, and it is explicitly encoded as the core knowledge of the multi-agent middleware layer in the form of heuristic production-type rules. This task-oriented management expertise manipulates the domain content and structure during diagnostic sessions. The agents' rules, along with the embedded generic Java-based problem-solving algorithms and run-time log information, perform the automated management tasks. For proof of concept, an experimental network system has been implemented in our laboratory, and some test-bed scenarios have been deployed. Experimental results confirm a marked reduction in the management overhead of the network administrator, as compared to manual network management techniques, in terms of the time taken and effort expended during a particular fault-diagnosis session. Validation of the reusability and modifiability aspects of our system illustrates the flexible manipulation of the knowledge fragments within diverse application contexts. The proposed approach can be regarded as one of the pioneering steps towards representing network knowledge via reusable domain ontology and intelligent agents for automated network management support systems.
Al-Shaer, Ehab; Xie, Geoffrey
In this contributed volume, leading international researchers explore configuration modeling and checking, vulnerability and risk assessment, configuration analysis, and diagnostics and discovery. The authors equip readers to understand automated security management systems and techniques that increase overall network assurability and usability. These constantly changing networks defend against cyber attacks by integrating hundreds of security devices such as firewalls, IPSec gateways, IDS/IPS, authentication servers, authorization/RBAC servers, and crypto systems. Automated Security Managemen
Wolfe, Carrie A. C.; Oates, Matthew R.; Hage, David S.
The technique of flow injection analysis (FIA) is a common instrumental method used in detecting a variety of chemical and biological agents. This paper describes an undergraduate laboratory that uses FIA to perform a bicinchoninic acid (BCA) colorimetric assay for quantitating protein samples. The method requires less than 2 min per sample injection and gives a response over a broad range of protein concentrations. This method can be used in instrumental analysis labs to illustrate the principles and use of FIA, or as a means for introducing students to common methods employed in the analysis of biological agents.
Johnston, Mark D.; Clement, Bradley
The Deep Space Network (DSN) is a central part of NASA's infrastructure for communicating with active space missions, from Earth orbit to beyond the solar system. We describe our recent work in modeling the complexities of user requirements, and then scheduling and resolving conflicts on that basis. We emphasize our innovative use of background 'intelligent assistants' that carry out search asynchronously while the user is focusing on various aspects of the schedule.
José Luis Neves; Eby G. Friedman
In this paper a top-down methodology is presented for synthesizing clock distribution networks based on application-dependent localized clock skew. The methodology is divided into four phases: 1) determination of an optimal clock skew schedule for improving circuit performance and reliability; 2) design of the topology of the clock tree based on the circuit hierarchy and minimum clock path delays; 3) design of circuit structures to implement the delay values associated with the branches of th...
Aghaeepour, Nima; Finak, Greg; Hoos, Holger; Mosmann, Tim R.; Gottardo, Raphael; Brinkman, Ryan; Scheuermann, Richard H.
Traditional methods for flow cytometry (FCM) data processing rely on subjective manual gating. Recently, several groups have developed computational methods for identifying cell populations in multidimensional FCM data. The Flow Cytometry: Critical Assessment of Population Identification Methods (FlowCAP) challenges were established to compare the performance of these methods on two tasks – mammalian cell population identification to determine if automated algorithms can reproduce expert manual gating, and sample classification to determine if analysis pipelines can identify characteristics that correlate with external variables (e.g., clinical outcome). This analysis presents the results of the first of these challenges. Several methods performed well compared to manual gating or external variables using statistical performance measures, suggesting that automated methods have reached a sufficient level of maturity and accuracy for reliable use in FCM data analysis. PMID:23396282
Data curation is increasingly important. Our previous work on a Kepler curation package has demonstrated the advantages that come from automating data curation pipelines by using workflow systems. However, manually designed curation workflows can be error-prone and inefficient due to a lack of user understanding of the workflow system, misuse of actors, or human error. Correcting problematic workflows is often very time-consuming. A more proactive workflow system can help users avoid such pitfalls. For example, static analysis before execution can be used to detect potential problems in a workflow and help the user improve the workflow design. In this paper, we propose a declarative workflow approach that supports semi-automated workflow design, analysis, and optimization. We show how the workflow design engine helps users construct data curation workflows, how the workflow analysis engine detects different design problems in workflows, and how workflows can be optimized by exploiting parallelism.
Wang, Dan-Ling; Yu, Zu-Guo; Van Anh, Vo
Complex networks have recently attracted much attention in diverse areas of science and technology. Many networks such as the WWW and biological networks are known to display spatial heterogeneity which can be characterized by their fractal dimensions. Multifractal analysis is a useful way to systematically describe the spatial heterogeneity of both theoretical and experimental fractal patterns. In this paper, we introduce a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is used to calculate the generalized fractal dimensions Dq of some theoretical networks, namely scale-free networks, small world networks, and random networks, and one kind of real network, namely protein-protein interaction networks of different species. Our numerical results indicate the existence of multifractality in scale-free networks and protein-protein interaction networks, while the multifractal behavior is not clear-cut for small world networks and random networks. The possible variation of Dq due to changes in the parameters of the theoretical network models is also discussed.
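For reference, the generalized fractal dimensions estimated by such box-covering algorithms are conventionally defined as follows (this is the standard Rényi form; the paper's specific estimator may differ in detail):

\[ D_q = \lim_{\epsilon \to 0} \frac{1}{q-1}\,\frac{\ln \sum_i p_i(\epsilon)^q}{\ln \epsilon}, \qquad q \neq 1, \]

where \(p_i(\epsilon)\) is the fraction of nodes covered by the \(i\)-th box of size \(\epsilon\).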
Kalb, Jeffrey L.; Lee, David S.
Emerging high-bandwidth, low-latency network technology has made network-based architectures both feasible and potentially desirable for use in satellite payload architectures. The selection of network topology is a critical component when developing these multi-node or multi-point architectures. This study examines network topologies and their effect on overall network performance. Numerous topologies were reviewed against a number of performance, reliability, and cost metrics. This document identifies a handful of good network topologies for satellite applications and the metrics used to justify them as such. Since often multiple topologies will meet the requirements of the satellite payload architecture under development, the choice of network topology is not easy, and in the end the choice of topology is influenced by both the design characteristics and requirements of the overall system and the experience of the developer.
Nuclear engineering analysis is automated with the help of preprocessors and postprocessors. All the analysis and processing steps are recorded in a form that is reportable and replayable. These recordings serve both as documentation and as robots, for they are capable of performing the analyses they document. Since the processors and robots in ROBOCOM interface with users in a way independent of the analysis program being used, it is now possible to unify input modeling for programs with similar functionality. ROBOCOM will eventually evolve into an encyclopedia of how every nuclear engineering analysis is performed.
Purpose: Improvement of technological processes through technological efficiency analysis can create the basis for their optimization. Informatization and computerization of a wider and wider scope of activity is one of the most important current development trends of an enterprise. Design/methodology/approach: Defining indicators makes it possible to evaluate process efficiency, which can constitute the basis for optimizing a particular operation. The model of technological efficiency analysis is based on particular efficiency indicators that characterize an operation according to the following criteria: operation-material, operation-machine, operation-human, operation-technological parameters. Findings: From the point of view of quality and correctness of the chosen technology, comprehensive assessment of technological processes forms the basis of technological efficiency analysis. The results of the technological efficiency analysis of a technological process prove that the chosen model makes it possible to improve the process continuously through technological analysis, and that computer assistance makes it possible to automate the efficiency analysis and, finally, the controlled improvement of technological processes. Practical implications: Given the complexity of technological efficiency analysis, an AEPT computer analysis was created, which yields operation efficiency indicators (with indicators having minimal acceptable values distinguished), efficiency values of the applied samples, and the value of technological process efficiency. Originality/value: The created computer analysis of technological process efficiency (AEPT) makes it possible to automate the process of analysis and optimization.
The Nuclear Operations Project Services identified the need to improve manual tank farm surveillance data collection, review, distribution and storage practices often referred to as Operator Rounds. This document provides the analysis in terms of feasibility to improve the manual data collection methods by using handheld computer units, barcode technology, a database for storage and acquisitions, associated software, and operational procedures to increase the efficiency of Operator Rounds associated with surveillance activities
A microphotometer is used to increase the sharpness of dark spectral lines. By analyzing these lines, the content of a sample and its concentration can be determined; this analysis is known as quantitative spectrographic analysis. The quantitative spectrographic analysis is carried out in three steps, as follows. 1. Emulsion calibration. This consists of gauging a photographic emulsion to determine the intensity variations in terms of the incident radiation. For the emulsion calibration procedure, a least-squares fit to the obtained data is applied to obtain a graph, making it possible to determine the density of a dark spectral line against the incident light intensity shown by the microphotometer. 2. Working curves. The values of known concentrations of an element are plotted against incident light intensity. Since the sample contains several elements, a working curve must be found for each of them. 3. Analytical results. The calibration curve and the working curves are compared and the concentration of the studied element is determined. Automatic data acquisition, calculation, and output of the results are done by means of a computer (PC) and a computer program. The signal conditioning circuits have the function of delivering TTL (transistor-transistor logic) levels to make communication between the microphotometer and the computer possible. Data calculation is done using a computer program.
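The least-squares emulsion calibration can be sketched as follows; the data points are invented, and numpy's polyfit stands in for the fitting routine.

# Fit density readings against log intensity, then invert for unknown lines.
import numpy as np

log_intensity = np.array([0.0, 0.3, 0.6, 0.9, 1.2])   # log10 relative exposure
density = np.array([0.10, 0.32, 0.58, 0.81, 1.05])    # microphotometer readings

slope, intercept = np.polyfit(log_intensity, density, 1)  # least-squares line

def intensity_from_density(d):
    """Invert the calibration to recover log intensity for a measured line."""
    return (d - intercept) / slope

print(intensity_from_density(0.7))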
The National Ignition Facility (NIF) is a high-energy laser facility comprised of 192 beamlines that house thousands of optics. These optics guide, amplify and tightly focus light onto a tiny target for fusion ignition research and high energy density physics experiments. The condition of these optics is key to the economic, efficient and maximally energetic performance of the laser. Our goal, and novel achievement, is to find on the optics any imperfections while they are tens of microns in size, track them through time to see if they grow and if so, remove the optic and repair the single site so the entire optic can then be re-installed for further use on the laser. This paper gives an overview of the image analysis used for detecting, measuring, and tracking sites of interest on an optic while it is installed on the beamline via in situ inspection and after it has been removed for maintenance. In this way, the condition of each optic is monitored throughout the optic's lifetime. This overview paper will summarize key algorithms and technical developments for custom image analysis and processing and highlight recent improvements. (Associated papers will include more details on these issues.) We will also discuss the use of OI Analysis for daily operation of the NIF laser and its extension to inspection of NIF targets.
Solberg Hjorth, Theis; Torbensen, Rune; Madsen, Per Printz
Current wireless technologies use a variety of methods to locally exchange and verify credentials between devices to establish trusted relationships. Scenarios in home automation networks also require this capability over the Internet, but the necessary involvement of non-expert users to set up these relationships can lead to misconfiguration or breaches of security. We outline a security system for home automation called Trusted Domain that can establish and maintain cryptographically secure relationships between devices connected via IP-based networks and the Internet. Trust establishment is carried out with sequences of pre-defined pictograms. This method is designed to scale from smartphones and tablets down to low-resource embedded systems. The presented approach is supported by an extensive literature study, and the ease of use and feasibility of the method have been investigated via a user study.
Lee Ernest K
Background: The availability of sequences from whole genomes to reconstruct the tree of life has the potential to enable the development of phylogenomic hypotheses in ways that have not before been possible. A significant bottleneck in the analysis of genomic-scale views of the tree of life is the time required for manual curation of genomic data into multi-gene phylogenetic matrices. Results: To keep pace with the exponentially growing volume of molecular data in the genomic era, we have developed an automated technique, ASAP (Automated Simultaneous Analysis Phylogenetics), to assemble these multi-gene/multi-species matrices and to evaluate the significance of individual genes within the context of a given phylogenetic hypothesis. Conclusion: Applications of ASAP may enable scientists to re-evaluate species relationships and to develop new phylogenomic hypotheses based on genome-scale data.
Argonne National Laboratory (ANL) is actively involved in the LMFBR Man-Machine Integration (MMI) Safety Program. The objective of this program is to enhance the operational safety and reliability of fast-breeder reactors by optimum integration of men and machines through the application of human factors principles and control engineering to the design, operation, and the control environment. ANL is developing methods to apply automated reasoning and computerization in the validation and sneak function analysis process. This project provides the element definitions and relations necessary for an automated reasoner (AR) to reason about design validation and sneak function analysis. This project also provides a demonstration of this AR application on an Experimental Breeder Reactor-II (EBR-II) system, the Argonne Cooling System
Microarray study enables us to obtain hundreds of thousands of expressions of genes or genotypes at once, and it is an indispensable technology for genome research. The first step is the analysis of scanned microarray images. This is the most important procedure for obtaining biologically reliable data. Currently, most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software becomes important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing for a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than the commercial tools.
Zhao, Mei-Fang; Luo, A-Li; Wu, Fu-Chao; Hu, Zhan-Yi
Recognizing and certifying quasars through research on their spectra is an important method in the field of astronomy. This paper presents a novel adaptive method for the automated recognition of quasars based on radial basis function neural networks (RBFN). The proposed method is composed of the following three parts: (1) the feature space is reduced by PCA (principal component analysis) applied to the normalized input spectra; (2) an adaptive RBFN is constructed and trained in this reduced space. First, K-means clustering is used for the initialization; then, based on the sum of squared errors and a gradient descent optimization technique, the number of neurons in the hidden layer is adaptively increased to improve recognition performance; (3) quasar spectra recognition is effectively carried out by the trained RBFN. The proposed adaptive RBFN is shown not only to overcome the difficulty of selecting the number of hidden-layer neurons in the traditional RBFN algorithm, but also to increase the stability and accuracy of quasar recognition. Besides, the proposed method is particularly useful for the automatic processing of the voluminous spectra produced by a large-scale sky survey project, such as our LAMOST, due to its efficiency. PMID:16826929
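A minimal numpy sketch of the growth idea in part (2) follows. It is a simplification under stated assumptions: output weights are solved by least squares rather than the paper's gradient descent, random samples stand in for K-means initialization, and the PCA step (e.g. via sklearn.decomposition.PCA) is assumed to have been applied to X beforehand:

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF activations: one column per hidden neuron."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_adaptive_rbfn(X, y, k0=5, width=1.0, sse_tol=1e-2, max_neurons=40):
    """Grow the hidden layer until the sum of squared errors is small."""
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), size=k0, replace=False)].copy()
    while True:
        H = rbf_design(X, centers, width)
        w, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights
        err = H @ w - y
        if float(err @ err) < sse_tol or len(centers) >= max_neurons:
            return centers, w
        # Adaptive step: add a neuron centered on the worst-fit spectrum.
        centers = np.vstack([centers, X[np.argmax(np.abs(err))]])
```

Prediction is then `rbf_design(X_new, centers, width) @ w`, thresholded for the quasar/non-quasar decision.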
Cao, Xinhua; Treves, S. Ted
In this study, an automated analysis of pulmonary ventilation (AAPV) was developed to visualize ventilation in pediatric lungs using dynamic Xe-133 scintigraphy. AAPV is a software algorithm that converts a dynamic series of Xe-133 images into four functional images: equilibrium, washout half-time, residual, and clearance rate, by analyzing pixel-based activity. Compared to conventional methods of calculating global or regional ventilation parameters, AAPV provides a visual representation of pulmonary ventilation function.
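As a rough sketch of the pixel-based idea (not the published algorithm; the mono-exponential washout model, the frame layout and all names below are assumptions), the four functional images could be derived along these lines:

```python
import numpy as np

def washout_halftime(frames, times, equil_end):
    """Per-pixel functional images from a dynamic Xe-133 series.

    frames: (T, H, W) array of counts; times: (T,) acquisition times;
    equil_end: index where equilibrium ends and washout begins.
    A mono-exponential washout is assumed at every pixel.
    """
    equilibrium = frames[:equil_end].mean(axis=0)
    washout = frames[equil_end:]
    t = times[equil_end:] - times[equil_end]
    # Fit log(counts) = log(A) - k*t pixel-wise by least squares.
    logc = np.log(np.clip(washout, 1e-6, None))
    tbar = t.mean()
    k = -((t - tbar)[:, None, None] * (logc - logc.mean(0))).sum(0) \
        / ((t - tbar) ** 2).sum()
    halftime = np.log(2) / np.clip(k, 1e-6, None)  # washout half-time image
    residual = frames[-1]                          # activity left at the end
    clearance = k                                  # per-pixel clearance rate
    return equilibrium, halftime, residual, clearance
```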
Winkel, Benjamin; Kerp, Juergen; Stanko, Stephan
In this paper we present an interference detection toolbox consisting of a high dynamic range digital fast-Fourier-transform spectrometer (DFFT, based on FPGA technology) and data analysis software for automated radio frequency interference (RFI) detection. The DFFT spectrometer allows high-speed data storage of spectra on time scales of less than a second. The high dynamic range of the device assures constant calibration even during extremely powerful RFI events. The software uses an algorithm...
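The abstract does not spell the algorithm out, so the fragment below is only a generic stand-in: a robust running-median detector that flags channels deviating strongly from the local bandpass, a common baseline approach for automated RFI detection:

```python
import numpy as np

def flag_rfi(spectra, window=25, nsigma=5.0):
    """Flag RFI channels in a time series of power spectra.

    spectra: (T, C) array of spectrometer dumps. A running median over
    frequency estimates the bandpass; channels deviating by more than
    nsigma robust standard deviations are flagged. Window size and
    threshold are illustrative assumptions.
    """
    pad = window // 2
    padded = np.pad(spectra, ((0, 0), (pad, pad)), mode="edge")
    baseline = np.empty(spectra.shape, float)
    for c in range(spectra.shape[1]):
        baseline[:, c] = np.median(padded[:, c:c + window], axis=1)
    resid = spectra - baseline
    mad = np.median(np.abs(resid - np.median(resid)), axis=1, keepdims=True)
    sigma = 1.4826 * np.maximum(mad, 1e-12)   # robust std estimate per dump
    return np.abs(resid) > nsigma * sigma     # boolean RFI mask
```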
Analysis of damage done by radiation in a polymer characterized by the optical properties of its polished surfaces, and by uniformity and chemical resistance better than acrylic; it is resistant up to 150 degrees centigrade and weighs approximately half as much as glass. An objective of this work is the development of a method that analyzes, in an automated form, the surface damage induced by radiation in plastic materials by means of an image analyzer. (Author)
This paper describes three successive studies on the ageing of the protection automation of nuclear power plants. These studies were aimed at developing a methodology for experience-based ageing analysis, and at applying it to identify the components most critical from the ageing and safety points of view. The analyses also resulted in suggestions for improving data collection systems for the purpose of further ageing analyses. (author)
Hopkins, Daniel J.; King, Gary
The increasing availability of digitized text presents enormous opportunities for social scientists. Yet hand coding many blogs, speeches, government records, newspapers, or other sources of unstructured text is infeasible. Although computer scientists have methods for automated content analysis, most are optimized to classify individual documents, whereas social scientists instead want generalizations about the population of documents, such as the proportion in a given category. Unfortunately...
Chaplyga, V.; Nyemkova, E.; Ivanishin, S.; Shandra, Z.
The article is devoted to the administration of access rights in the Role-Based Access Control model for corporate banking networks. Four main functional roles are proposed in accordance with the international standard CobiT. In terms of security, the problem of remote connectivity to the information resources of banks is similar to the BYOD problem. An algorithm for the automated control of remote access is proposed. The algorithm is based on the separation of the work area in ...
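For readers unfamiliar with role-based access control, a toy permission check looks as follows; the four role names and their permission sets are invented placeholders, not the roles the article derives from CobiT:

```python
# Minimal sketch of a role-based access check. Role names and
# permissions below are hypothetical examples.
ROLE_PERMISSIONS = {
    "business_user":  {"read_reports"},
    "it_operator":    {"read_reports", "run_jobs"},
    "security_admin": {"read_reports", "manage_access"},
    "auditor":        {"read_reports", "read_audit_log"},
}

def is_allowed(user_roles, permission):
    """Grant access if any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in user_roles)

assert is_allowed({"auditor"}, "read_audit_log")
assert not is_allowed({"business_user"}, "manage_access")
```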
Ruel, Silvain; De Smet, Olivier; Faure, Jean-Marc
Networked automation architectures with Ethernet-based fieldbuses instead of traditional fieldbuses are used more and more often in industry, even for critical systems such as chemical or nuclear power plants. The strong safety requirements of these processes make it necessary to evaluate the time performances of these complex architectures. Formal verification techniques are promising solutions for reaching this objective. Hence, this paper focuses on the applicability of formal verification techniques to ...
ABSTRACT. Oulu University of Applied Sciences, Degree Programme in Information Technology. Author: Jeveen Shrestha. Title of the bachelor's thesis: Web Application Development for Building Automation Device (Heating System) in Local Network. Supervisor: Pekka Alaluukas. Term and year of completion: Spring 2016. Pages: 37. After doing practical training at Ouman Oy in the summer of 2015, I was given a project to develop a web application that would communicate with their heating...
This paper proposes a four-stage method, comprising denoising, feature extraction, optimization and classification, for the detection of premature ventricular contractions. In the first stage, we investigate the application of wavelet denoising to noise reduction of multi-channel high-resolution ECG signals; in this stage, the stationary wavelet transform is used. The feature extraction module extracts ten ECG morphological features and one timing-interval feature. Then a number of radial basis function (RBF) neural networks with different values of the spread parameter are designed, and their ability to classify three different classes of ECG signals is compared. A genetic algorithm is used to find the best values of the RBF parameters. A classification accuracy of 100% for the training dataset, 95.66% for the testing dataset and an overall detection accuracy of 95.83% were achieved over seven files from the MIT/BIH arrhythmia database.
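The first stage can be sketched with PyWavelets; the wavelet choice, decomposition level and universal soft threshold below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
import pywt

def swt_denoise(signal, wavelet="db4", level=3):
    """Denoise one ECG channel with the stationary wavelet transform.

    Detail coefficients are soft-thresholded with the universal
    threshold; this is a generic denoising recipe, shown only to
    illustrate the SWT stage.
    """
    n = len(signal) - len(signal) % 2 ** level  # SWT needs a suitable length
    coeffs = pywt.swt(np.asarray(signal[:n], float), wavelet, level=level)
    out = []
    for approx, detail in coeffs:
        sigma = np.median(np.abs(detail)) / 0.6745     # noise estimate
        thr = sigma * np.sqrt(2 * np.log(n))
        out.append((approx, pywt.threshold(detail, thr, mode="soft")))
    return pywt.iswt(out, wavelet)
```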
Hamilton, Peter W; Wang, Yinhai; Boyd, Clinton; James, Jacqueline A; Loughrey, Maurice B; Hougton, Joseph P; Boyle, David P; Kelly, Paul; Maxwell, Perry; McCleary, David; Diamond, James; McArt, Darragh G; Tunstall, Jonathon; Bankhead, Peter; Salto-Tellez, Manuel
The discovery and clinical application of molecular biomarkers in solid tumors increasingly relies on nucleic acid extraction from FFPE tissue sections and subsequent molecular profiling. This in turn requires the pathological review of haematoxylin & eosin (H&E) stained slides, to ensure sample quality and tumor DNA sufficiency, by visually estimating the percentage of tumor nuclei, and tumor annotation for manual macrodissection. In this study on NSCLC, we demonstrate considerable variation in tumor nuclei percentage between pathologists, potentially undermining the precision of NSCLC molecular evaluation and emphasising the need for quantitative tumor evaluation. We subsequently describe the development and validation of a system called TissueMark for automated tumor annotation and percentage tumor nuclei measurement in NSCLC using computerized image analysis. Evaluation of 245 NSCLC slides showed precise automated tumor annotation of cases using TissueMark, strong concordance with manually drawn boundaries and identical EGFR mutational status, following manual macrodissection from the image analysis generated tumor boundaries. Automated analysis of cell counts for percentage tumor measurements by TissueMark showed reduced variability and significant correlation with manual estimates, supporting the use of automated image analysis in preparing tissue samples for molecular profiling in discovery and diagnostics. PMID:26317646
Workman, Gary L.
The use of eddy current techniques for characterizing flaws in graphite-based filament-wound cylindrical structures is described. A major emphasis was also placed upon incorporating artificial intelligence techniques into the signal analysis portion of the inspection process. Developing an eddy current scanning system using a commercial robot for inspecting graphite structures (and others) was a goal of the overall concept and is essential for the final implementation of the expert-system interpretation. Manual scans, as performed in the preliminary work here, do not provide sufficiently reproducible eddy current signatures to be easily built into a real-time expert system. The expert-systems approach to eddy current signal analysis requires that a suitable knowledge base exist in which correct decisions as to the nature of a flaw can be made. A robotic workcell using eddy current transducers for the inspection of carbon filament materials with improved sensitivity was developed. The improved coupling efficiencies achieved with the E-probes and horseshoe probes are exceptional for graphite fibers. The eddy current supervisory system and expert system were partially developed on a MacIvory system. Continued utilization of finite element models for predetermining eddy current signals was shown to be useful in this work, both for understanding how electromagnetic fields interact with graphite fibers and for determining how to develop the knowledge base. Sufficient data were taken to indicate that the E-probe and the horseshoe probe can be useful eddy current transducers for inspecting graphite fiber components. The component lacking at this time is a probe large enough to have sensitivity in both the far and near fields of a thick graphite epoxy component.
Hong SeungHo; Li XiaoHui; Fang KangLing
The use of wireless sensor networks in home automation (WSNHA) is attractive due to their characteristics of self-organization, high sensing fidelity, low cost, and potential for rapid deployment. Although the AODVjr routing algorithm in IEEE 802.15.4/ZigBee and other routing algorithms have been designed for wireless sensor networks, not all are suitable for WSNHA. In this paper, we propose a location-based self-adaptive routing algorithm for WSNHA called WSNHA-LBAR. It confines route discovery...
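To make the location-based idea concrete, here is a toy greedy geographic forwarder in Python; it is a generic illustration of position-aware next-hop selection, not the WSNHA-LBAR algorithm itself, which instead confines AODVjr-style route discovery to a region between source and destination:

```python
import math

def greedy_geographic_next_hop(node, neighbors, dest):
    """Pick the neighbor geographically closest to the destination.

    node, dest: (x, y) positions; neighbors: {id: (x, y)}. Returns None
    when no neighbor makes progress (a greedy local minimum).
    """
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    best, best_d = None, dist(node, dest)
    for nid, pos in neighbors.items():
        d = dist(pos, dest)
        if d < best_d:              # only forward if progress is made
            best, best_d = nid, d
    return best

hop = greedy_geographic_next_hop((0, 0), {1: (3, 4), 2: (6, 1)}, (10, 0))
print(hop)  # neighbor 2 is closer to the destination
```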
The tourism industry is becoming one of the world's largest economic sectors, and is expected to become the world's leading industry by 2020. Previous studies have focused on several aspects of this industry, including sociology, geography, and tourism management and development, but have paid less attention to analytical and quantitative approaches. This study introduces some network analysis techniques and measures aimed at studying the structural characteristics of tourism networks. More specifically, it presents a methodology to analyze tourism destination networks. We apply the methodology to analyze Mazandaran's tourism destination network, one of the most famous tourism areas of Iran.
Fabiano de Jesus Santos
Introduction: The most common cause of diagnostic error is related to errors in laboratory tests as well as errors in the interpretation of results. In order to reduce them, laboratories currently have modern equipment which provides accurate and reliable results. The development of automation has revolutionized laboratory procedures in Brazil and worldwide. Objective: To determine the prevalence of microscopic findings in blood slides that are concordant or discordant with results obtained by fully automated procedures. Materials and method: From January to July 2013, 1,000 blood-count slides were analyzed. Automated analysis was performed on latest-generation equipment, whose methodology is based on electrical impedance and which is able to quantify all the formed elements of the blood across 22 parameters. Microscopy was performed simultaneously by two experts. Results: The data showed that only 42.70% of results were concordant, against 57.30% discordant. The main discordant findings were: changes in red blood cells, 43.70% (n = 250); white blood cells, 38.46% (n = 220); and platelet counts, 17.80% (n = 102). Discussion: The data show that some results are not consistent with the clinical or physiological state of the individual and cannot be explained because they were not investigated, which may compromise the final diagnosis. Conclusion: It was observed that it is of fundamental importance that qualitative microscopic analysis be performed in parallel with automated analysis in order to obtain reliable results, with a positive impact on prevention, diagnosis, prognosis, and therapeutic follow-up.
Arenas, Alex; Danon, Leon; Diaz-Guilera, Albert; Gleiser, Pablo M.; Guimera, Roger
We present an empirical study of different social networks obtained from digital repositories. Our analysis reveals the community structure and provides a useful visualising technique. We investigate the scaling properties of the community size distribution, and find that all the networks exhibit power-law scaling in the community size distributions, with exponent either -0.5 or -1. Finally, we find that the networks' community structure is topologically self-similar using the Horton-Strahler index...
Naim, A; Lahav, O; Sodré, L; Storrie-Lombardi, M C
We train Artificial Neural Networks to classify galaxies based solely on the morphology of the galaxy images as they appear on blue survey plates. The images are reduced and morphological features such as bulge size and the number of arms are extracted, all in a fully automated manner. The galaxy sample was first classified by 6 independent experts. We use several definitions for the mean type of each galaxy, based on those classifications. We then train and test the network on these features. We find that the rms error of the network classifications, as compared with the mean types of the expert classifications, is 1.8 Revised Hubble Types. This is comparable to the overall rms dispersion between the experts. This result is robust and almost completely independent of the network architecture used.
In recent years, with the popularity of computers and smartphones and the development of intelligent buildings in the electronics industry, people's requirements for their living environment are gradually changing, and intelligent home buildings have become a new focus for purchasers. The networked home automation system, which relies on advanced network technology to connect air conditioning, lighting, security, curtains, TV, water heaters and other home subsystems into a local area network, becomes a networked control system. μC/OS is a real-time operating system with free open-source code, a compact structure and a preemptive real-time kernel. In this paper, the author focuses on the design of a home master controller based on the AMAZON multimedia processor and the μC/OS-II real-time operating system, and achieves remote access connection and control through Ethernet.
In this paper the availabilities of some network topologies are analyzed and calculated. Software based on an exact all-terminal graph reduction algorithm is developed and applied. Results, conclusions and solutions are presented, demonstrated and discussed.
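All-terminal availability has a simple brute-force definition that is useful for checking results on toy graphs: sum the probabilities of all edge states that leave the network connected. The sketch below does exactly that (exponential in the number of edges, so only for small examples); it is a naive stand-in for the paper's graph reduction algorithm, which the abstract does not describe:

```python
from itertools import product

def all_terminal_reliability(nodes, edges, p):
    """Exact all-terminal reliability by enumerating edge states.

    Each edge works independently with probability p.
    """
    def connected(up_edges):
        parent = {n: n for n in nodes}
        def find(x):                       # union-find with path halving
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in up_edges:
            parent[find(u)] = find(v)
        return len({find(n) for n in nodes}) == 1

    total = 0.0
    for state in product([True, False], repeat=len(edges)):
        up = [e for e, s in zip(edges, state) if s]
        if connected(up):
            k = len(up)
            total += p ** k * (1 - p) ** (len(edges) - k)
    return total

# A 4-node ring with link availability 0.9 -> p^4 + 4 p^3 (1-p) = 0.9477.
print(all_terminal_reliability([0, 1, 2, 3],
                               [(0, 1), (1, 2), (2, 3), (3, 0)], 0.9))
```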
Magradze, Erekle; Nadal, Jordi; Quadt, Arnulf; Kawamura, Gen; Musheghyan, Haykuhi
High-throughput computing platforms consist of a complex infrastructure and provide a number of services prone to failures. To mitigate the impact of failures on the quality of the provided services, constant monitoring and timely reaction are required, which is impossible without automation of the system administration processes. This paper introduces a way of automating the analysis of monitoring information to provide long- and short-term predictions of the service response time (SRT) for the mass storage and batch systems and to identify the status of a service at a given time. The approach for the SRT predictions is based on the Adaptive Neuro Fuzzy Inference System (ANFIS). An evaluation of the approaches is performed on real monitoring data from the WLCG Tier 2 center GoeGrid. Ten-fold cross-validation results demonstrate the high efficiency of both approaches in comparison to known methods.
Lee, Woojin; Kim, Juil; Kang, JangMook
In sensor networks, nodes must often operate in a demanding environment, facing restrictions such as restricted computing resources, unreliable wireless communication and power shortages. Such factors make the development of ubiquitous sensor network (USN) applications challenging. To help developers construct a large amount of node software for sensor network applications easily and rapidly, this paper proposes an approach to the automated construction of node software for USN applications using attributes. In the proposed technique, application construction proceeds by first developing a model for the sensor network and then designing node software by setting the values of the predefined attributes. After that, the sensor network model and the design of the node software are verified. The final source code of the node software is automatically generated from the sensor network model. We illustrate the efficiency of the proposed technique by using a gas/light monitoring application through a case study of a Gas and Light Monitoring System based on the Nano-Qplus operating system. We evaluate the technique using a quantitative metric, the memory size of the execution code for the node software. Using the proposed approach, developers are able to easily construct sensor network applications and rapidly generate a large number of node software modules at a time in a ubiquitous sensor network environment. PMID:22163678
The associative memory feature of the Hopfield-type recurrent neural network is used for pattern storage and pattern authentication. This paper outlines an optimization relaxation approach for signature verification based on the Hopfield neural network (HNN), which is a recurrent network. The standard sample signature of the customer is cross-matched with the one supplied on the cheque. The difference percentage is obtained by counting the differing pixels in the two images. The network topology is built so that each pixel in the difference image is a neuron in the network. Each neuron is characterized by its state, which in turn signifies whether the particular pixel has changed. The network converges to a stable condition based on the energy function derived in the experiments. The Hopfield model allows each node to take on one of two binary state values (changed/unchanged) for each pixel. The performance of the proposed technique is evaluated by applying it to various binary and grayscale images. This paper contributes an automated scheme for the verification of authentic signatures on bank cheques. The derived energy function allows a trade-off between the influence of a neuron's neighborhood and its own criterion. This device is able to recall as well as complete partially specified inputs. The network is trained via a storage prescription that forces stable states to correspond to (local) minima of a network 'energy' function.
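A minimal Hopfield memory illustrating the storage prescription and the energy-minimizing recall dynamics is sketched below; the tiny 8-pixel "signature" and all parameters are invented for illustration:

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian storage prescription for +/-1 patterns; zero diagonal."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / n

def hopfield_recall(W, state, steps=100, seed=0):
    """A fixed number of asynchronous updates; each flip never
    increases the energy E = -0.5 * s^T W s."""
    rng = np.random.default_rng(seed)
    s = state.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s, -0.5 * s @ W @ s

# Store one 'signature' pattern and recall it from a corrupted copy.
p = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = hopfield_store(p[None, :])
noisy = p.copy()
noisy[0] *= -1                     # flip one 'pixel'
recalled, energy = hopfield_recall(W, noisy)
print(np.array_equal(recalled, p), energy)
```

Because every asynchronous update keeps the energy the same or lower, recall settles into a stored pattern sitting at a (local) minimum of the energy function.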
This is the Handbook of Network Analysis, the companion article to the KONECT (Koblenz Network Collection) project. This project is intended to collect network datasets, analyse them systematically, and provide both datasets and the underlying network analysis code to researchers. This article outlines the project, gives all definitions used within the project, reviews all network statistics used, reviews all network plots used, and gives a brief overview of the API used by KONECT.
Cherifi, Chantal; Santucci, Jean-François
Along with a continuously growing number of publicly available Web services (WS), we are witnessing a rapid development in semantic-related web technologies, which has led to the appearance of semantically described WS. In this work, we perform a comparative analysis of the syntactic and semantic approaches used to describe WS, from a complex network perspective. First, we extract syntactic and semantic WS dependency networks from a collection of publicly available WS descriptions. Then, we take advantage of tools from the complex network field to analyze them and determine their topological properties. We show that WS dependency networks exhibit some of the typical characteristics observed in real-world networks, such as the small-world and scale-free properties, as well as community structure. By comparing syntactic and semantic networks through their topological properties, we show that the introduction of semantics in WS descriptions allows the dependencies between parameters to be modeled more accurately, which in turn could l...
To be productive and profitable in a modern semiconductor fabrication environment, large amounts of manufacturing data must be collected, analyzed, and maintained. This includes data collected from in- and off-line wafer inspection systems and from the process equipment itself. This data is increasingly being used to design new processes, control and maintain tools, and to provide the information needed for rapid yield learning and prediction. Because of increasing device complexity, the amount of data being generated is outstripping the yield engineer's ability to effectively monitor and correct unexpected trends and excursions. The 1997 SIA National Technology Roadmap for Semiconductors highlights a need to address these issues through "automated data reduction algorithms to source defects from multiple data sources and to reduce defect sourcing time." SEMATECH and the Oak Ridge National Laboratory have been developing new strategies and technologies for providing the yield engineer with higher levels of assisted data reduction for the purpose of automated yield analysis. In this article, we will discuss the current state of the art and trends in yield management automation. copyright 1999 American Vacuum Society
The Highway Electrification and Automation Technologies Regional Impacts Analysis Project addresses the transportation-related problems of freeway congestion, air pollution, and dependence on fossil fuels in southern California. This report presents documentation of the basis for the impacts analysis. It contains sections on the data collected, the baseline forecast for 2025, and the electrification and automation specification scenarios. This report constitutes the final report for Phase I of the project.
Allen, Phillip A.; Wells, Douglas N.
Using automated and standardized computer tools to calculate the pertinent test result values has several advantages: 1. allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form; 2. eliminating errors associated with the interpretation and programming of analysis procedures from the text of test standards; 3. lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling to achieve sound results; 4. providing one computer tool and/or one set of solutions for all users, for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty. Other existing ASTM standards that cover complicated phenomena use standard computer programs: 1. ASTM C1340/C1340M-10, Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program; 2. ASTM F2815, Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program; 3. ASTM E2807, Standard Specification for 3D Imaging Data Exchange, Version 1.0. The verification, validation, and round-robin processes required of a computer tool closely parallel the methods used to ensure the validity of equations included in a test standard. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense to the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.
Abstract Background Microarrays are routinely used to assess mRNA transcript levels on a genome-wide scale. Large amounts of microarray data are now available in several databases, and new experiments are constantly being performed. In spite of this fact, few and limited tools exist for quickly and easily analyzing the results. Microarray analysis can be challenging for researchers without the necessary training, and it can be time-consuming for service providers with many users. Results To address these problems we have developed an automated microarray data analysis (AMDA) software package, which provides scientists with an easy and integrated system for the analysis of Affymetrix microarray experiments. AMDA is free and is available as an R package. It is based on the Bioconductor project, which provides a number of powerful bioinformatics and microarray analysis tools. This automated pipeline integrates different functions available in the R and Bioconductor projects with newly developed functions. AMDA covers all of the steps, performing a full data analysis, including image analysis, quality controls, normalization, selection of differentially expressed genes, clustering, correspondence analysis and functional evaluation. Finally, a LaTeX document is dynamically generated depending on the performed analysis steps. The generated report contains comments and analysis results as well as references to several files for deeper investigation. Conclusion AMDA is freely available as an R package under the GPL license. The package as well as an example analysis report can be downloaded from the Services/Bioinformatics section of the Genopolis website: http://www.genopolis.it/
Presents insight into the social behaviour of animals (including the study of animal tracks and learning by members of the same species). This book provides web-based evidence of social interaction, perceptual learning, information granulation and the behaviour of humans and affinities between web-based social networks
Decker, Arthur J.; Buggele, Alvin E.
Artificial neural networks were used successfully to sequence operations in a small, recently modernized, supersonic wind tunnel at NASA-Lewis Research Center. The neural nets generated correct estimates of shadowgraph patterns, pressure sensor readings and mach numbers for conditions occurring shortly after startup and extending to fully developed flow. Artificial neural networks were trained and tested for estimating: sensor readings from shadowgraph patterns, shadowgraph patterns from shadowgraph patterns and sensor readings from sensor readings. The 3.81 by 10 in. (0.0968 by 0.254 m) tunnel was operated with its mach 2.0 nozzle, and shadowgraph was recorded near the nozzle exit. These results support the thesis that artificial neural networks can be combined with current workstation technology to automate wind tunnel operations.
This paper tackles the representation of routes carried by a physical network infrastructure on a map. In particular, the paper examines the case where each route is represented by a separate colored linear symbol offset from the physical network segments and from other routes, as on public transit maps with bus routes offset from roads. In this study, the objective is to automate the placement of such route symbols while maximizing their legibility, especially at junctions. The problem is modeled as a constraint optimization problem. Legibility criteria are identified and formalized as constraints to optimize, focusing on the case of hiking routes in a physical network composed of roads and pedestrian paths. Two solving methods are tested, based on backtracking and simulated annealing metaheuristics, respectively. Encouraging results obtained on real data are presented and discussed.
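The simulated annealing variant can be sketched generically. In the fragment below the state assigns each route an offset and the cost function is left abstract (for instance, a count of avoidable crossings at junctions); the offset values, cooling schedule and cost are all placeholder assumptions, since the paper's constraint model is much richer:

```python
import math
import random

def anneal_offsets(routes, cost, offsets=(-2, -1, 1, 2),
                   t0=1.0, cooling=0.995, steps=20000, seed=0):
    """Generic simulated annealing over route-symbol offset assignments.

    routes: list of route ids; cost: function mapping {route: offset}
    to a legibility penalty (lower is better).
    """
    rng = random.Random(seed)
    state = {r: rng.choice(offsets) for r in routes}
    c = cost(state)
    best, best_c, t = dict(state), c, t0
    for _ in range(steps):
        r = rng.choice(routes)                 # perturb one route's offset
        old = state[r]
        state[r] = rng.choice([o for o in offsets if o != old])
        new_c = cost(state)
        if new_c <= c or rng.random() < math.exp((c - new_c) / t):
            c = new_c                          # accept (possibly uphill) move
            if c < best_c:
                best, best_c = dict(state), c
        else:
            state[r] = old                     # reject the move
        t *= cooling                           # cool down
    return best, best_c
```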
Demir Sumeyra U
Abstract Background Imaging of the human microcirculation in real-time has the potential to detect injuries and illnesses that disturb the microcirculation at earlier stages and may improve the efficacy of resuscitation. Despite advanced imaging techniques to monitor the microcirculation, there are currently no tools for the near real-time analysis of the videos produced by these imaging systems. An automated system tool that can extract microvasculature information and monitor changes in tissue perfusion quantitatively might be invaluable as a diagnostic and therapeutic endpoint for resuscitation. Methods The experimental algorithm automatically extracts the microvascular network and quantitatively measures changes in the microcirculation. There are two main parts in the algorithm: video processing and vessel segmentation. Microcirculatory videos are first stabilized in a video processing step to remove motion artifacts. In the vessel segmentation process, the microvascular network is extracted using multiple-level thresholding and pixel verification techniques. Threshold levels are selected using histogram information from a set of training video recordings. Pixel-by-pixel differences are calculated throughout the frames to identify active blood vessels and capillaries with flow. Results Sublingual microcirculatory videos were recorded from anesthetized swine at baseline and during hemorrhage using a hand-held Side-stream Dark Field (SDF) imaging device to track changes in the microvasculature during hemorrhage. Automatically segmented vessels in the recordings were analyzed visually, and the functional capillary density (FCD) values calculated by the algorithm were compared for both healthy baseline and hemorrhagic conditions. These results were compared to independently made FCD measurements using a well-known semi-automated method. Results of the fully automated algorithm demonstrated a significant decrease of FCD values. Similar, but more variable FCD...
Ishihara, Kiyoomi; Sugawara, Masayuki
The National Space Development Agency of Japan (NASDA) is to perform experimental operations to acquire the technology necessary for future inter-satellite communications configured with a data relay satellite. This paper provides an overview of the functions of the experimental ground system which NASDA has developed for the Engineering Test Satellite VI (ETS-VI) Data Relay and Tracking Experiment, and introduces the Space Network System Operations Procedure (SNSOP) method with an example of a Ka-band Single Access (KSA) acquisition sequence. To reduce operational load, SNSOP is developed with the concept of automated control and monitoring of both the ground terminal and the data relay satellite. To perform acquisition and tracking operations smoothly, the information exchange with user spacecraft controllers is automated by SNSOP functions.
Milojko V. Jevtović
A topological analysis of the structure of telecommunications networks is a very interesting topic in network research, as well as a key issue in network design and planning. Satisfying multiple criteria in terms of the locations of switching nodes and their connectivity, with respect to the requirements for capacity, transmission speed, reliability, availability and cost, constitutes the main research objective. There are three ways of presenting the topology of telecommunications networks: the table, matrix or graph method. The table method is suitable for a network with a relatively small number of nodes in relation to the number of links. The matrix method involves the formation of a connection matrix in which the columns represent source traffic nodes and the rows the switching systems that belong to the destination. In the graph method, the network nodes are connected via directional or unidirectional links. We can thus easily analyze the structural parameters of telecommunications networks. This paper presents a mathematical analysis of the star, ring, fully connected loop and grid (matrix) topologies, as well as the topology based on the shortest path tree. For each of these topologies, expressions for determining the number of branches, the mean reliability, the mean path length and the average link length are given in tables. For a fully connected loop network with five nodes, the values of all topological parameters are calculated. Based on the topological parameters, relationships representing integral and distributed indicators of reliability are given in this work, together with the values for the particular network. The main objectives of the topology optimization of telecommunications networks are: achieving minimum complexity, maximum capacity, the shortest message transfer path, the maximum speed of communication and maximum economy. The performance of telecommunications networks is...
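The branch counts for the classic topologies are standard graph-theory results and easy to tabulate programmatically; the helper below uses the textbook counts (the paper's own tabulated expressions for reliability and link lengths are not reproduced here):

```python
def branch_count(topology, n):
    """Number of links (branches) for classic n-node topologies."""
    if topology == "star":
        return n - 1                # hub connected to every other node
    if topology == "ring":
        return n                    # closed loop
    if topology == "full_mesh":
        return n * (n - 1) // 2     # fully connected loop network
    if topology == "tree":
        return n - 1                # shortest-path (spanning) tree
    raise ValueError(topology)

for t in ("star", "ring", "full_mesh", "tree"):
    print(t, branch_count(t, 5))    # full mesh with 5 nodes -> 10 links
```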
Ricci, D; Ayala, C; Ramón-Fox, F G; Michel, R; Navarro, S; Wang, S-Y; Zhang, Z-W; Lehner, M J; Nicastro, L; Reyes-Ruiz, M
A preliminary data analysis of the stellar light curves obtained by the robotic telescopes of the TAOS project is presented. We selected a data run relative to one of the stellar fields observed by three of the four TAOS telescopes, and we investigate the common trend and the correlation between the light curves. We propose two ways to remove these trends and show the preliminary results. A project aimed at flagging interesting behaviors, such as stellar variability, and at setting up automated follow-up with the San Pedro Mártir facilities is under way.
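A common baseline for this kind of detrending is to divide each curve by the ensemble median trend of the field. The sketch below shows that baseline, in the spirit of, but not identical to, the two methods the paper proposes:

```python
import numpy as np

def remove_common_trend(fluxes):
    """Divide each light curve by the ensemble median trend.

    fluxes: (n_stars, n_epochs) array. Each curve is first normalized
    by its own median; the per-epoch median across stars then estimates
    the common (instrumental/atmospheric) trend shared by the field.
    """
    norm = fluxes / np.median(fluxes, axis=1, keepdims=True)
    trend = np.median(norm, axis=0)          # common trend per epoch
    return norm / trend, trend
```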
Galvagno, E.; Velardocchia, M.; Vigliani, A.
The paper presents the kinematic and dynamic analysis of a power-shift automated manual transmission (AMT) characterised by a wet clutch, called the assist clutch (ACL), replacing the fifth-gear synchroniser. This torque-assist mechanism becomes a torque transfer path during gearshifts, in order to overcome a typical dynamic problem of AMTs, that is, the interruption of driving force. The mean power contributions during gearshifts are computed for different engine and ACL interventions, thus allowing useful conclusions to be drawn for developing the control algorithms. The simulation results prove the advantages of the analysed transmission in terms of gearshift quality and ride comfort.
Fiehn, Anne-Marie Kanstrup; Kristensson, Martin; Engel, Ulla;
PURPOSE: The aim of this study was to develop an automated image analysis software to measure the thickness of the subepithelial collagenous band in colon biopsies with collagenous colitis (CC) and incomplete CC (CCi). The software measures the thickness of the collagenous band on microscopic...... agreement between the four pathologists and the VG app was κ=0.71. CONCLUSION: In conclusion, the Visiopharm VG app is able to measure the thickness of a sub-epithelial collagenous band in colon biopsies with an accuracy comparable to the performance of a pathologist and thereby provides a promising...
Mukherjee, Amit; Jenkins, Brian; Fang, Cheng; Radke, Richard J; Banker, Gary; Roysam, Badrinath
This paper describes an automated method to profile the velocity patterns of small organelles (BDNF granules) being transported along a selected section of axon of a cultured neuron imaged by time-lapse fluorescence microscopy. Instead of directly detecting the granules as in conventional tracking, the proposed method starts by generating a two-dimensional spatio-temporal map (kymograph) of the granule traffic along an axon segment. Temporal sharpening during the kymograph creation helps to highlight granule movements while suppressing clutter due to stationary granules. A voting algorithm defined over orientation distribution functions is used to refine the locations and velocities of the granules. The refined kymograph is analyzed using an algorithm inspired from the minimum set cover framework to generate multiple motion trajectories of granule transport paths. The proposed method is computationally efficient, robust to significant levels of noise and clutter, and can be used to capture and quantify trends in transport patterns quickly and accurately. When evaluated on a collection of image sequences, the proposed method was found to detect granule movement events with 94% recall rate and 82% precision compared to a time-consuming manual analysis. Further, we present a study to evaluate the efficacy of velocity profiling by analyzing the impact of oxidative stress on granule transport in which the fully automated analysis correctly reproduced the biological conclusion generated by manual analysis. PMID:21330183
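The kymograph construction itself is straightforward to sketch; here the paper's temporal sharpening is approximated by subtracting the per-pixel temporal median, and the path sampling is simplified to given pixel coordinates:

```python
import numpy as np

def build_kymograph(stack, path):
    """Build a kymograph from a time-lapse stack along an axon path.

    stack: (T, H, W) fluorescence frames; path: list of (row, col)
    pixels tracing the axon segment. Row t of the kymograph is the
    intensity profile along the path at frame t, so moving granules
    appear as slanted streaks; subtracting the temporal median
    suppresses stationary granules.
    """
    rows = np.array([p[0] for p in path])
    cols = np.array([p[1] for p in path])
    kymo = stack[:, rows, cols].astype(float)        # (T, len(path))
    kymo -= np.median(kymo, axis=0, keepdims=True)   # remove static signal
    return np.clip(kymo, 0, None)
```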
The on-going development of an automatic target recognition and tracking system at the Jet Propulsion Laboratory is presented. This system is an optical pattern recognition neural network (OPRNN) that is an integration of an innovative optical parallel processor and a feature-extraction-based neural net training algorithm. The parallel optical processor provides high speed and vast parallelism as well as full shift invariance. The neural network algorithm enables simultaneous discrimination of multiple noisy targets in spite of their scales, rotations, perspectives, and various deformations. This fully developed OPRNN system can be effectively utilized for the automated spacecraft recognition and tracking that will lead to success in the Automated Rendezvous and Capture (AR&C) of the unmanned Cargo Transfer Vehicle (CTV). One of the most powerful optical parallel processors for automatic target recognition is the multichannel correlator. With the inherent advantages of parallel processing capability and shift invariance, multiple objects can be simultaneously recognized and tracked using this multichannel correlator. This target tracking capability can be greatly enhanced by utilizing a powerful feature-extraction-based neural network training algorithm such as the neocognitron. The OPRNN, currently under investigation at JPL, is constructed with an optical multichannel correlator where holographic filters have been prepared using the neocognitron training algorithm. The computation speed of the neocognitron-type OPRNN is up to 10^14 analog connections/sec, enabling the OPRNN to outperform its state-of-the-art electronic counterpart by at least two orders of magnitude.
Archer, M.; Erickson, J. S.; Hilliard, L. R.; Howell, P. B., Jr.; Stenger, D. A.; Ligler, F. S.; Lin, B.
The increasing demand for portable devices to detect and identify pathogens represents an interdisciplinary effort between engineering, materials science, and molecular biology. Automation of both sample preparation and analysis is critical for performing multiplexed analyses on real world samples. This paper selects two possible components for such automated portable analyzers: modified silicon structures for use in the isolation of nucleic acids and a sheath flow system suitable for automated microflow cytometry. Any detection platform that relies on the genetic content (RNA and DNA) present in complex matrices requires careful extraction and isolation of the nucleic acids in order to ensure their integrity throughout the process. This sample pre-treatment step is commonly performed using commercially available solid phases along with various molecular biology techniques that require multiple manual steps and dedicated laboratory space. Regardless of the detection scheme, a major challenge in the integration of total analysis systems is the development of platforms compatible with current isolation techniques that will ensure the same quality of nucleic acids. Silicon is an ideal candidate for solid phase separations since it can be tailored structurally and chemically to mimic the conditions used in the laboratory. For analytical purposes, we have developed passive structures that can be used to fully ensheath one flow stream with another. As opposed to traditional flow focusing methods, our sheath flow profile is truly two dimensional, making it an ideal candidate for integration into a microfluidic flow cytometer. Such a microflow cytometer could be used to measure targets captured on either antibody- or DNA-coated beads.
Recent developments in complex network analysis, based largely on graph theory, have been used to study the brain's network organization. The brain is a complex system that can be represented by a graph, a mathematical representation that is useful for studying brain connectivity. Nodes in the brain can be identified by dividing its volume into regions of interest, and links can be identified by calculating a measure of dependence between pairs of regions whose ac...
An automated computer analysis of ventilation (Kr-81m) and perfusion (Tc-99m) lung images has been devised that produces a graphical image of the distribution of ventilation and perfusion, and of ventilation-perfusion ratios. The analysis has overcome the following problems: the identification of the midline between two lungs and the lung boundaries, the exclusion of extrapulmonary radioactivity, the superimposition of lung images of different sizes, and the format for presentation of the data. Therefore, lung images of different sizes and shapes may be compared with each other. The analysis has been used to develop normal ranges from 55 volunteers. Comparison of younger and older age groups of men and women show small but significant differences in the distribution of ventilation and perfusion, but no differences in ventilation-perfusion ratios
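The core computation, forming a masked pixel-wise ratio of the two normalized images, can be sketched as follows; the normalization, threshold and masking rule are assumptions echoing, not reproducing, the published analysis:

```python
import numpy as np

def vq_ratio_image(ventilation, perfusion, threshold=0.05):
    """Pixel-wise ventilation-perfusion ratio image.

    ventilation, perfusion: 2-D count arrays (e.g. Kr-81m and Tc-99m
    images) assumed already registered. Each image is normalized to
    unit total; pixels below a count threshold are masked to exclude
    extrapulmonary background.
    """
    v = ventilation / ventilation.sum()
    q = perfusion / perfusion.sum()
    mask = (v > threshold * v.max()) & (q > threshold * q.max())
    ratio = np.full(v.shape, np.nan)
    ratio[mask] = v[mask] / q[mask]
    return ratio
```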
Arlys Michel Lastre Aleaga
Load flow is of great importance in supporting decision making and the planning of the generation, distribution and transmission of electricity. Ignorance of the values of this indicator, as well as its inappropriate prediction, hinders decision making and the efficiency of the electricity service, and can cause undesirable situations such as excess demand, overheating of the components that make up a substation, and incorrect planning of electricity generation and distribution. Given the need to predict the electric load flow of substations in Ecuador, this research proposes the concept for the development of an automated prediction system employing artificial neural networks.
Mohamed Adel Taher; Mostapha Abdeljawad
In this paper, the authors propose a new hybrid strategy (using artificial neural networks and hidden Markov models) for skill automation. The strategy is based on the concept of an "adaptive desired" that is introduced in the paper. The authors explain how using an adaptive desired can help a system for which an explicit model is not available or is difficult to obtain to smartly cope with environmental disturbances without requiring explicit rules specification (as with fuzzy syste...
Gomez, Carles; Paradells, Josep
Urban Automation Networks (UANs) are being deployed worldwide in order to enable Smart City applications. Given the crucial role of UANs, as well as their diversity, it is critically important to assess their properties and trade-offs. This article introduces the requirements and challenges for UANs, characterizes the main current and emerging UAN paradigms, provides guidelines for their design and/or choice, and comparatively examines their performance in terms of a variety of parameters including coverage, power consumption, latency, standardization status and economic cost. PMID:26378534
Rohde, John; Toftegaard, Thomas Skjødeberg
A novel tri-band antenna design for wireless sensor network devices in home automation applications is proposed. The design is based on a combination of a conventional monopole wire antenna and discrete distributed load impedances. The load impedances are employed to ensure the degrees of freedom...... necessary to obtain a simultaneous optimization of input impedance and current distribution on the antenna structure. A transmission line model is presented together with 3D finite-element simulation results and measurements on the physical antenna....
Gudkov, V.; Montealegre, V.
Generalized mutual entropy is defined for networks and applied for analysis of complex network structures. The method is tested for the case of computer simulated scale free networks, random networks, and their mixtures. The possible applications for real network analysis are discussed.
Waumans, Michaël C; Nicodème, Thibaut; Bersini, Hugues
In a world where complex networks are an increasingly important part of science, it is interesting to question how the new reading of social realities they provide applies to our cultural background and in particular, popular culture. Are authors of successful novels able to reproduce social networks faithful to the ones found in reality? Is there any common trend connecting an author's oeuvre, or a genre of fiction? Such an analysis could provide new insight on how we, as a culture, perceive human interactions and consume media. The purpose of the work presented in this paper is to define the signature of a novel's story based on the topological analysis of its social network of characters. For this purpose, an automated tool was built that analyses the dialogs in novels, identifies characters and computes their relationships in a time-dependent manner in order to assess the network's evolution over the course of the story. PMID:26039072
The purpose of this work is a unified and general treatment of activity in neural networks from a mathematical point of view. Possible applications of the theory presented are indicated throughout the text. However, they are not explored in detail for two reasons: first, the universal character of neural activity in nearly all animals requires some type of a general approach; secondly, the mathematical perspicuity would suffer if too many experimental details and empirical peculiarities were interspersed among the mathematical investigation. A guide to many applications is supplied by the references concerning a variety of specific issues. Of course the theory does not aim at covering all individual problems. Moreover there are other approaches to neural network theory (see e.g. Poggio-Torre, 1978) based on the different levels at which the nervous system may be viewed. The theory is a deterministic one reflecting the average behavior of neurons or neuron pools. In this respect the essay is writt...
Christopher Lee Dembia
A simple automated image analysis algorithm has been developed that processes consecutive images from high-speed, high-resolution digital images of burning fuel droplets. The droplets burn under conditions that promote spherical symmetry. The algorithm performs the tasks of edge detection of the droplet's boundary using a grayscale intensity threshold, and shape fitting of either a circle or an ellipse to the droplet's boundary. The results are compared to manual measurements of droplet diameters done with commercial software. Results show that it is possible to automate data analysis for consecutive droplet-burning images even in the presence of a significant amount of noise from soot formation. An adaptive grayscale intensity threshold provides the ability to extract droplet diameters for the wide range of noise encountered. In instances where soot blocks portions of the droplet, the algorithm manages to provide accurate measurements if a circle fit is used instead of an ellipse fit, as an ellipse can be too accommodating to the disturbance.
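The two tasks, threshold-based edge detection and circle fitting, can be sketched compactly; the fixed relative threshold and the algebraic (Kasa) least-squares circle fit below are illustrative assumptions, whereas the paper uses an adaptive threshold:

```python
import numpy as np

def fit_droplet_circle(image, rel_threshold=0.5):
    """Threshold a backlit droplet image and least-squares fit a circle.

    image: 2-D grayscale array with a dark droplet on a bright
    background; the droplet is assumed not to touch the image border.
    Returns the circle center (x, y) and radius in pixels.
    """
    thresh = image.min() + rel_threshold * (image.max() - image.min())
    dark = image < thresh
    # Boundary pixels: dark pixels with at least one bright 4-neighbor.
    interior = (np.roll(dark, 1, 0) & np.roll(dark, -1, 0)
                & np.roll(dark, 1, 1) & np.roll(dark, -1, 1))
    y, x = np.nonzero(dark & ~interior)
    # Kasa fit: solve x^2 + y^2 = 2a*x + 2b*y + c in least squares;
    # then r^2 = c + a^2 + b^2.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
    return (a, b), np.sqrt(c + a ** 2 + b ** 2)
```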
Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.
Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect, the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and the amount of information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the north of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been continuously in operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial-intelligence-based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into statistical composite data, it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and...
The objective was to develop tools for automating the identification of bony structures, to assess the reliability of this technique against manual raters, and to validate the resulting regions of interest against physical surface scans obtained from the same specimen. Artificial intelligence-based algorithms have been used for image segmentation, specifically artificial neural networks (ANNs). For this study, an ANN was created and trained to identify the phalanges of the human hand. The relative overlap between the ANN and a manual tracer was 0.87, 0.82, and 0.76, for the proximal, middle, and distal index phalanx bones respectively. Compared with the physical surface scans, the ANN-generated surface representations differed on average by 0.35 mm, 0.29 mm, and 0.40 mm for the proximal, middle, and distal phalanges respectively. Furthermore, the ANN proved to segment the structures in less than one-tenth of the time required by a manual rater. The ANN has proven to be a reliable and valid means of segmenting the phalanx bones from CT images. Employing automated methods such as the ANN for segmentation, eliminates the likelihood of rater drift and inter-rater variability. Automated methods also decrease the amount of time and manual effort required to extract the data of interest, thereby making the feasibility of patient-specific modeling a reality. (orig.)
Green, James L.
The Space Physics Analysis Network, or SPAN, is emerging as a viable method for solving an immediate communication problem for space and Earth scientists and has been operational for nearly 7 years. SPAN, together with its extension into Europe, uses computer-to-computer communications to provide mail, binary and text file transfer, and remote logon capability to over 1000 space science computer systems. The network has been used to successfully transfer real-time data to remote researchers for rapid data analysis, but its primary function is non-real-time applications. One of the major advantages of using SPAN is its independence from any particular spacecraft mission. Space science researchers using SPAN are located in universities, industry and government institutions across the United States and Europe, in fields such as magnetospheric physics, astrophysics, ionospheric physics, atmospheric physics, climatology, meteorology, oceanography, planetary physics and solar physics. SPAN users have access to space and Earth science databases, mission planning and information systems, and computational facilities for the purposes of facilitating correlative space data exchange, data analysis and space research. For example, the National Space Science Data Center (NSSDC), which manages the network, provides facilities on SPAN such as the Network Information Center (SPAN NIC). SPAN has interconnections with several national and international networks, such as HEPNET and TEXNET, forming a transparent DECnet network. The combined total number of computers now reachable over these networks is about 2000. In addition, SPAN supports full function capabilities over the international public packet-switched networks (e.g. TELENET) and has mail gateways to ARPANET, BITNET and JANET.
Summers, Derek; Chen, Gong; Reese, Bryan; Hutchinson, Trent; Liesching, Marcus; Ying, Hai; Dover, Russell
To minimize potential wafer yield loss due to mask defects, most wafer fabs implement some form of reticle inspection system to monitor photomask quality in high-volume wafer manufacturing environments. Traditionally, experienced operators review reticle defects found by an inspection tool and then manually classify each defect as 'pass, warn, or fail' based on its size and location. However, when reticle defects are suspected of causing repeating defects on a completed wafer, potential defects on all associated reticles must be searched manually, layer by layer, to identify the reticle responsible for the yield loss. This 'problem reticle' search is a tedious and time-consuming task and may cause extended manufacturing line-down situations. Oftentimes, process engineers and other team members must manually work through several reticle inspection reports to determine whether yield loss can be tied to a specific layer. Because of the detailed nature of this work, calculation errors can occur, resulting in an incorrect root-cause analysis. These delays waste valuable resources that could be spent on more productive activities. This paper examines an automated software solution for converting KLA-Tencor reticle inspection defect maps into a format compatible with KLA-Tencor's Klarity Defect® data analysis database. The objective is to use the graphical charting capabilities of Klarity Defect to reveal a clearer understanding of defect trends for individual reticle layers or entire mask sets. Automated analysis features include reticle defect count trend analysis and, potentially, stacking reticle defect maps for signature analysis against wafer inspection defect data. Other possible benefits include optimizing reticle inspection sample plans in support of "lean manufacturing" initiatives for wafer fabs.
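The manual triage step the paper aims to automate can be summarized in a few lines of rule-based code. The sketch below is hypothetical: the defect-record fields, size thresholds, and critical-region geometry are illustrative placeholders, not KLA-Tencor's actual file format or classification criteria.

```python
from dataclasses import dataclass

@dataclass
class ReticleDefect:
    x_um: float     # defect position on the reticle, micrometres
    y_um: float
    size_um: float  # measured defect size

def classify(d, critical_region, warn_size=0.5, fail_size=1.0):
    """Return 'pass', 'warn', or 'fail' from defect size and location."""
    x0, y0, x1, y1 = critical_region  # bounding box of printable pattern
    inside = x0 <= d.x_um <= x1 and y0 <= d.y_um <= y1
    if inside and d.size_um >= fail_size:
        return "fail"
    if inside and d.size_um >= warn_size:
        return "warn"
    return "pass"

defects = [ReticleDefect(10.0, 12.0, 1.3), ReticleDefect(10.5, 40.0, 0.2)]
for d in defects:
    print(classify(d, critical_region=(0.0, 0.0, 100.0, 30.0)))
```

Automating a rule of this kind and storing the results in a shared database is what enables the trend charts and stacked-map signature analysis described above.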
Frances Bernadette C. De Ocampo
A Signature-based Intrusion Detection System (IDS) helps maintain the integrity of data in a network-controlled environment. Unfortunately, this type of IDS depends on predetermined intrusion patterns that are manually created. If the signature database of a Signature-based IDS is not updated, network attacks simply pass through this type of IDS without being noticed. To avoid this, an Anomaly-based IDS is used to countercheck whether network traffic not detected by the Signature-based IDS is truly malicious. In doing so, the Anomaly-based IDS may produce large numbers of logs containing numerous network attacks, many of which could be false positives. This is why the Anomaly-based IDS is not perfect: it readily alarms the system that traffic is an attack simply because the traffic is not in its baseline. To resolve the problem between these two IDSs, the goal is to correlate the logs of the Anomaly-based IDS with the captured packets in order to determine whether the traffic is really malicious. With the supervision of a security expert, the malicious network traffic is verified as such. Using machine learning, the researchers can identify which algorithm is best at classifying whether given network traffic is really malicious. Signatures are then created automatically from the detected malicious traffic.
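The classification step can be prototyped in a few lines. In the sketch below, scikit-learn's RandomForestClassifier stands in for whichever algorithm the comparison selects, and the three numeric features and synthetic labels are placeholders for expert-verified traffic logs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Toy stand-in for verified logs: rows = flows, columns = features such as
# packet count, byte count, and duration; label 1 = verified malicious.
X = rng.random((500, 3))
y = (X[:, 0] + X[:, 1] > 1.2).astype(int)  # synthetic labelling rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))

# Flows the model confirms as malicious become candidates for new
# signatures to be added to the Signature-based IDS.
```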
Lim, S L; Quercia, D.; Finkelstein, A.
Projects often fail because they overlook stakeholders. Unfortunately, existing stakeholder analysis tools only capture stakeholders' information, relying on experts to manually identify them. StakeSource is a web-based tool that automates stakeholder analysis. It "crowdsources" the stakeholders themselves for recommendations about other stakeholders and aggregates their answers using social network analysis.
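A plausible reading of that aggregation step is sketched below: recommendations form a directed graph, and a centrality measure ranks the stakeholders. The use of networkx and PageRank is our assumption; the paper states only that answers are aggregated using social network analysis.

```python
import networkx as nx

# (recommender, recommended) pairs; illustrative data only.
recommendations = [
    ("alice", "bob"), ("alice", "carol"),
    ("bob", "carol"), ("dave", "carol"), ("carol", "alice"),
]
G = nx.DiGraph(recommendations)

# Rank stakeholders by PageRank over the recommendation graph.
for stakeholder, score in sorted(nx.pagerank(G).items(),
                                 key=lambda kv: -kv[1]):
    print(f"{stakeholder}: {score:.3f}")
```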
The NET-2 Network Analysis Program is a general-purpose digital computer program which solves the nonlinear time-domain response and the linearized small-signal frequency-domain response of an arbitrary network of interconnected components. NET-2 is capable of handling a variety of components and has been applied to problems in several engineering fields, including electronic circuit design and analysis, missile flight simulation, control systems, heat flow, fluid flow, mechanical systems, structural dynamics, digital logic, communications network design, solid-state device physics, fluidic systems, and nuclear vulnerability to blast, thermal, gamma radiation, neutron damage, and EMP effects. Network components may be selected from a repertoire of built-in models or constructed by the user through appropriate combinations of mathematical, empirical, and topological functions. Higher-level components may be defined by subnetworks composed of any combination of user-defined components and built-in models. The program thus provides a modeling capability to represent and intermix system components on many levels, e.g., from hole and electron spatial charge distributions in solid-state devices, through discrete and integrated electronic components, to functional system blocks. NET-2 is capable of simultaneous computation in both the time and frequency domains, and has statistical and optimization capability. Network topology may be controlled as a function of the network solution.
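As a toy illustration of the kind of computation such a program performs (not NET-2 itself), the sketch below integrates the nonlinear time-domain response of a one-node RC network with a diode clamp using forward Euler; all component values are arbitrary placeholders.

```python
import math

R, C = 1e3, 1e-6           # resistor (ohm), capacitor (farad)
I_S, V_T = 1e-12, 0.02585  # diode saturation current, thermal voltage
v, dt = 0.0, 1e-7          # node voltage, integration time step

for step in range(20_000):                          # 2 ms of simulated time
    t = step * dt
    v_in = 5.0 * math.sin(2 * math.pi * 1e3 * t)    # 1 kHz source
    i_r = (v_in - v) / R                            # current into the node
    i_d = I_S * (math.exp(min(v / V_T, 40.0)) - 1)  # diode clamps the node
    v += dt * (i_r - i_d) / C                       # dv/dt = i_C / C

print(f"node voltage after 2 ms: {v:.3f} V")
```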
Luck, Randall L.; Tjon-Fo-Sang, Robert; Mango, Laurie; Recht, Joel R.; Lin, Eunice; Knapp, James
The Pap smear is the universally accepted test used for cervical cancer screening. In the United States alone, about 50 to 70 million of these tests are done annually. Every one of them is done manually by a cytotechnologist looking at cells on a glass slide under a microscope. This paper describes PAPNET, an automated microscope system that combines a high-speed image processor and a neural network processor. The image processor performs an algorithmic primary screen of each image; the neural network performs a non-algorithmic secondary classification of candidate cells. The final output of the system is not a diagnosis but a display of suspicious cells from which a decision about the status of the case can be made.
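The two-stage architecture is easy to mock up. In the sketch below, a simple feature threshold stands in for the algorithmic primary screen and scikit-learn's LogisticRegression stands in for the neural network; the features, thresholds, and synthetic labels are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Candidate cells as (area, mean_intensity) feature rows; synthetic data.
cells = rng.random((1000, 2))
labels = (cells[:, 0] * cells[:, 1] > 0.35).astype(int)  # synthetic labels

# Stage 1: fast algorithmic screen keeps only plausibly abnormal cells.
candidates = cells[cells[:, 0] > 0.4]

# Stage 2: learned classifier scores the surviving candidates.
model = LogisticRegression().fit(cells, labels)
scores = model.predict_proba(candidates)[:, 1]

# Output is a gallery of suspicious cells for human review, not a diagnosis.
top = candidates[np.argsort(scores)[::-1][:16]]
print(f"{len(top)} cells queued for cytotechnologist review")
```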
Pyne, Saumyadipta; Hu, Xinli; Wang, Kui; Rossin, Elizabeth; Lin, Tsung-I.; Maier, Lisa; Baecher-Allan, Clare; McLachlan, Geoffrey; Tamayo, Pablo; Hafler, David; de Jager, Philip; Mesirov, Jill
Flow cytometry is widely used for single-cell interrogation of surface and intracellular protein expression by measuring the fluorescence intensity of fluorophore-conjugated reagents. We focus on the recently developed procedure of Pyne et al. (2009, Proceedings of the National Academy of Sciences USA 106, 8519-8524) for automated high-dimensional flow cytometric analysis called FLAME (FLow analysis with Automated Multivariate Estimation). It introduced novel finite mixture models of heavy-tailed and asymmetric distributions to identify and model cell populations in a flow cytometric sample. This approach robustly addresses the complexities of flow data without the need for transformation or projection to lower dimensions. It also addresses the critical task of matching cell populations across samples, which enables downstream analysis. It thus facilitates the application of flow cytometry to new biological and clinical problems. To facilitate pipelining with standard bioinformatic applications such as high-dimensional visualization, subject classification or outcome prediction, FLAME has been incorporated into the GenePattern package of the Broad Institute. Thereby, analysis of flow data can be approached similarly to other genomic platforms. We also consider new work that proposes a rigorous and robust solution to the registration problem via a multi-level approach that models and registers cell populations simultaneously across a cohort of high-dimensional flow samples. This new approach, called JCM (Joint Clustering and Matching), enables direct and rigorous comparisons across different time points or phenotypes in a complex biological study, as well as classification of new patient samples in a more clinical setting.
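The core mixture-model idea can be illustrated compactly. The sketch below uses scikit-learn's GaussianMixture as a simplified stand-in for FLAME's heavy-tailed, asymmetric component distributions (which scikit-learn does not provide); the two synthetic "marker" populations are placeholders for real flow events.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic two-marker sample: two cell populations with distinct means.
pop_a = rng.normal([2.0, 5.0], 0.4, size=(3000, 2))
pop_b = rng.normal([6.0, 1.5], 0.6, size=(2000, 2))
events = np.vstack([pop_a, pop_b])

gmm = GaussianMixture(n_components=2, random_state=0).fit(events)
population = gmm.predict(events)  # mixture-component label per event
print("events per population:", np.bincount(population))
print("estimated population means:\n", gmm.means_)
```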
Imani, Elaheh; Pourreza, Hamid-Reza; Banaee, Touka
Diabetic retinopathy is the major cause of blindness in the world. It has been shown that early diagnosis, achieved through regular screening and followed by timely treatment, can play a major role in preventing visual loss and blindness. Automation of this process can significantly reduce the workload of ophthalmologists and alleviate inter- and intra-observer variability. This paper presents a fully automated diabetic retinopathy screening system with the ability to assess retinal image quality. The novelty of the proposed method lies in the use of the Morphological Component Analysis (MCA) algorithm to discriminate between normal and pathological retinal structures. To this end, a pre-screening algorithm first assesses the quality of each retinal image. If the quality is not satisfactory, the image is examined by an ophthalmologist and must be recaptured if necessary. Otherwise, the image is processed for diabetic retinopathy detection. In this stage, the normal and pathological structures of the retinal image are separated by the MCA algorithm. Finally, normal and abnormal retinal images are distinguished by statistical features of the retinal lesions. Our proposed system achieved 92.01% sensitivity and 95.45% specificity on the Messidor dataset, a remarkable result in comparison with previous work. PMID:25863517
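For reference, the reported sensitivity and specificity are simple confusion-matrix ratios; the counts below are arbitrary illustrations, not the Messidor results.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(tp=90, fn=10, tn=95, fp=5)
print(f"sensitivity {sens:.2%}, specificity {spec:.2%}")
```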
Sim, Dawn A; Keane, Pearse A; Tufail, Adnan; Egan, Catherine A; Aiello, Lloyd Paul; Silva, Paolo S
There will be an estimated 552 million persons with diabetes globally by the year 2030. Over half of these individuals will develop diabetic retinopathy, representing a nearly insurmountable burden for providing diabetes eye care. Telemedicine programmes have the capability to distribute quality eye care to virtually any location and address the lack of access to ophthalmic services. In most programmes, there is currently a heavy reliance on specially trained retinal image graders, a resource in short supply worldwide. These factors necessitate an image grading automation process to increase the speed of retinal image evaluation while maintaining accuracy and cost effectiveness. Several automatic retinal image analysis systems designed for use in telemedicine have recently become commercially available. Such systems have the potential to substantially improve the manner by which diabetes eye care is delivered by providing automated real-time evaluation to expedite diagnosis and referral if required. Furthermore, integration with electronic medical records may allow a more accurate prognostication for individual patients and may provide predictive modelling of medical risk factors based on broad population data. PMID:25697773