Artificial Intelligence Techniques in Prolog introduces the reader to the use of well-established algorithmic techniques in the field of artificial intelligence (AI), with Prolog as the implementation language. The techniques considered cover general areas such as search, rule-based systems, and truth maintenance, as well as constraint satisfaction and uncertainty management. Specific application domains such as temporal reasoning, machine learning, and natural language are also discussed. Comprised of 10 chapters, this book begins with an overview of Prolog, paying particular attention to Prol
Develops insights into solving complex problems in engineering, the biomedical sciences, social science and economics based on artificial intelligence. The problems studied include interstate conflict, credit scoring, breast cancer diagnosis, condition monitoring, wine testing, image processing and optical character recognition. The author discusses and applies the concept of flexibly-bounded rationality, which prescribes that the bounds in Nobel Laureate Herbert Simon's bounded rationality theory are flexible due to advanced signal processing techniques, Moore's Law and artificial intellig
PID controllers are widely used in industry today thanks to useful properties such as simple tuning and robustness. While they are applicable to many control problems, they can perform poorly in some applications; control of a highly nonlinear system with a constrained manipulated variable is one example. The point of the paper is to combine the convenient qualities of conventional PID control with progressive techniques based on Artificial Intelligence, so that the proposed control method can cope even with highly nonlinear systems. Specifically, the paper describes a new method of discrete PID controller tuning: the discrete PID controller parameters are tuned online through the use of a genetic algorithm and a neural model of the controlled system, in order to control even highly nonlinear systems successfully. After the method description and some discussion, a control simulation is performed and compared with one chosen conventional control method.
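The tuning loop described in this abstract can be sketched in miniature. The sketch below is illustrative only: it replaces the paper's neural plant model with a toy first-order linear plant, and uses a simple real-coded genetic algorithm to search the three discrete PID gains against an absolute-error cost. All names, parameters and the plant itself are assumptions, not the authors' implementation.

```python
import random

def simulate(kp, ki, kd, steps=200, dt=0.05):
    """Run a discrete PID loop on a toy plant dy/dt = -y + u; return integral |error|."""
    y, integ, prev_err, cost = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        err = 1.0 - y                      # unit step setpoint
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        u = max(-5.0, min(5.0, u))         # constrained manipulated variable
        y += dt * (-y + u)                 # forward-Euler plant update
        prev_err = err
        cost += abs(err) * dt
    return cost

def ga_tune(pop_size=30, gens=40, seed=1):
    """Evolve (kp, ki, kd): keep the better half, breed children by blend + mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 10) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: simulate(*g))
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.3) for x, y in zip(a, b)]
            children.append([max(0.0, g) for g in child])
        pop = elite + children
    return min(pop, key=lambda g: simulate(*g))
```

An online variant would re-run the search each control interval against the neural plant model; here a single offline run illustrates the idea.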
Sonnenwald, Diane H.
A description is given of UIMS (User Interface Management System), a system using a variety of artificial intelligence techniques to build knowledge-based user interfaces combining functionality and information from a variety of computer systems that maintain, test, and configure customer telephone and data networks. Three artificial intelligence (AI) techniques used in UIMS are discussed, namely frame representation, object-oriented programming languages, and rule-based systems. The UIMS architecture is presented, and the structure of the UIMS is explained in terms of the AI techniques.
Alenoghena, C. O.; Emagbetere, J. O.; Aibinu, A. M.
The increase of base transceiver stations (BTS) in most urban areas can be traced to the drive by network providers to meet demand for coverage and capacity. In traditional network planning, the final decision on BTS placement is taken by a team of radio planners; this decision is not foolproof against regulatory requirements. In this paper, an intelligence-based algorithm for optimal BTS site placement is proposed. The proposed technique takes neighbour and regulatory considerations into account objectively while determining the cell site, leading to a quantitatively unbiased decision-making process in BTS placement. Experimental data for a 2 km by 3 km territory were simulated to test the new algorithm; the results obtained show a 100% performance of the neighbour-constrained algorithm in BTS placement optimization. Results on the application of a GA with a neighbourhood constraint indicate that the choice of location can be unbiased and that optimization of facility placement for network design can be carried out.
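A GA with a neighbourhood constraint of the kind described can be sketched as follows. Everything concrete here is an assumption for illustration: the demand grid, the coverage radius, the minimum site separation standing in for the neighbour/regulatory constraint, and the penalty weight are invented, not taken from the paper.

```python
import random, math

AREA_W, AREA_H = 2.0, 3.0        # 2 km x 3 km territory
MIN_SEP = 0.8                    # assumed minimum spacing between sites (km)
CELL_RADIUS = 0.9                # assumed nominal coverage radius (km)
DEMAND = [(x / 4, y / 4) for x in range(9) for y in range(13)]  # demand grid

def fitness(sites):
    """Coverage of demand points, penalised for sites violating the neighbour constraint."""
    covered = sum(
        1 for p in DEMAND
        if any(math.dist(p, s) <= CELL_RADIUS for s in sites)
    )
    penalty = sum(
        1 for i, a in enumerate(sites) for b in sites[i + 1:]
        if math.dist(a, b) < MIN_SEP
    )
    return covered - 50 * penalty

def ga_place(n_sites=4, pop_size=40, gens=60, seed=7):
    """Evolve a set of site coordinates; mutation resamples a site uniformly in the area."""
    rng = random.Random(seed)
    rand_site = lambda: (rng.uniform(0, AREA_W), rng.uniform(0, AREA_H))
    pop = [[rand_site() for _ in range(n_sites)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        pop = elite + [
            [s if rng.random() < 0.8 else rand_site() for s in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)
```

The penalty term is what makes the search "neighbour-constrained": a layout that covers well but packs sites too closely scores worse than a slightly sparser, compliant one.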
Hunt, Earl B
Artificial Intelligence provides information pertinent to the fundamental aspects of artificial intelligence. This book presents the basic mathematical and computational approaches to problems in the artificial intelligence field. Organized into four parts encompassing 16 chapters, this book begins with an overview of the various fields of artificial intelligence. This text then attempts to connect artificial intelligence problems to some of the notions of computability and abstract computing devices. Other chapters consider the general notion of computability, with focus on the interaction bet
Irina Maria Terfaloaga
A frequent problem in numerical analysis is solving systems of equations. This problem has generated great interest over time among mathematicians and computer scientists, as evidenced by the large number of numerical methods developed. Besides the classical numerical methods, in recent years methods inspired by techniques from artificial intelligence have been proposed; hybrid methods have also been proposed over time [15, 19]. The goal of this study is to survey methods inspired by artificial intelligence for solving systems of equations
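One common way such AI-inspired methods work is to recast the system as minimizing its squared residual, then apply a population-based optimizer. A minimal particle swarm optimization sketch (the example system, coefficients and PSO parameters are all assumptions for illustration, not taken from the survey):

```python
import random

def residual(v):
    """Squared residual of the system x + y = 3, x*y = 2 (roots (1,2) and (2,1))."""
    x, y = v
    return (x + y - 3) ** 2 + (x * y - 2) ** 2

def pso_solve(f, dim=2, n=30, iters=200, seed=3):
    """Standard PSO: inertia plus attraction to personal and global bests."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest
```

Unlike Newton-type methods, this needs no derivatives and no good starting point, which is precisely the appeal of the AI-inspired family, at the cost of many more function evaluations.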
Waltz, David L.
Describes kinds of results achieved by computer programs in artificial intelligence. Topics discussed include heuristic searches, artificial intelligence/psychology, planning program, backward chaining, learning (focusing on Winograd's blocks to explore learning strategies), concept learning, constraint propagation, language understanding…
Mishra, D.; Goyal, P.
Urban air pollution forecasting has emerged as an acute problem in recent years because of severe environmental degradation due to the increase in harmful air pollutants in the ambient atmosphere. In this study, several statistical as well as artificial intelligence techniques are used for forecasting and analysis of air pollution over the Delhi urban area: principal component analysis (PCA), multiple linear regression (MLR) and artificial neural networks (ANN). Their forecasts are in good agreement with the concentrations observed by the Central Pollution Control Board (CPCB) at different locations in Delhi. However, such methods suffer from limited accuracy: they are unable to predict the extreme points, i.e. the pollution maxima and minima cannot be determined using these approaches, and they are an inefficient approach for better output forecasting. With the advancement in technology and research, an alternative to these traditional methods has been proposed: the coupling of statistical techniques with artificial intelligence (AI) can be used for forecasting purposes. The coupling of PCA, ANN and fuzzy logic is used here for forecasting air pollutants over the Delhi urban area. The statistical measures, e.g. correlation coefficient (R), normalized mean square error (NMSE), fractional bias (FB) and index of agreement (IOA), of the proposed model are in better agreement than those of all the other models. Hence, the coupling of statistical and artificial intelligence techniques can be used for forecasting air pollutants over an urban area.
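The ANN component of such a coupling can be illustrated with a minimal one-hidden-layer network trained by stochastic gradient descent. This is a sketch on synthetic data standing in for pollutant concentrations; it is not the CPCB data, the authors' architecture, or their training scheme.

```python
import math, random

rng = random.Random(0)

# synthetic "concentration" as a nonlinear function of two standardized predictors
data = [((t, w), math.sin(t) + 0.5 * w)
        for t, w in ((rng.uniform(0, 3), rng.uniform(0, 1)) for _ in range(80))]

H = 6                                               # hidden units (assumed)
w1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [rng.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    """tanh hidden layer, linear output; returns (output, hidden activations)."""
    h = [math.tanh(sum(wi * xi for wi, xi in zip(w1[j], x)) + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

def mse():
    return sum((forward(x)[0] - y) ** 2 for x, y in data) / len(data)

def train(epochs=300, lr=0.05):
    """Plain SGD backpropagation over the data set."""
    global b2
    for _ in range(epochs):
        for x, y in data:
            out, h = forward(x)
            err = out - y
            for j in range(H):
                grad_h = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
                w2[j] -= lr * err * h[j]
                b1[j] -= lr * grad_h
                for i in range(2):
                    w1[j][i] -= lr * grad_h * x[i]
            b2 -= lr * err
```

In the coupled scheme the abstract describes, PCA would first compress correlated predictors into a few components, and those components (optionally fuzzified) would replace the raw inputs fed to such a network.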
Bauer, Frank H. (Technical Monitor); Dufrene, Warren R., Jr.
This paper describes the development of an application of Artificial Intelligence for Unmanned Aerial Vehicle (UAV) control. The project was done as part of the requirements for a class in Artificial Intelligence (AI) at Nova Southeastern University and as an adjunct to a project at NASA Goddard Space Flight Center's Wallops Flight Facility for a resilient, robust, and intelligent UAV flight control system. A method is outlined which allows a base-level application for applying an AI method, Fuzzy Logic, to aspects of Control Logic for UAV flight. One element of UAV flight, automated altitude hold, has been implemented and preliminary results displayed. A low-cost approach was taken using freeware, GNU software, and demo programs. The focus of this research has been to outline some of the AI techniques used for UAV flight control and to discuss some of the tools used to apply AI techniques. The intent is to succeed with the implementation of applying AI techniques to actually control different aspects of the flight of a UAV.
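A fuzzy altitude-hold rule base of the kind this describes can be sketched with three rules. The membership shapes, breakpoints and climb-rate consequents below are invented for illustration; the paper's actual rule base is not specified in the abstract.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_climb_rate(alt_error):
    """Map altitude error (m, target minus actual) to a commanded climb rate (m/s)."""
    neg  = tri(alt_error, -40, -20, 0)    # "too high"
    zero = tri(alt_error, -20, 0, 20)     # "about right"
    pos  = tri(alt_error, 0, 20, 40)      # "too low"
    if alt_error <= -20: neg = 1.0        # saturate the shoulder sets
    if alt_error >= 20:  pos = 1.0
    # rule consequents: descend / hold / climb, combined by weighted average
    rules = [(neg, -2.0), (zero, 0.0), (pos, 2.0)]
    total = sum(w for w, _ in rules)
    return sum(w * v for w, v in rules) / total if total else 0.0
```

Because overlapping sets fire simultaneously, the output varies smoothly with the error, which is the property that makes fuzzy control attractive for a base-level altitude hold.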
Henke, Andrea L.; Stottler, Richard H.
Planning and scheduling of NASA Space Shuttle missions is a complex, labor-intensive process requiring the expertise of experienced mission planners. We have developed a planning and scheduling system using combinations of artificial intelligence knowledge representations and planning techniques to capture mission planning knowledge and automate the multi-mission planning process. Our integrated object oriented and rule-based approach reduces planning time by orders of magnitude and provides planners with the flexibility to easily modify planning knowledge and constraints without requiring programming expertise.
Manna, Claudio; Nanni, Loris; Lumini, Alessandra; Pappalardo, Sebastiana
One of the most relevant aspects of assisted reproduction technology is the possibility of characterizing and identifying the most viable oocytes or embryos. In most cases, embryologists select them by visual examination and their evaluation is totally subjective. Recently, due to the rapid growth in the capacity to extract texture descriptors from a given image, a growing interest has been shown in the use of artificial intelligence methods for embryo or oocyte scoring/selection in IVF programmes. This work concentrates its efforts on the possible prediction of the quality of embryos and oocytes, starting from their images, in order to improve the performance of assisted reproduction technology. The artificial intelligence system proposed in this work is based on a set of Levenberg-Marquardt neural networks trained using textural descriptors (local binary patterns). The proposed system was tested on two data sets of 269 oocytes and 269 corresponding embryos from 104 women and compared with other machine learning methods already proposed in the past for similar classification problems. Although the results are only preliminary, they show an interesting classification performance. This technique may be of particular interest in those countries where legislation restricts embryo selection.
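The local binary pattern descriptor mentioned above has a simple core: each pixel is encoded by thresholding its 8 neighbours against it, and the codes are pooled into a histogram that serves as the texture feature vector. A minimal sketch (basic 8-neighbour LBP on a grayscale image as nested lists; the paper's exact LBP variant and radius are not stated in the abstract):

```python
def lbp_histogram(img):
    """Compute 8-neighbour local binary pattern codes over interior pixels,
    pooled into a 256-bin histogram (the texture feature vector)."""
    hist = [0] * 256
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            center = img[r][c]
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if img[r + dr][c + dc] >= center:   # threshold neighbour vs. center
                    code |= 1 << bit
            hist[code] += 1
    return hist
```

In the system described, histograms like this one, computed from oocyte or embryo images, would be the inputs to the Levenberg-Marquardt-trained neural networks.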
Ennals, J R
Artificial Intelligence: State of the Art Report is a two-part report consisting of the invited papers and the analysis. The editor first gives an introduction to the invited papers before presenting each paper and the analysis, and then concludes with the list of references related to the study. The invited papers explore the various aspects of artificial intelligence. The analysis part assesses the major advances in artificial intelligence and provides a balanced analysis of the state of the art in this field. The Bibliography compiles the most important published material on the subject of
The primary focus of this study is the implementation of an Artificial Intelligence (AI) technique for developing an inverse kinematics solution for the Raven-II surgical research robot. First, the kinematic model of the Raven-II robot was analysed along with the proposed analytical solution for the inverse kinematics problem. Next, an Artificial Neural Network (ANN) technique was implemented; the training data for it were carefully selected with manipulability constraints in mind. Finally, the results were verified using elliptical trajectories. The originally proposed analytical solution was found to be computationally inefficient, gave multiple solutions, and its existence necessitates the use of the standard Raven-II tool. The solution devised using the ANN technique gave a single solution that was thirteen times faster than the original solution. Moreover, it is generic in nature and can be used for any type of tool. Thus, a novel solution for solving the inverse kinematics problem of the Raven-II surgical robot was formulated and confirmed.
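The "training data selected with manipulability constraints in mind" step can be illustrated on a much simpler mechanism. The sketch below uses a planar 2-link arm as a stand-in (the Raven-II has many more degrees of freedom): joint angles are sampled, poses near kinematic singularities are filtered out by a manipulability threshold, and the surviving (pose, angles) pairs would form the ANN's training set. Link lengths and the threshold are assumptions.

```python
import math, random

L1, L2 = 1.0, 0.8   # link lengths (assumed, metres)

def forward(t1, t2):
    """Forward kinematics of a planar 2-link arm: joint angles -> end-effector (x, y)."""
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

def manipulability(t1, t2):
    """|det J| for the 2-link planar arm reduces to L1 * L2 * |sin(t2)|."""
    return L1 * L2 * abs(math.sin(t2))

def training_set(n=500, min_manip=0.1, seed=5):
    """Sample joint angles, keep only well-conditioned poses, emit (pose, angles) pairs."""
    rng = random.Random(seed)
    data = []
    while len(data) < n:
        t1 = rng.uniform(-math.pi, math.pi)
        t2 = rng.uniform(-math.pi, math.pi)
        if manipulability(t1, t2) >= min_manip:   # discard near-singular poses
            data.append((forward(t1, t2), (t1, t2)))
    return data
```

Filtering this way keeps the pose-to-angles mapping well behaved in the training region, which is what lets a network regress a single, fast inverse solution where the analytical approach returned multiple ones.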
'If AI is outside your field, or you know something of the subject and would like to know more, then Artificial Intelligence: The Basics is a brilliant primer.' - Nick Smith, Engineering and Technology Magazine, November 2011. Artificial Intelligence: The Basics is a concise and cutting-edge introduction to the fast-moving world of AI. The author Kevin Warwick, a pioneer in the field, examines issues of what it means to be man or machine and looks at advances in robotics which have blurred the boundaries. Topics covered include: how intelligence can be defined; whether machines can 'think'; sensory
Mahmoud H. Elkazaz
Future smart grids will require an observable, controllable and flexible network architecture for reliable and efficient energy delivery. The use of artificial intelligence and advanced communication technologies is essential in building a fully automated system. This paper introduces a new technique for online optimal operation of distributed generation (DG) resources, i.e. a hybrid fuel cell (FC) and photovoltaic (PV) system for residential applications. The proposed technique aims to minimize the total daily operating cost of a group of residential homes by managing the operation of embedded DG units remotely from a control centre. The target is formulated as an objective function that is solved using a genetic algorithm (GA) optimization technique. The optimal settings of the DG units obtained from the optimization process are sent to each DG unit through a fully automated system. The results show that the proposed technique succeeded in finding the optimal operating points of the DGs, which directly affect the total operating cost of the entire system.
Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.
In this study, an artificial intelligence technique is proposed for the parameter tuning of a PVD process. Due to its previous adaptation to similar optimization problems, the genetic algorithm (GA) was selected to optimize the parameter tuning of the RF magnetron sputtering process. The most optimized parameter combination obtained from the GA result is expected to produce the desired zinc oxide (ZnO) thin film from the sputtering process. The parameters involved in this study were RF power, deposition time and substrate temperature. The algorithm was tested on 25 datasets of parameter combinations. The results of the computational experiment were then compared with the actual results of the laboratory experiment. Based on this comparison, GA proved reliable for optimizing the parameter combination before the parameter tuning was done on the RF magnetron sputtering machine. To verify the GA results, the algorithm was also compared with other well-known optimization algorithms, namely particle swarm optimization (PSO) and the gravitational search algorithm (GSA). The results showed that GA was reliable in solving this RF magnetron sputtering parameter tuning problem, showing better accuracy in the optimization based on the fitness evaluation.
Usman, Abraham U.; Okereke, Okpo U.; Omizegba, Elijah E.
The prediction of propagation loss is a practical non-linear function approximation problem which linear regression or auto-regression models are limited in their ability to handle. However, computational intelligence techniques such as artificial neural networks (ANNs) and adaptive neuro-fuzzy inference systems (ANFISs) have been shown to have great ability to handle non-linear function approximation and prediction problems. In this study, a multiple layer perceptron neural network (MLP-NN), a radial basis function neural network (RBF-NN) and an ANFIS network were trained using actual signal strength measurements taken in certain suburban areas of the Bauchi metropolis, Nigeria. The trained networks were then used to predict propagation losses in the stated areas under differing conditions, and the predictions were compared with the prediction accuracy of the popular Hata model. It was observed that the ANFIS model gave a better fit in all cases, having higher R2 values in each case, and on average is more robust than the MLP and RBF models as it generalises better to different data.
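The Hata model used as the comparison baseline has a standard closed form. A sketch of the urban variant, using the small/medium-city mobile antenna correction (valid roughly for 150 to 1500 MHz; the study's exact frequency band and parameters are not given in the abstract):

```python
import math

def hata_urban_loss(f_mhz, h_base, h_mobile, d_km):
    """Okumura-Hata median path loss (dB) for urban areas.

    f_mhz: carrier frequency (MHz), h_base/h_mobile: antenna heights (m),
    d_km: link distance (km). Small/medium-city mobile antenna correction.
    """
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile
            - (1.56 * math.log10(f_mhz) - 0.8))
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base) - a_hm
            + (44.9 - 6.55 * math.log10(h_base)) * math.log10(d_km))
```

The trained MLP, RBF and ANFIS models in the study play the role of a data-driven replacement for this formula, which is why comparing their predictions against it is a natural benchmark.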
Lawrence, David R; Palacios-González, César; Harris, John
It seems natural to think that the same prudential and ethical reasons for mutual respect and tolerance that one has vis-à-vis other human persons would hold toward newly encountered paradigmatic but nonhuman biological persons. One also tends to think that they would have similar reasons for treating us humans as creatures that count morally in our own right. This line of thought transcends biological boundaries, namely with regard to artificially (super)intelligent persons, but is this a safe assumption? The issue concerns ultimate moral significance: the significance possessed by human persons, persons from other planets, and hypothetical nonorganic persons in the form of artificial intelligence (AI). This article investigates why our possible relations to AI persons could be more complicated than they first might appear, given that they might possess a radically different nature to us, to the point that civilized or peaceful coexistence in a determinate geographical space could be impossible to achieve.
Ramesh, A. N.; Kambhampati, C.; Monson, J. R. T.; Drew, P. J.
INTRODUCTION: Artificial intelligence is a branch of computer science capable of analysing complex medical data. Its potential to exploit meaningful relationships within a data set can be used in diagnosis, treatment and predicting outcome in many clinical scenarios. METHODS: Medline and internet searches were carried out using the keywords 'artificial intelligence' and 'neural networks (computer)'. Further references were obtained by cross-referencing from key articles. An overview of different artificial intelligence techniques is presented in this paper along with a review of important clinical applications. RESULTS: The proficiency of artificial intelligence techniques has been explored in almost every field of medicine. The artificial neural network was the most commonly used analytical tool, whilst other techniques such as fuzzy expert systems, evolutionary computation and hybrid intelligent systems have all been used in different clinical settings. DISCUSSION: Artificial intelligence techniques have the potential to be applied in almost every field of medicine. There is a need for further, appropriately designed clinical trials before these emergent techniques find application in the real clinical setting. PMID:15333167
Intrusion detection system (IDS) is regarded as the second line of defense against network anomalies and threats, and plays an important role in network security. There are many techniques used to design IDSs for specific scenarios and applications. Artificial intelligence techniques are widely used for threat detection. This paper presents a critical study of genetic algorithm, artificial immune, and artificial neural network (ANN) based IDS techniques used in wireless sensor netw...
Van Eenwyk, Jonathan; Agah, Arvin; Giangiacomo, Joseph; Cibis, Gerhard
Purpose To develop a low-cost automated video system to effectively screen children aged 6 months to 6 years for amblyogenic factors. Methods In 1994 one of the authors (G.C.) described video vision development assessment, a digitizable analog video-based system combining Brückner pupil red reflex imaging and eccentric photorefraction to screen young children for amblyogenic factors. The images were analyzed manually in that system. We automated the capture of digital video frames and pupil images and applied computer vision and artificial intelligence to analyze and interpret the results. The artificial intelligence systems were evaluated by a tenfold testing method. Results The best system was the decision tree learning approach, which had an accuracy of 77% compared to the “gold standard” specialist examination with a “refer/do not refer” decision. Criteria for referral were strabismus, including microtropia, and refractive errors and anisometropia considered to be amblyogenic. Eighty-two percent of strabismic individuals were correctly identified. High refractive errors were also correctly identified and referred 90% of the time, as was significant anisometropia. The program was less accurate in identifying more moderate refractive errors, below +5 and less than −7. Conclusions Although we are pursuing a variety of avenues to improve the accuracy of the automated analysis, the program in its present form provides acceptable cost benefits for detecting amblyogenic factors in children aged 6 months to 6 years. PMID:19277222
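The core operation inside a decision tree learner like the one that performed best here is choosing, at each node, the feature split that maximises information gain. A minimal single-level sketch over boolean features (illustrative only; not the authors' implementation or features):

```python
import math

def entropy(labels):
    """Shannon entropy of a list of 0/1 labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 0.0 if p in (0, 1) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def best_split(rows, labels, n_features):
    """Return (feature index, information gain) of the best single split."""
    base = entropy(labels)
    best = None
    for f in range(n_features):
        left  = [l for r, l in zip(rows, labels) if r[f]]
        right = [l for r, l in zip(rows, labels) if not r[f]]
        gain = (base
                - (len(left) / len(labels)) * entropy(left)
                - (len(right) / len(labels)) * entropy(right))
        if best is None or gain > best[1]:
            best = (f, gain)
    return best
```

A full learner applies this recursively to the two partitions; in the screening context, features would be quantities derived from the pupil images and the leaves would carry the refer/do-not-refer decision.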
Saracoglu, Ömer Galip; Altural, Hayriye
A low-cost optical sensor based on reflective color sensing is presented. Artificial neural network models are used to improve the color regeneration from the sensor signals. Analog voltages of the sensor are successfully converted to RGB colors. The artificial intelligence models presented in this work enable color regeneration from the analog outputs of the color sensor. In addition, inverse modeling supported by an intelligent technique enables the sensor probe to be used as a colorimetric sensor that relates color changes to analog voltages.
The discipline of Artificial Intelligence, in its quest for machine intelligence, showed great promise as long as its areas of application were limited to problems of a scientific and situation-neutral nature. The attempt to move beyond these problems to a full simulation of man's intelligence has faltered and slowed its progress, largely because of the inability of Artificial Intelligence to deal with human characteristics such as feelings, goals, and desires. This dissertation takes the position that an impasse has resulted because Artificial Intelligence has never been properly defined as a science: its objects and methods have never been identified. The following study undertakes to provide such a definition, i.e., the required ground for Artificial Intelligence. The procedure and methods employed in this study are based on Heidegger's philosophy and techniques of analysis as developed in Being and Time. Results of this study show that both the discipline of Artificial Intelligence and the concerns of Heidegger in Being and Time have the same object: fundamental ontology. The application of Heidegger's conclusions concerning fundamental ontology unites the various aspects of Artificial Intelligence and provides the articulation which shows the parts of this discipline and how they are related.
The use of Artificial Intelligence methods is becoming increasingly common in the modeling and forecasting of hydrological and water resource processes. In this study, the applicability of Adaptive Neuro-Fuzzy Inference System (ANFIS) and Artificial Neural Network (ANN) methods, Generalized Regression Neural Networks (GRNN) and Feed Forward Neural Networks (FFNN), and Auto-Regressive (AR) models for forecasting daily river flow is investigated, with the Seyhan River and Cine River chosen as case study areas. For the Seyhan River, the forecasting models are established using combinations of antecedent daily river flow records. For the Cine River, daily river flow and rainfall records are used in the input layer. For both stations, the data sets are divided into three subsets: training, testing and verification. The river flow forecasting models, having various input structures, are trained and tested to investigate the applicability of the ANFIS, ANN and AR methods. The results of all models for both training and testing are evaluated, and the best-fit input structures and methods for both stations are determined according to the performance evaluation criteria. Moreover, the best-fit forecasting models are verified on the verification set, which was not used in the training and testing processes, and compared according to the criteria. The results demonstrate that the ANFIS model is superior to the GRNN and FFNN forecasting models, and that ANFIS can be successfully applied to provide high accuracy and reliability for daily river flow forecasting.
Discusses the foundations of artificial intelligence as a science and the types of answers that may be given to the question, "What is intelligence?" The paradigms of artificial intelligence and general systems theory are compared. (Author/VT)
Korb, Kevin B
As the power of Bayesian techniques has become more fully realized, the field of artificial intelligence has embraced Bayesian methodology and integrated it to the point where an introduction to Bayesian techniques is now a core course in many computer science programs. Unlike other books on the subject, Bayesian Artificial Intelligence keeps mathematical detail to a minimum and covers a broad range of topics. The authors integrate Bayesian network technology and Bayesian network learning, and apply both to knowledge engineering. They emphasize understanding and intuition but also provide the algorithms and technical background needed for applications. Software, exercises, and solutions are available on the authors' website.
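At its smallest, the kind of Bayesian network reasoning such a book introduces is Bayes' rule applied on a two-node network (cause with a prior, observation with a conditional table). A minimal sketch with invented numbers, not drawn from the book:

```python
def posterior_disease(prior, sens, spec, positive=True):
    """P(disease | test result) for a two-node Bayesian network.

    prior: P(disease); sens: P(positive | disease); spec: P(negative | no disease).
    """
    if positive:
        num = sens * prior
        den = sens * prior + (1 - spec) * (1 - prior)
    else:
        num = (1 - sens) * prior
        den = (1 - sens) * prior + spec * (1 - prior)
    return num / den
```

Larger networks generalize exactly this computation: inference algorithms sum products of conditional probability tables over the unobserved variables rather than over a single cause.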
Leibbrandt, Richard; Yang, Dongqiang; Pfitzner, Darius; Powers, David; Mitchell, Pru; Hayman, Sarah; Eddy, Helen
This paper reports on a joint proof of concept project undertaken by researchers from the Flinders University Artificial Intelligence Laboratory in partnership with information managers from the Education Network Australia (edna) team at Education Services Australia to address the question of whether artificial intelligence techniques could be…
computer algorithms, there still appears to be a need for Artificial Intelligence techniques in the navigation area. The reason is that navigation, in... AD-A132 679, Artificial Intelligence in Space Platforms, Air Force Inst of Tech, Wright-Patterson AFB OH, School of Engineering, M. A. Wright, Dec 94. Preface: The purpose of this study was to analyze the feasibility of implementing Artificial Intelligence techniques to increase autonomy for
NÉSTOR DARÍO DUQUE
The aim of this article is to propose a planning model for the adaptation of virtual courses, based on artificial intelligence techniques, in particular using a multi-agent system (MAS) approach and AI planning methods. The design and implementation by means of a pedagogical MAS, together with the definition of a framework for specifying the adaptation strategy, make it possible to incorporate diverse pedagogical and technological approaches according to the points of view of the work team, resulting in a concrete implementation and installation. A novel pre-planner is incorporated that provides transparency and neutrality in the proposed model and also offers support for translating course elements into the specification of a planning problem. The last section presents the experimental platform SICAD+ (Sistema Inteligente de Cursos ADaptativos), which validates the proposed model through a multi-agent approach.
Cutts, Dannie E.; Widgren, Brian K.
A maximum return of science and products with a minimum expenditure of time and resources is a major goal of mission payload integration. A critical component then, in successful mission payload integration is the acquisition and analysis of experiment requirements from the principal investigator and payload element developer teams. One effort to use artificial intelligence techniques to improve the acquisition and analysis of experiment requirements within the payload integration process is described.
Wroblewski, David; Katrompas, Alexander M.; Parikh, Neel J.
A method and apparatus for optimizing the operation of a power generating plant using artificial intelligence techniques. One or more decisions D are determined for at least one consecutive time increment, where at least one of the decisions D is associated with a discrete variable for the operation of a power plant device in the power generating plant. In an illustrated embodiment, the power plant device is a soot cleaning device associated with a boiler.
There are close to 20,000 cataloged manmade objects in space, the large majority of which are not active, functioning satellites. These are tracked by phased array and mechanical radars and ground- and space-based optical telescopes, collectively known as the Space Surveillance Network (SSN). A better SSN schedule of observations could, using exactly the same legacy sensor resources, improve space catalog accuracy through more complementary tracking, provide better responsiveness to real-time changes, better track small debris in low earth orbit (LEO) through efficient use of applicable sensors, efficiently track deep space (DS) frequent-revisit objects, handle increased numbers of objects and new types of sensors, and take advantage of future improved communication and control to globally optimize the SSN schedule. We have developed a scheduling algorithm that takes as input the space catalog and the associated covariance matrices and produces a globally optimized schedule for each sensor site as to what objects to observe and when. This algorithm is able to schedule more observations with the same sensor resources and to make those observations more complementary, in terms of the precision with which each orbit metric is known, producing a satellite observation schedule that, when executed, minimizes the covariances across the entire space object catalog. If used operationally, the results would be significantly increased accuracy of the space catalog, with fewer lost objects, using the same set of sensor resources. This approach can also inherently trade off fewer high-priority tasks against more lower-priority tasks when there is benefit in doing so. The project has currently completed a prototyping and feasibility study, using open-source data on the SSN's sensors, which showed significant reductions in orbit metric covariances. The algorithm techniques and results will be discussed along with future directions for the research.
Ferreira, F.J.O. [Instituto de Engenharia Nuclear, Cidade Universitaria, Rio de Janeiro, CEP 21945-970, Caixa Postal 68550 (Brazil)], E-mail: firstname.lastname@example.org; Crispim, V.R.; Silva, A.X. [DNC/Poli, PEN COPPE CT, UFRJ Universidade Federal do Rio de Janeiro, CEP 21941-972, Caixa Postal 68509, Rio de Janeiro (Brazil)
In this study, the development of a methodology to detect illicit drugs and plastic explosives is described, with the objective of application in the realm of public security. To this end, non-destructive assay with neutrons was used, applying real-time neutron radiography together with computerized tomography. The system is endowed with automatic responses based upon the application of an artificial intelligence technique. In previous tests using real samples, the system proved capable of identifying 97% of the inspected materials.
Marvin T. Chan
This paper presents a car racing simulator game called Racer, in which the human player races a car against three game-controlled cars in a three-dimensional environment. The objective of the game is not to defeat the human player, but to provide the player with a challenging and enjoyable experience. To ensure that this objective can be accomplished, the game incorporates artificial intelligence (AI) techniques, which enable the cars to be controlled in a manner that mimics natural driving. The paper provides a brief history of AI techniques in games, presents the use of AI techniques in contemporary video games, and discusses the AI techniques that were implemented in the development of Racer. A comparison of the AI techniques implemented in the Unity platform with traditional AI search techniques is also included in the discussion.
Enhanced, more reliable, and better understood than in the past, artificial intelligence (AI) systems can make providing healthcare more accurate, affordable, accessible, consistent, and efficient. However, AI technologies have not been as well integrated into medicine as predicted. In order to succeed, medical and computational scientists must develop hybrid systems that can effectively and efficiently integrate the experience of medical care professionals with the capabilities of AI systems. After providing a general overview of artificial intelligence concepts, tools, and techniques, Medical Ap
Nilsson, Nils J
A classic introduction to artificial intelligence intended to bridge the gap between theory and practice, Principles of Artificial Intelligence describes fundamental AI ideas that underlie applications such as natural language processing, automatic programming, robotics, machine vision, automatic theorem proving, and intelligent data retrieval. Rather than focusing on the subject matter of the applications, the book is organized around general computational concepts involving the kinds of data structures used, the types of operations performed on the data structures, and the properties of th
Cascianelli, Silvia; Scialpi, Michele; Amici, Serena; Forini, Nevio; Minestrini, Matteo; Fravolini, Mario Luca; Sinzinger, Helmut; Schillaci, Orazio; Palumbo, Barbara
Artificial Intelligence (AI) is a very active Computer Science research field aiming to develop systems that mimic human intelligence, and it is helpful in many human activities, including Medicine. In this review we present some examples of the use of AI techniques, in particular automatic classifiers such as Artificial Neural Networks (ANN), Support Vector Machines (SVM), Classification Trees (ClT) and ensemble methods like Random Forests (RF), able to analyze findings obtained by positron emission tomography (PET) or single-photon emission computed tomography (SPECT) scans of patients with neurodegenerative diseases, in particular Alzheimer's disease. We also focus our attention on techniques applied to preprocess data and reduce their dimensionality via feature selection or projection into a more representative domain (Principal Component Analysis - PCA - and Partial Least Squares - PLS - are examples of such methods); this is a crucial step when dealing with medical data, since it is necessary to compress patient information and retain only what is most useful for discriminating subjects into normal and pathological classes. The main literature papers on the application of these techniques to classify patients with neurodegenerative diseases by extracting data from molecular imaging modalities are reported, showing that the increasing development of computer-aided diagnosis systems is a very promising contribution to the diagnostic process.
Cheeseman, P.; Gevarter, W.
This paper presents an introductory view of Artificial Intelligence (AI). In addition to defining AI, it discusses the foundations on which it rests, research in the field, and current and potential applications.
Hasiloglu, Abdulsamet; Aras, Ömür; Bayramoglu, Mahmut
Artificial neural networks and neuro-fuzzy inference systems are well-known artificial intelligence techniques used for black-box modelling of complex systems. In this study, feed-forward artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) are used for modelling the performance of a direct methanol fuel cell (DMFC). Current density (I), fuel cell temperature (T), methanol concentration (C), liquid flow-rate (q) and air flow-rate (Q) are selected as input variables to predict the cell voltage. Polarization curves are obtained for 35 different operating conditions according to a statistically designed experimental plan. In the modelling study, various subsets of input variables and various types of membership function are considered. A feed-forward architecture with one hidden layer is used in ANN modelling. The optimum performance is obtained with the input set (I, T, C, q) using twelve hidden neurons and a sigmoidal activation function. On the other hand, a first-order Sugeno inference system is applied in ANFIS modelling, and the optimum performance is obtained with the input set (I, T, C, q) using sixteen fuzzy rules and triangular membership functions. The test results show that the ANN model estimates the polarization curve of the DMFC more accurately than the ANFIS model.
Palaniappan, Rajkumar; Sundaraj, Kenneth; Sundaraj, Sebastian
Artificial intelligence (AI) has recently been established as an alternative method to many conventional methods. The implementation of AI techniques for respiratory sound analysis can assist medical professionals in the diagnosis of lung pathologies. This article highlights the importance of AI techniques in the implementation of computer-based respiratory sound analysis. Articles on computer-based respiratory sound analysis using AI techniques were identified by searches conducted on various electronic resources, such as the IEEE, Springer, Elsevier, PubMed, and ACM digital library databases. Brief descriptions of the types of respiratory sounds and their respective characteristics are provided. We then analyzed each of the previous studies to determine the specific respiratory sounds/pathology analyzed, the number of subjects, the signal processing method used, the AI techniques used, and the performance of the AI technique used in the analysis of respiratory sounds. A detailed description of each of these studies is provided. In conclusion, this article provides recommendations for further advancements in respiratory sound analysis.
Khan, Samee; Burczyński, Tadeusz
One of the most challenging issues in today's large-scale computational modeling and design is to effectively manage the complex distributed environments, such as computational clouds, grids, ad hoc, and P2P networks, operating under various types of users with evolving relationships fraught with uncertainties. In this context, the IT resources and services usually belong to different owners (institutions, enterprises, or individuals) and are managed by different administrators. Moreover, uncertainties are presented to the system at hand in various forms of information that are incomplete, imprecise, fragmentary, or overloading, which hinders the full and precise resolution of the evaluation criteria, their sequencing and selection, and the assignment of scores. Intelligent scalable systems enable flexible routing and charging, advanced user interactions, and the aggregation and sharing of geographically-distributed resources in modern large-scale systems. This book presents new ideas, theories, models...
Sacha, G M; Varona, P
During the last decade there has been increasing use of artificial intelligence tools in nanotechnology research. In this paper we review some of these efforts in the context of interpreting scanning probe microscopy, the study of biological nanosystems, the classification of material properties at the nanoscale, theoretical approaches and simulations in nanoscience, and generally in the design of nanodevices. Current trends and future perspectives in the development of nanocomputing hardware that can boost artificial-intelligence-based applications are also discussed. Convergence between artificial intelligence and nanotechnology can shape the path for many technological developments in the field of information sciences that will rely on new computer architectures and data representations, hybrid technologies that use biological entities and nanotechnological devices, bioengineering, neuroscience and a large variety of related disciplines.
Drigas, Athanasios S.; Argyri, Katerina; Vrettaros, John
Artificial intelligence applications in the educational field have become more and more popular during the last decade (1999-2009), and much relevant research has been conducted as a result. In this paper, we present the most interesting attempts to apply artificial intelligence methods such as fuzzy logic, neural networks, genetic programming and hybrid approaches such as neuro-fuzzy systems and genetic programming neural networks (GPNN) in student modeling. This latest research trend is a part of every Intelligent Tutoring System and aims at generating and updating a student model in order to adapt learning content to individual needs or to provide reliable assessment and feedback to students' answers. We briefly present the methods used in order to point out their qualities, and then survey the most representative studies of the decade of interest, after classifying them according to the principal aim they attempted to serve.
This paper surveys important aspects of Web Intelligence (WI) in the context of Artificial Intelligence in Education (AIED) research. WI explores the fundamental roles as well as practical impacts of Artificial Intelligence (AI) and advanced Information Technology (IT) on the next generation of Web-related products, systems, services, and…
Beskow, Samuel; de Mello, Carlos Rogério; Vargas, Marcelle M.; Corrêa, Leonardo de L.; Caldeira, Tamara L.; Durães, Matheus F.; de Aguiar, Marilton S.
Information on stream flows is essential for water resources management. The stream flow that is equaled or exceeded 90% of the time (Q90) is one of the most used low stream flow indicators in many countries, and its determination is made from the frequency analysis of stream flows over a historical series. However, the stream flow gauging network is generally not spatially dense enough to meet the demands of technicians, so the most plausible alternative is the use of hydrological regionalization. The objective of this study was to couple artificial intelligence (AI) techniques - K-means, Partitioning Around Medoids (PAM), K-harmonic means (KHM), Fuzzy C-means (FCM) and Genetic K-means (GKA) - with measures of low stream flow seasonality, to verify their potential to delineate hydrologically homogeneous regions for the regionalization of Q90. For the performance analysis of the proposed methodology, location attributes from 108 watersheds situated in southern Brazil, and attributes associated with their seasonality of low stream flows, were considered in this study. It was concluded that: (i) AI techniques have the potential to delineate hydrologically homogeneous regions in the context of Q90 in the study region, especially the FCM method, based on fuzzy logic, and GKA, based on genetic algorithms; (ii) the attributes related to seasonality of low stream flows added important information that increased the accuracy of the grouping; and (iii) the adjusted mathematical models have excellent performance and can be used to estimate Q90 in locations lacking monitoring.
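Of the clustering techniques listed above, plain K-means is the simplest to sketch as Lloyd's algorithm. The two synthetic "regions" and their seasonality-like attributes below are invented stand-ins for the 108-watershed data set:

```python
import random

random.seed(1)

def kmeans(points, init, iters=50):
    """Plain Lloyd's k-means on 2-D points; `init` supplies the starting centroids."""
    centroids = list(init)
    k = len(centroids)
    labels = [0] * len(points)
    sq = lambda p, c: (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        labels = [min(range(k), key=lambda c: sq(p, centroids[c])) for p in points]
        # update step: move each centroid to the mean of its members
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids, labels

# two synthetic groups of watersheds with distinct seasonality-like attributes
region_a = [(0.1 + 0.2 * random.random(), 0.1 + 0.2 * random.random()) for _ in range(20)]
region_b = [(0.8 + 0.2 * random.random(), 0.8 + 0.2 * random.random()) for _ in range(20)]
pts = region_a + region_b
# deterministic init: one seed from each end of the list
centroids, labels = kmeans(pts, init=[pts[0], pts[-1]])
print("centroids:", [(round(x, 2), round(y, 2)) for x, y in centroids])
```

The fuzzy (FCM) and genetic (GKA) variants the study favours differ only in replacing the hard assignment step with membership degrees, or the update step with a genetic search.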
Venkata Rami Reddy K
This paper presents a comprehensive review and performance investigation of a Neutral Point Clamped Converter (NPCC)-based Unified Power Quality Conditioner (UPQC) using Artificial Intelligence (AI) techniques, together with a novel application of diode-clamped multilevel inverters (DCMLI) of various levels, with an Anti Phase Opposition and Disposition (APOD) pulse width modulation (PWM) scheme, to the UPQC. Power quality problems have been a pressing issue since the beginning of high-voltage AC transmission systems. Hence, this article discusses mitigating PQ issues in high-voltage AC systems through a three-phase four-wire UPQC under non-linear loads. The emphasized PQ problems, such as voltage and current harmonics along with voltage sags and swells, are also discussed with improved performance. The paper also proposes to control the DCMLI-based UPQC through conventional control schemes; applying these control techniques brings the system performance on par with the standards, and the result is compared with the existing system. Simulation results based on MATLAB/Simulink are discussed in detail to support the concept developed in the paper.
Chang, Fi-John; Chang, Li-Chiu; Wang, Yu-Chung
This study proposes a systematic water allocation scheme that integrates system analysis with artificial intelligence techniques for reservoir operation, in consideration of the great hydrometeorological uncertainty, for mitigating drought impacts on the public and irrigation sectors. The AI techniques mainly include a genetic algorithm (GA) and an adaptive-network-based fuzzy inference system (ANFIS). We first derive evaluation diagrams through systematic interactive evaluations of long-term hydrological data to provide a clear simulation perspective of all possible drought conditions tagged with their corresponding water shortages; then search for the optimal reservoir operating histogram using the GA, based on given demands and hydrological conditions, which can serve as the optimal base of input-output training patterns for modelling; and finally build a suitable water allocation scheme by constructing an ANFIS model that learns the mechanism between designed inputs (water discount rates and hydrological conditions) and outputs (two scenarios: simulated and optimized water deficiency levels). The effectiveness of the proposed approach is tested on the operation of the Shihmen Reservoir in northern Taiwan for the first paddy crop in the study area, to assess the water allocation mechanism during drought periods. We demonstrate that the proposed water allocation scheme significantly helps water managers reliably determine a suitable discount rate on water supply for both irrigation and public sectors, and thus can reduce the drought risk and the compensation amounts induced by restrictions on agricultural water use.
This paper provides a brief historical introduction to the new field of artificial intelligence and describes some applications to psychiatry. It focuses on two successful programs: a model of paranoid processes and an expert system for the pharmacological management of depressive disorders. Finally, it reviews evidence in favor of computerized psychotherapy and offers speculations on the future development of research in this area.
Gersh, Mark A.
Information on artificial intelligence research in the Air Force Systems Command is given in viewgraph form. Specific research that is being conducted at the Rome Air Development Center, the Space Technology Center, the Human Resources Laboratory, the Armstrong Aerospace Medical Research Laboratory, the Armament Laboratory, and the Wright Research and Development Center is noted.
Since its publication, Essentials of Artificial Intelligence has been adopted at numerous universities and colleges offering introductory AI courses at the graduate and undergraduate levels. Based on the author's course at Stanford University, the book is an integrated, cohesive introduction to the field. The author has a fresh, entertaining writing style that combines clear presentations with humor and AI anecdotes. At the same time, as an active AI researcher, he presents the material authoritatively and with insight that reflects a contemporary, first hand
Halff, Henry M.
Surveys artificial intelligence and the development of computer-based tutors and speculates on the future of artificial intelligence in education. Includes discussion of the definitions of knowledge, expert systems (computer systems that solve tough technical problems), intelligent tutoring systems (ITS), and specific ITSs such as GUIDON, MYCIN,…
High ozone concentration is an important cause of air pollution, mainly due to its role in greenhouse gas emissions. Ozone is produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower atmosphere. Monitoring and controlling the quality of air in the urban environment is therefore very important for public health. However, air quality prediction is a highly complex and non-linear process, and several attributes usually have to be considered. Artificial intelligence (AI) techniques can be employed to monitor and evaluate the ozone concentration level. The aim of this study is to develop an adaptive neuro-fuzzy inference system (ANFIS) approach to determine the influence of peripheral factors on air quality and pollution, a growing problem due to the ozone level in Jeddah city. The ozone concentration level was considered as a factor for predicting Air Quality (AQ) under the given atmospheric conditions. Using the Air Quality Standards of Saudi Arabia, the ozone concentration level was modelled with factors such as nitrogen oxides (NOx), atmospheric pressure, temperature, and relative humidity. An ANFIS model was developed to observe the ozone concentration level, and the model performance was assessed on test data obtained from the monitoring stations established by the General Authority of Meteorology and Environment Protection of the Kingdom of Saudi Arabia. The outcomes of the ANFIS model were re-assessed with fuzzy quality charts using quality specification and control limits based on US-EPA air quality standards. The results of the present study show that the ANFIS model is a comprehensive approach for the estimation and assessment of the ozone level and a reliable approach for producing genuine outcomes.
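At the core of any ANFIS is a first-order Sugeno inference pass, whose structure can be shown in isolation: fuzzy memberships gate linear consequents, and the output is their weighted average. The rule base, membership breakpoints and consequent coefficients below are invented for illustration and are not the fitted Jeddah model:

```python
# first-order Sugeno inference with two rules over one input (a precursor level)
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def sugeno(x):
    # Rule 1: IF x is LOW  THEN y = 0.2*x + 1.0
    # Rule 2: IF x is HIGH THEN y = 1.5*x + 0.5
    w1 = tri(x, -1.0, 0.0, 1.0)  # firing strength of LOW
    w2 = tri(x, 0.0, 1.0, 2.0)   # firing strength of HIGH
    y1 = 0.2 * x + 1.0
    y2 = 1.5 * x + 0.5
    # normalized weighted average of the rule consequents
    return (w1 * y1 + w2 * y2) / (w1 + w2)

print(round(sugeno(0.0), 3))  # only LOW fires  -> 1.0
print(round(sugeno(1.0), 3))  # only HIGH fires -> 2.0
print(round(sugeno(0.5), 3))  # both fire equally -> 1.175
```

ANFIS training then adjusts the membership parameters and the consequent coefficients from data, typically by hybrid least-squares and gradient descent.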
Moya Quiroga, Vladimir; Mano, Akira; Asaoka, Yoshihiro; Udo, Keiko; Kure, Shuichi; Mendoza, Javier
Glaciers are the most important fresh water reservoirs, storing about 67% of total fresh water. Unfortunately, they are retreating, and some small glaciers have already disappeared. Thus, snow glacier melt (SGM) estimation plays an important role in water resources management. Whether SGM is estimated by a complete energy balance or a simplified method, albedo is an important datum present in most of the methods. However, it is a variable value depending on the ground surface and local conditions. The present research presents a new approach for estimating sub-hourly albedo values using different artificial intelligence techniques, such as artificial neural networks and decision trees, along with measured and easy-to-obtain data. The models were developed using measured data from the Zongo-Ore station located in the Bolivian tropical glacier Zongo (68°10' W, 16°15' S). This station automatically records several meteorological parameters every 30 minutes, such as incoming short wave radiation, outgoing short wave radiation, temperature and relative humidity. The ANN model used was the multilayer perceptron, while the decision tree used was the M5 model. Both models were trained using the WEKA software and validated using the cross-validation method. After analysing the model performances, it was concluded that the decision tree models perform better. The model with the best performance was then validated with measured data from the Ecuadorian tropical glacier Antizana (78°09'W, 0°28'S). The model predicts the sub-hourly albedo with an overall mean absolute error of 0.103. The highest errors occur for measured albedo values higher than 0.9. Considering that this is an extreme value coincident with low measured values of incoming short wave radiation, it is reasonable to assume that such values include errors due to censored data. Assuming a maximum albedo of 0.9 improved the accuracy of the model, reducing the MAE to less than 0.1. Considering that the
Kumar, Rajnish; Sharma, Anju; Siddiqui, Mohammed Haris; Tiwari, Rajesh Kumar
Information about the pharmacokinetics of compounds is an essential component of drug design and development. Modeling pharmacokinetic properties requires identification of the factors affecting the absorption, distribution, metabolism and excretion of compounds. There have been continuous attempts at the prediction of the absorption of compounds using various artificial intelligence methods, in an effort to reduce the attrition rate of drug candidates entering preclinical and clinical trials. Currently, there are large numbers of individual predictive models available for absorption using machine learning approaches. In the current work, we present a comprehensive study of the prediction of absorption. Six artificial intelligence methods, namely Support vector machine, k-nearest neighbor, Probabilistic neural network, Artificial neural network, Partial least squares and Linear discriminant analysis, were used for the prediction of absorption of compounds, with prediction accuracies of 91.54%, 88.33%, 84.30%, 86.51%, 79.07% and 80.08% respectively. Comparative analysis of all six prediction models suggested that the Support vector machine with a Radial basis function kernel is comparatively better for binary classification of compounds using human intestinal absorption, and may be useful at preliminary stages of drug design and development.
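Of the six methods compared above, k-nearest neighbor is the simplest to sketch as a binary classifier. The toy molecular descriptors and absorption labels below are hypothetical and bear no relation to the study's actual data set:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """k-nearest-neighbour vote: train is a list of (features, label) pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))  # squared Euclidean
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# hypothetical scaled descriptors (e.g. lipophilicity-like, size-like) with labels
train = [
    ((0.20, 0.30), "absorbed"), ((0.30, 0.20), "absorbed"), ((0.25, 0.35), "absorbed"),
    ((0.80, 0.90), "not_absorbed"), ((0.90, 0.80), "not_absorbed"), ((0.85, 0.75), "not_absorbed"),
]
print(knn_predict(train, (0.30, 0.30)))   # lands in the "absorbed" group
print(knn_predict(train, (0.90, 0.85)))   # lands in the "not_absorbed" group
```

An RBF-kernel SVM, the study's best performer, replaces the majority vote with a maximum-margin boundary in the kernel-induced feature space, but the input/output shape of the classifier is the same.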
Mack, Marilyn; Lapir, Gennadi M.; Berkovich, Simon
The basic property of an intelligent system, natural or artificial, is "understanding". We consider the following formalization of the idea of "understanding" among information systems. When system 1 issues a request to system 2, it expects a certain kind of desirable reaction. If such a reaction occurs, system 1 assumes that its request was "understood". In application to simple, "push-button" systems the situation is trivial, because in a small system the required relationship between input requests and desired outputs can be specified exactly. As systems grow, the situation becomes more complex, and the matching between requests and actions becomes approximate.
How to deal with uncertainty is a subject of much controversy in Artificial Intelligence. This volume brings together a wide range of perspectives on uncertainty, many of the contributors being the principal proponents in the controversy. Some of the notable issues which emerge from these papers revolve around an interval-based calculus of uncertainty, the Dempster-Shafer Theory, and probability as the best numeric model for uncertainty. There remain strong dissenting opinions not only about probability but even about the utility of any numeric method in this context.
Korb, Kevin B
Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology. New to the Second Edition: new chapter on Bayesian network classifiers; new section on object-oriente
This book explores the concept of artificial intelligence based on knowledge-based algorithms. Given the current hardware and software technologies and artificial intelligence theories, we can think about how efficiently to provide a solution, how best to implement a model, and how successfully to achieve it. This edition provides readers with the most recent progress and novel solutions in artificial intelligence. The book aims at presenting research results and solutions for applications relevant to artificial intelligence technologies. We propose to researchers and practitioners some methods to advance intelligent systems and to apply artificial intelligence for specific or general purposes. The book consists of 13 contributions that feature fuzzy (r, s)-minimal pre- and β-open sets, handling big co-occurrence matrices, Xie-Beni-type fuzzy cluster validation, fuzzy c-regression models, combination of genetic algorithm and ant colony optimization, building expert systems, fuzzy logic and neural networks, ind...
van der Zant, Tijn; Kouw, Matthijs; Schomaker, Lambertus; Mueller, Vincent C.
The closed systems of contemporary Artificial Intelligence do not seem to lead to intelligent machines in the near future. What is needed are open-ended systems with non-linear properties in order to create interesting properties for the scaffolding of an artificial mind. Using post-structuralistic
We present the integration of artificial intelligence, robust, nonlinear and model reference adaptive control (MRAC) methods for fault-tolerant control (FTC). We combine MRAC schemes with classical PID controllers, artificial neural networks (ANNs), genetic algorithms (GAs), H∞ controls and sliding mode controls. Six different schemes are proposed: the first is an MRAC with an artificial neural network and a PID controller whose parameters were tuned by a GA using Pattern Search Optimization. The second scheme is an MRAC controller with an H∞ control. The third scheme is an MRAC controller with a sliding mode controller (SMC). The fourth scheme is an MRAC controller with an ANN. The fifth scheme is an MRAC controller with a PID controller optimized by a GA. Finally, the last scheme is a classical MRAC control system. The objective of this research is to generate more powerful FTC methods and compare the performance of the above schemes under different fault conditions in sensors and actuators. An industrial heat exchanger process was the test bed for these approaches. Simulation results showed that the use of Pattern Search Optimization and ANNs improved the performance of the FTC scheme because it makes the control system more robust against sensor and actuator faults.
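The discrete PID block that several of the schemes above build on can be sketched closing the loop on a toy first-order plant. The gains, plant constants and sampling time below are illustrative assumptions, not the heat-exchanger values:

```python
class DiscretePID:
    """Positional discrete PID: u[k] = Kp*e[k] + Ki*Ts*sum(e) + Kd*(e[k]-e[k-1])/Ts."""
    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        self.integral += err * self.ts
        deriv = (err - self.prev_err) / self.ts
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# toy first-order plant x[k+1] = x[k] + Ts*(-a*x[k] + b*u[k]); unit step setpoint
a, b, ts = 1.0, 1.0, 0.01
pid = DiscretePID(kp=4.0, ki=2.0, kd=0.05, ts=ts)
x = 0.0
for _ in range(2000):          # 20 s of simulated time
    u = pid.update(1.0 - x)    # error = setpoint - measurement
    x = x + ts * (-a * x + b * u)
print("final state:", round(x, 3))
```

In a GA-tuned scheme, the tuple (kp, ki, kd) would be the chromosome, and the fitness would score the simulated closed-loop response (e.g. integrated absolute error), with or without injected sensor/actuator faults.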
Hamet, Pavel; Tremblay, Johanne
Artificial Intelligence (AI) is a general term that implies the use of a computer to model intelligent behavior with minimal human intervention. AI is generally accepted as having started with the invention of robots. The term derives from the Czech word robota, meaning biosynthetic machines used as forced labor. In this field, Leonardo Da Vinci's lasting heritage is today's burgeoning use of robotic-assisted surgery, named after him, for complex urologic and gynecologic procedures. Da Vinci's sketchbooks of robots helped set the stage for this innovation. AI, described as the science and engineering of making intelligent machines, was officially born in 1956. The term is applicable to a broad range of items in medicine such as robotics, medical diagnosis, medical statistics, and human biology-up to and including today's "omics". AI in medicine, which is the focus of this review, has two main branches: virtual and physical. The virtual branch includes informatics approaches from deep learning information management to control of health management systems, including electronic health records, and active guidance of physicians in their treatment decisions. The physical branch is best represented by robots used to assist the elderly patient or the attending surgeon. Also embodied in this branch are targeted nanorobots, a unique new drug delivery system. The societal and ethical complexities of these applications require further reflection, proof of their medical utility, economic value, and development of interdisciplinary strategies for their wider application.
Leal, Ralph A.
A survey of literature on recent advances in the field of artificial intelligence provides a comprehensive introduction to this field for the non-technical reader. Important areas covered are: (1) definitions, (2) the brain and thinking, (3) heuristic search, and (4) programing languages used in the research of artificial intelligence. Some…
The Handbook of Artificial Intelligence, Volume II focuses on the improvements in artificial intelligence (AI) and its increasing applications, including programming languages, intelligent CAI systems, and the employment of AI in medicine, science, and education. The book first elaborates on programming languages for AI research and applications-oriented AI research. Discussions cover scientific applications, teiresias, applications in chemistry, dependencies and assumptions, AI programming-language features, and LISP. The manuscript then examines applications-oriented AI research in medicine
This book presents carefully selected contributions devoted to the modern perspective of AI research and innovation. The collection covers several areas of application and motivates new research directions. The theme across all chapters combines several domains of AI research, Computational Intelligence and Machine Intelligence, including an introduction to the recent research and models. Each of the subsequent chapters reveals leading-edge research and innovative solutions that employ AI techniques with an applied perspective. The problems include classification of spatial images, early smoke detection in outdoor space from video images, emergent segmentation from image analysis, intensity modification in images, multi-agent modeling and analysis of stress. They are all novel pieces of work and demonstrate how AI research contributes to solutions for difficult real-world problems that benefit the research community, industry and society.
Vargas Martínez, Adriana
The investigation of this thesis presents different approaches to Fault Tolerant Control based on Model Reference Adaptive Control, Artificial Neural Networks, a PID controller optimized by a Genetic Algorithm, and Nonlinear, Robust and Linear Parameter Varying (LPV) control for Linear Time Invariant (LTI), LPV and nonlinear systems. All of the above techniques are integrated in different controller structures to prove their ability to accommodate a fault. Modern systems and their challenging op...
Zhang, Yu; Xu, Jing-Liang; Yuan, Zhen-Hong; Qi, Wei; Liu, Yun-Yun; He, Min-Chao
Two artificial intelligence techniques, namely an artificial neural network (ANN) and a genetic algorithm (GA), were combined as a tool for optimizing the covalent immobilization of cellulase on a smart polymer, Eudragit L-100. 1-Ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC) concentration, N-hydroxysuccinimide (NHS) concentration and coupling time were taken as independent variables, and immobilization efficiency was taken as the response. The data of the central composite design were used to train the ANN by the back-propagation algorithm, and the result showed that the trained ANN fitted the data accurately (correlation coefficient R(2) = 0.99). A maximum immobilization efficiency of 88.76% was then found by the genetic algorithm at an EDC concentration of 0.44%, an NHS concentration of 0.37% and a coupling time of 2.22 h, where the experimental value was 87.97 ± 6.45%. The application of ANN-based optimization by GA was quite successful.
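The GA step above searches a fitted response surface for the conditions that maximize efficiency. A minimal sketch follows; the quadratic surrogate stands in for the trained ANN (its peak is placed at the reported optimum purely for illustration), and the bounds, population size and mutation scale are assumptions:

```python
import random

random.seed(2)

# assumed stand-in for the trained ANN: efficiency peaks near (0.44, 0.37, 2.22)
def fitness(edc, nhs, hours):
    return 88.8 - 40 * (edc - 0.44) ** 2 - 40 * (nhs - 0.37) ** 2 - 2 * (hours - 2.22) ** 2

BOUNDS = [(0.0, 1.0), (0.0, 1.0), (0.5, 4.0)]  # EDC %, NHS %, coupling time (h)

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind, rate=0.2):
    """Gaussian perturbation of some genes, clamped to the bounds."""
    return [min(hi, max(lo, g + random.gauss(0, 0.05))) if random.random() < rate else g
            for g, (lo, hi) in zip(ind, BOUNDS)]

def crossover(a, b):
    """Uniform crossover: each gene taken from either parent."""
    return [g1 if random.random() < 0.5 else g2 for g1, g2 in zip(a, b)]

pop = [random_individual() for _ in range(40)]
for gen in range(100):
    pop.sort(key=lambda ind: fitness(*ind), reverse=True)
    elite = pop[:10]  # truncation selection keeps the 10 fittest
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(30)]

best = max(pop, key=lambda ind: fitness(*ind))
print("best conditions:", [round(g, 2) for g in best],
      "fitness:", round(fitness(*best), 2))
```

In the actual workflow the `fitness` call would be a forward pass through the back-propagation-trained ANN rather than a closed-form surrogate.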
Zhang, Yu; Xu, Jing-Liang; Yuan, Zhen-Hong; Qi, Wei; Liu, Yun-Yun; He, Min-Chao
Two artificial intelligence techniques, namely artificial neural network (ANN) and genetic algorithm (GA) were combined to be used as a tool for optimizing the covalent immobilization of cellulase on a smart polymer, Eudragit L-100. 1-Ethyl-3-(3-dimethyllaminopropyl) carbodiimide (EDC) concentration, N-hydroxysuccinimide (NHS) concentration and coupling time were taken as independent variables, and immobilization efficiency was taken as the response. The data of the central composite design were used to train ANN by back-propagation algorithm, and the result showed that the trained ANN fitted the data accurately (correlation coefficient R2 = 0.99). Then a maximum immobilization efficiency of 88.76% was searched by genetic algorithm at a EDC concentration of 0.44%, NHS concentration of 0.37% and a coupling time of 2.22 h, where the experimental value was 87.97 ± 6.45%. The application of ANN based optimization by GA is quite successful. PMID:22942683
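The ANN-plus-GA optimization described above can be sketched in miniature. The surrogate function below stands in for the trained neural network (the paper's actual network weights are not published here); its peak is placed at the reported optimum by construction, and the bounds, population size and mutation scale are likewise illustrative assumptions.

```python
import random

# Illustrative surrogate standing in for the trained ANN response surface.
# By construction its maximum (88.76) sits at EDC=0.44, NHS=0.37, t=2.22,
# mirroring the optimum reported in the abstract.
def surrogate(edc, nhs, t):
    return 88.76 - 40 * (edc - 0.44) ** 2 - 40 * (nhs - 0.37) ** 2 - 5 * (t - 2.22) ** 2

BOUNDS = [(0.1, 1.0), (0.1, 1.0), (0.5, 4.0)]  # EDC %, NHS %, coupling time h

def ga_maximize(f, bounds, pop_size=40, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: f(*ind), reverse=True)
        elite = pop[:pop_size // 2]                      # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # arithmetic crossover
            i = rng.randrange(len(bounds))               # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.05)))
            children.append(child)
        pop = elite + children                           # elitism keeps the best
    return max(pop, key=lambda ind: f(*ind))

best = ga_maximize(surrogate, BOUNDS)
```

With elitism the best individual only improves, so after a few dozen generations the search settles near the surrogate's peak.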
The procedures for searching for solutions to problems in Artificial Intelligence can be carried out, on many occasions, without knowledge of the domain, and in other situations with knowledge of it. The latter procedure is usually called Heuristic Search. In such methods, matrix techniques prove essential; their introduction provides an easy and precise way to search for a solution. Our paper explains how matrix theory appears in and fruitfully contributes to AI, with feasible applications to Game Theory.
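As a small illustration of matrix techniques in search (the graph below is our own example, not taken from the paper), powers of an adjacency matrix enumerate paths in a state space: entry (i, j) of A^k is the number of distinct k-step paths from state i to state j.

```python
# Counting length-k paths in a state graph via adjacency-matrix powers.

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def mat_pow(A, k):
    n = len(A)
    R = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
    for _ in range(k):
        R = mat_mul(R, A)
    return R

# A 4-state graph with edges 0->1, 0->2, 1->3, 2->3
A = [[0, 1, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]

paths2 = mat_pow(A, 2)   # paths2[0][3] counts two-step paths from 0 to 3
```

Here `paths2[0][3]` is 2, one path through state 1 and one through state 2; the same machinery supports payoff-matrix manipulations in game theory.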
Artificial intelligence is a branch of computer science involved in the research, design, and application of intelligent computers. Traditional methods for modeling and optimizing complex structural systems require huge amounts of computing resources, and artificial-intelligence-based solutions can often provide valuable alternatives for efficiently solving problems in civil engineering. This paper summarizes recently developed methods and theories in the application of artificial intelligence to civil engineering, including evolutionary computation, neural networks, fuzzy systems, expert systems, reasoning, classification, and learning, as well as others such as chaos theory, cuckoo search, the firefly algorithm, knowledge-based engineering, and simulated annealing. The main research trends are also pointed out at the end. The paper provides an overview of the advances of artificial intelligence applied in civil engineering.
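Of the methods listed, simulated annealing is compact enough to sketch. The objective below is a toy one-dimensional function, not a civil engineering model, and the cooling schedule and step size are illustrative choices.

```python
import math
import random

# Minimal simulated annealing: minimize f(x) = (x - 3)^2 over [lo, hi].
def anneal(f, lo, hi, steps=5000, t0=1.0, seed=0):
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    best = x
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9              # linear cooling schedule
        cand = min(hi, max(lo, x + rng.gauss(0, 0.5)))  # random neighbor
        delta = f(cand) - f(x)
        # Metropolis rule: always accept improvements; accept worsenings
        # with probability exp(-delta / t), which shrinks as t cools.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if f(x) < f(best):
            best = x                                     # track best-so-far
    return best

best = anneal(lambda x: (x - 3) ** 2, 0.0, 10.0)
```

The early high-temperature phase allows uphill moves that escape local minima; the late cold phase behaves like greedy descent.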
Artificial intelligence (AI) is a computer-based science which aims to simulate human brain faculties using a computational system. A brief history of this new science runs from the creation of the first artificial neuron in 1943 to the first artificial neural network application and on to genetic algorithms. The potential for a similar technology in medicine was immediately identified by scientists and researchers. The possibility of storing and processing all medical knowledge has made this technology very attractive for assisting, or even surpassing, clinicians in reaching a diagnosis. Applications of AI in medicine include devices applied to clinical diagnosis in neurology and cardiopulmonary diseases, as well as the use of expert or knowledge-based systems in routine clinical use for diagnosis, therapeutic management and prognostic evaluation. Biological applications include genome sequencing, DNA gene expression microarrays, modeling gene networks, analysis and clustering of gene expression data, pattern recognition in DNA and proteins, and protein structure prediction. In the field of hematology, the first devices based on AI were applied to routine laboratory data management. New tools concern the differential diagnosis of specific diseases such as anemias, thalassemias and leukemias, based on neural networks trained with data from peripheral blood analysis. A revolution in cancer diagnosis, including the diagnosis of hematological malignancies, has been the introduction of the first microarray-based and bioinformatic approach to molecular diagnosis: a systematic approach based on monitoring the simultaneous expression of thousands of genes using DNA microarrays, independently of previous biological knowledge, analysed using AI devices. Using gene profiling, traditional diagnostic pathways move from clinical to molecular-based diagnostic systems.
Tilmann, Martha J.
Artificial intelligence, or the study of ideas that enable computers to be intelligent, is discussed in terms of what it is, what it has done, what it can do, and how it may affect the teaching of tomorrow. An extensive overview of artificial intelligence examines its goals and applications and types of artificial intelligence including (1) expert…
After reviewing the recent popularization of information transmission and processing technologies, which are supported by the progress of electronics, the authors describe how the introduction of opto-electronics into information technology has opened the possibility of applying artificial intelligence (AI) techniques to the mechanization of information management. It is pointed out that although AI deals with problems in the mental world, its basic methodology relies upon verification by evidence, so experiments on computers become indispensable for the study of AI. The authors also describe that, as computers operate by program, the basic intelligence with which AI is concerned is that expressed by languages. It follows that the main tool of AI is logical proof, which involves an intrinsic limitation. To answer the question "Why do you employ AI in your problem solving?", one must have ill-structured problems and intend to conduct deep studies on thinking and inference, and on memory and knowledge representation. Finally, the authors discuss the application of AI techniques to information management. The possibilities of expert systems and query processing, and the necessity of a document knowledge base, are stated.
Kamruzzaman, S M
Text classification is the process of classifying documents into predefined categories based on their content. It is the automated assignment of natural language texts to predefined categories. Text classification is the primary requirement of text retrieval systems, which retrieve texts in response to a user query, and of text understanding systems, which transform text in some way, such as producing summaries, answering questions or extracting data. Existing supervised learning algorithms for classifying text need sufficient documents to learn accurately. This paper presents a new algorithm for text classification using artificial intelligence techniques that requires fewer documents for training. Instead of using words, word relations, i.e. association rules derived from those words, are used to build the feature set from pre-classified text documents. The concept of the naïve Bayes classifier is then applied to the derived features, and finally a single concept from genetic algorithms is added for the final classification. A syste...
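The core idea, word relations rather than single words as features, can be sketched in a few lines. The training documents and labels below are invented for illustration, and co-occurring word pairs stand in for the paper's mined association rules; the GA refinement step is omitted.

```python
import math
from collections import defaultdict
from itertools import combinations

# Feature set: unordered pairs of words co-occurring in a document,
# a simple stand-in for association-rule features.
def pair_features(text):
    words = sorted(set(text.lower().split()))
    return set(combinations(words, 2))

def train(docs):                          # docs: list of (text, label)
    counts = defaultdict(lambda: defaultdict(int))
    labels = defaultdict(int)
    for text, label in docs:
        labels[label] += 1
        for f in pair_features(text):
            counts[label][f] += 1
    return counts, labels

def classify(text, counts, labels):
    total = sum(labels.values())
    feats = pair_features(text)
    best, best_lp = None, -math.inf
    for label, n in labels.items():
        lp = math.log(n / total)          # log prior
        for f in feats:                   # Laplace-smoothed log likelihoods
            lp += math.log((counts[label][f] + 1) / (n + 2))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("cheap pills buy now", "spam"),
        ("buy cheap watches now", "spam"),
        ("meeting agenda project notes", "ham"),
        ("project meeting schedule notes", "ham")]
counts, labels = train(docs)
```

Even with four training documents, shared word pairs such as ("buy", "cheap") carry more class evidence than isolated words would.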
Zhong-Zhi Shi; Nan-Ning Zheng
Artificial Intelligence (AI) is generally considered a subfield of computer science concerned with attempts to simulate, extend and expand human intelligence. Artificial intelligence has enjoyed tremendous success over the last fifty years. In this paper we focus only on visual perception, granular computing, agent computing, and the semantic grid. Human-level intelligence is the long-term goal of artificial intelligence. We should pursue joint research on the basic theory and technology of intelligence across brain science, cognitive science, artificial intelligence and other fields. A new cross-discipline, intelligence science, is undergoing rapid development. Future challenges are given in the final section.
Tenório, Josceli Maria; Hummel, Anderson Diniz; Cohrs, Frederico Molina; Sdepanian, Vera Lucia; Pisa, Ivan Torres; de Fátima Marin, Heimar
Background Celiac disease (CD) is a difficult-to-diagnose condition because of its multiple clinical presentations and symptoms shared with other diseases. Gold-standard diagnostic confirmation of suspected CD is achieved by biopsying the small intestine. Objective To develop a clinical decision-support system (CDSS) integrated with an automated classifier to recognize CD cases, by selecting from experimental models developed using artificial intelligence techniques. Methods A web-based system was designed for constructing a retrospective database that included 178 clinical cases for training. Tests were run on 270 automated classifiers available in Weka 3.6.1 using five artificial intelligence techniques, namely decision trees, Bayesian inference, the k-nearest neighbor algorithm, support vector machines and artificial neural networks. The parameters evaluated were accuracy, sensitivity, specificity and area under the ROC curve (AUC). AUC was used as the criterion for selecting the CDSS algorithm. A testing database was constructed including 38 clinical CD cases for CDSS evaluation. The diagnoses suggested by the CDSS were compared with those made by physicians during patient consultations. Results The most accurate method during the training phase was the averaged one-dependence estimator (AODE) algorithm (a Bayesian classifier), which showed accuracy 80.0%, sensitivity 0.78, specificity 0.80 and AUC 0.84. This classifier was integrated into the web-based decision-support system. The gold-standard validation of the CDSS achieved accuracy of 84.2% and k = 0.68 (p < 0.0001) with good agreement. The same accuracy was achieved in the comparison between the physician's diagnostic impression and the gold standard, k = 0.64 (p < 0.0001). There was moderate agreement between the physician's diagnostic impression and the CDSS, k = 0.46 (p = 0.0008). Conclusions The study results suggest that the CDSS could be used to help in diagnosing CD, since the algorithm tested achieved excellent
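AUC, the model-selection criterion used above, can be computed directly from classifier scores via the Mann-Whitney statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counting one half. The scores below are made-up illustration data, not the study's.

```python
# AUC from raw scores, no threshold sweep needed.
def auc(scores, labels):          # labels: 1 = positive case, 0 = negative
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # invented classifier outputs
labels = [1,   1,   0,   1,   0,   0]
```

For these scores `auc(scores, labels)` is 8/9: the misranked pair (0.7 for a negative above 0.4 for a positive) costs one of the nine positive-negative comparisons.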
Mehta, U. B.; Kutler, P.
The general principles of artificial intelligence are reviewed and speculations are made concerning how knowledge based systems can accelerate the process of acquiring new knowledge in aerodynamics, how computational fluid dynamics may use expert systems, and how expert systems may speed the design and development process. In addition, the anatomy of an idealized expert system called AERODYNAMICIST is discussed. Resource requirements for using artificial intelligence in computational fluid dynamics and aerodynamics are examined. Three main conclusions are presented. First, there are two related aspects of computational aerodynamics: reasoning and calculating. Second, a substantial portion of reasoning can be achieved with artificial intelligence. It offers the opportunity of using computers as reasoning machines to set the stage for efficient calculating. Third, expert systems are likely to be new assets of institutions involved in aeronautics for various tasks of computational aerodynamics.
Edward Wong Sek Khin
Internet fraud is increasing on a daily basis, with new methods for extracting funds from government, corporations, businesses in general, and persons appearing almost hourly. The increases in online purchasing and the constant vigilance of both seller and buyer have meant that the criminal seems to be one step ahead at all times. In the non-computer-based daily transactions of today, fraud is pre-empted or stopped before it can happen because of the natural intelligence of the players, both seller and buyer. Currently, even with advances in computing techniques, intelligence is not the strength of any computing system, yet techniques are available which may reduce the occurrence of fraud; these are usually referred to as artificial intelligence systems. This paper provides an overview of the use of current artificial intelligence (AI) techniques as a means of combating fraud. Initially the paper describes how artificial intelligence techniques are employed in systems for detecting credit card fraud (online and offline fraud) and insider trading. Following this, an attempt is made to propose the use of the MonITARS (Monitoring Insider Trading and Regulatory Surveillance Systems) framework, which uses a combination of genetic algorithms, neural nets and statistical analysis in detecting insider dealing. Finally, the paper discusses a future research agenda for the MonITARS system.
In this paper we offer a formal definition of Artificial Intelligence, and this directly gives us an algorithm for the construction of this object. In practice, this algorithm is useless due to combinatorial explosion. The main innovation in our definition is that it does not include knowledge as a part of intelligence. So, according to our definition, a newborn baby is also an Intellect. Here we differ from Turing's definition, which suggests that an Intellect is a person with knowledge gai...
* This publication is partially supported by the KT-DigiCult-Bg project. A definition of Artificial Intelligence (AI) was proposed in  but this definition was not absolutely formal at least because the word "Human" was used. In this paper we will formalize the definition from . The biggest problem in this definition was that the level of intelligence of AI is compared to the intelligence of a human being. In order to change this we will introduce some parameters to which AI ...
This paper overviews the basic principles and recent advances in artificially intelligent robotics and the use of robots in everyday life across various domains. The aim of the paper is to introduce the basic concepts of artificial intelligence techniques and to present a survey of robots. The first section surveys the concept of artificial intelligence and intelligent life, and introduces two important factors in artificial intelligence. The next section gives an overview of the basic elements of artificial intelligence. Another important section covers intelligent robots and behavior-based robotics. Robots are used today in various domains; we introduce one of them, rehabilitation robots.
Schaefer, Lloyd A.; Willenberg, James D.
Subtle indications of flaws extracted from ultrasonic waveforms. Ultrasonic-inspection system uses artificial intelligence to help in identification of hidden flaws in electron-beam-welded castings. System involves application of flaw-classification logic to analysis of ultrasonic waveforms.
DeSiano, Michael; DeSiano, Salvatore
This document provides an introduction to the relationship between the current knowledge of focused and creative thinking and artificial intelligence. A model for stages of focused and creative thinking gives: problem encounter/setting, preparation, concentration/incubation, clarification/generation and evaluation/judgment. While a computer can…
Problem statement: Educators argue that in the post-modern world, changes in the nature of work, globalization, the information revolution and today's social challenges will all impact educational priorities and thus will require a new mode of assessment. Approach: The objectives of this study were to: (1) present a novel software package tool to create multiple-choice and true/false exam forms; (2) provide exam key solutions automatically; (3) meet special instructors' needs by allowing them to easily incorporate multimedia elements into the exam questions, along with word-processor editing functions; and (4) save instructors both time and money. Results: The multi-form exam can be created randomly from a question database, or manually with shuffled answers for each question. The tool was built on a website with an HTML interface using multimedia applications; two languages, English and Arabic, can be used at the same time; and efficient artificial intelligence techniques and algorithms are used. The tool was designed, implemented and tested by experienced instructors, with the result that efficiency, accountability and time savings improved. Conclusion/Recommendations: The transition from paper to electronic exams greatly enhanced user satisfaction. The exam editor tool can be used via the internet without the need to download and install it on the user's machine; it is a time-saving system when multiple versions of random exams are required. This should highly motivate instructors and teachers to utilize technology and IT to enhance exams and performance.
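The multi-form mechanism, drawing questions from a bank and shuffling answer choices per form, can be sketched as follows. The question bank and the per-form seeding scheme are invented for illustration; they are not the tool's actual design.

```python
import random

# Invented three-question bank; by convention the correct answer is listed first.
BANK = [
    ("2 + 2 = ?", ["4", "3", "5", "22"]),
    ("Capital of France?", ["Paris", "Rome", "Berlin", "Madrid"]),
    ("H2O is?", ["water", "salt", "oxygen", "helium"]),
]

def make_form(form_id, n_questions=3):
    rng = random.Random(form_id)            # per-form seed: forms are reproducible
    questions = rng.sample(BANK, n_questions)   # random question order
    form, key = [], []
    for text, choices in questions:
        correct = choices[0]
        shuffled = choices[:]
        rng.shuffle(shuffled)               # shuffle answers per question
        form.append((text, shuffled))
        key.append(shuffled.index(correct)) # answer key generated automatically
    return form, key

form_a, key_a = make_form(1)
form_b, key_b = make_form(2)
```

Seeding on the form identifier means any form, and its answer key, can be regenerated exactly, which is what makes automatic key solutions possible.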
A new approach for the detection of real-time properties of flames is used in this project to develop improved diagnostics and controls for natural gas fired furnaces. The system utilizes video images along with advanced image analysis and artificial intelligence techniques to provide virtual sensors in a stand-alone expert shell environment. One of the sensors is a flame sensor encompassing a flame detector and a flame analyzer to provide combustion status. The flame detector can identify any burner that has not fired in a multi-burner furnace. Another sensor is a 3-D temperature profiler. One important aspect of combustion control is product quality. The 3-D temperature profiler of this on-line system is intended to provide a tool for a better temperature control in a furnace to improve product quality. In summary, this on-line diagnostic and control system offers great potential for improving furnace thermal efficiency, lowering NOx and carbon monoxide emissions, and improving product quality. The system is applicable in natural gas-fired furnaces in the glass industry and reheating furnaces used in steel and forging industries.
Aksu, Buket; Paradkar, Anant; de Matas, Marcel; Ozer, Ozgen; Güneri, Tamer; York, Peter
The publication of the International Conference of Harmonization (ICH) Q8, Q9, and Q10 guidelines paved the way for the standardization of quality after the Food and Drug Administration issued current Good Manufacturing Practices guidelines in 2003. "Quality by Design", mentioned in the ICH Q8 guideline, offers a better scientific understanding of critical process and product qualities using knowledge obtained during the life cycle of a product. In this scope, the "knowledge space" is a summary of all process knowledge obtained during product development, and the "design space" is the area in which a product can be manufactured within acceptable limits. To create the spaces, artificial neural networks (ANNs) can be used to emphasize the multidimensional interactions of input variables and to closely bind these variables to a design space. This helps guide the experimental design process to include interactions among the input variables, along with modeling and optimization of pharmaceutical formulations. The objective of this study was to develop an integrated multivariate approach to obtain a quality product based on an understanding of the cause-effect relationships between formulation ingredients and product properties with ANNs and genetic programming on the ramipril tablets prepared by the direct compression method. In this study, the data are generated through the systematic application of the design of experiments (DoE) principles and optimization studies using artificial neural networks and neurofuzzy logic programs.
Levitt, TS; Lemmer, JF; Shachter, RD
Clearly illustrated in this volume is the current relationship between Uncertainty and AI. It has been said that research in AI revolves around five basic questions asked relative to some particular domain: What knowledge is required? How can this knowledge be acquired? How can it be represented in a system? How should this knowledge be manipulated in order to provide intelligent behavior? How can the behavior be explained? In this volume, all of these questions are addressed. From the perspective of the relationship of uncertainty to the basic questions of AI, the book divides naturally i
Curran, Kevin; Nichols, Eric; Xie, Ermai; Harper, Roy
Background Software to help control diabetes is currently an embryonic market with the main activity to date focused mainly on the development of noncomputerized solutions, such as cardboard calculators or computerized solutions that use “flat” computer models, which are applied to each person without taking into account their individual lifestyles. The development of true, mobile device-driven health applications has been hindered by the lack of tools available in the past and the sheer lack of mobile devices on the market. This has now changed, however, with the availability of pocket personal computer handsets. Method This article describes a solution in the form of an intelligent neural network running on mobile devices, allowing people with diabetes access to it regardless of their location. Utilizing an easy to learn and use multipanel user interface, people with diabetes can run the software in real time via an easy to use graphical user interface. The neural network consists of four neurons. The first is glucose. If the user's current glucose level is within the target range, the glucose weight is then multiplied by zero. If the glucose level is high, then there will be a positive value multiplied to the weight, resulting in a positive amount of insulin to be injected. If the user's glucose level is low, then the weights will be multiplied by a negative value, resulting in a decrease in the overall insulin dose. Results A minifeasibility trial was carried out at a local hospital under a consultant endocrinologist in Belfast. The short study ran for 2 weeks with six patients. The main objectives were to investigate the user interface, test the remote sending of data over a 3G network to a centralized server at the university, and record patient data for further proofing of the neural network. We also received useful feedback regarding the user interface and the feasibility of handing real-world patients a new mobile phone. Results of this short trial
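The glucose-neuron rule described above (weight times zero in range, a positive contribution when high, a negative one when low) can be sketched as a function. All thresholds and weights below are invented for illustration only; they are not clinical values and not the trial's parameters.

```python
# Illustrative glucose neuron. TARGET range and GLUCOSE_WEIGHT are
# assumptions made for this sketch, NOT clinical values.
TARGET_LOW, TARGET_HIGH = 4.0, 7.0   # assumed target range, mmol/L
GLUCOSE_WEIGHT = 0.5                 # assumed insulin units per mmol/L deviation

def glucose_neuron(glucose):
    if TARGET_LOW <= glucose <= TARGET_HIGH:
        return GLUCOSE_WEIGHT * 0.0  # in range: weight multiplied by zero
    if glucose > TARGET_HIGH:        # high: positive value, dose increases
        return GLUCOSE_WEIGHT * (glucose - TARGET_HIGH)
    # low: negative value, overall dose decreases
    return GLUCOSE_WEIGHT * (glucose - TARGET_LOW)
```

In the full network this output would be summed with the contributions of the other three neurons to adjust the overall insulin dose.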
Poehlman, W. F. S.; Garland, Wm. J.; Stark, J. W.
In an era of downsizing and a limited pool of skilled accelerator personnel from which to draw replacements for an aging workforce, the impetus to integrate intelligent computer automation into the accelerator operator's repertoire is strong. However, successful deployment of an "Operator's Companion" is not trivial. Both graphical and human factors need to be recognized as critical areas that require extra care when formulating the Companion. They include an interactive graphical user interface that mimics, for the operator, familiar accelerator controls; knowledge-acquisition phases during development that acknowledge the expert's mental model of machine operation; and automated operations that must be seen as improvements to the operator's environment rather than threats of ultimate replacement. Experiences with the PACES Accelerator Operator Companion developed at two sites over the past three years are related, and graphical examples are given. The scale of the work involves multi-computer control of various start-up/shutdown and tuning procedures for Model FN and KN Van de Graaff accelerators. The response from licensing agencies has been encouraging.
Soil temperature is a meteorological variable directly affecting the formation and development of plants of all kinds. Soil temperatures are usually estimated with various models, including artificial neural networks (ANNs), the adaptive neuro-fuzzy inference system (ANFIS), and multiple linear regression (MLR) models. Soil temperatures, along with other climate data, are recorded by the Turkish State Meteorological Service (MGM) at specific locations all over Turkey. Soil temperatures are commonly measured at 5-, 10-, 20-, 50-, and 100-cm depths below the soil surface. In this study, monthly soil temperature data measured at 261 stations in Turkey with records of at least 20 years were used to develop the models. Different input combinations were tested in the ANN and ANFIS models to estimate soil temperatures, and the best combination of significant explanatory variables turns out to be monthly minimum and maximum air temperatures, calendar month number, depth of soil, and monthly precipitation. Next, three standard error terms (mean absolute error (MAE, °C), root mean squared error (RMSE, °C), and determination coefficient (R²)) were employed to check the reliability of the test results obtained through the ANN, ANFIS, and MLR models. ANFIS (RMSE 1.99; MAE 1.09; R² 0.98) is found to outperform both ANN and MLR (RMSE 5.80, 8.89; MAE 1.89, 2.36; R² 0.93, 0.91) in estimating soil temperature in Turkey.
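The three error measures used above are straightforward to compute from scratch. The observed and predicted temperatures below are a small made-up sample, not the study's data.

```python
import math

def mae(obs, pred):
    # Mean absolute error: average magnitude of the residuals.
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    # Root mean squared error: penalizes large residuals more heavily.
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r2(obs, pred):
    # Determination coefficient: 1 minus residual sum of squares over
    # total sum of squares about the observed mean.
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

obs  = [10.0, 12.0, 15.0, 18.0]   # invented observed soil temperatures, °C
pred = [11.0, 11.0, 15.0, 19.0]   # invented model predictions, °C
```

Because RMSE squares the residuals before averaging, it is always at least as large as MAE, which is why the two are reported together.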
This edited book presents essential findings in the research fields of artificial intelligence and computer vision, with a primary focus on new research ideas and results for mathematical problems involved in computer vision systems. The book provides an international forum for researchers to summarize the most recent developments and ideas in the field, with a special emphasis on the technical and observational results obtained in the past few years.
Biefeld, Eric W.; Cooper, Lynne P.
Artificial-intelligence software that automates scheduling developed in Operations Mission Planner (OMP) research project. Software used in both generation of new schedules and modification of existing schedules in view of changes in tasks and/or available resources. Approach based on iterative refinement. Although project focused upon scheduling of operations of scientific instruments and other equipment aboard spacecraft, also applicable to such terrestrial problems as scheduling production in factory.
The induction machine has enjoyed growing success for two decades, gradually replacing DC and synchronous machines in many industrial applications. This paper is devoted to the study of advanced methods applied to the control of the asynchronous machine in order to obtain a high-performance control system. While the criteria of response time, overshoot, and static error can be met by conventional control techniques, the criterion of robustness remains a challenge for researchers. This criterion can be satisfied only by applying advanced control techniques. After mathematical modeling of the asynchronous machine, the paper defines control strategies based on the orientation of the rotor flux. The results of the various simulation tests highlight the robustness properties of the proposed algorithms and allow the different control strategies to be compared.
Babu, P. Ravi; Divya, V. P. Sree
The resources for electrical energy are depleting, and hence the gap between supply and demand is continuously increasing. Under such circumstances, the option left is optimal utilization of available energy resources. The main objective of this chapter is to discuss peak load management and ways to overcome the problems associated with it in processing industries such as the milk industry, with the help of DSM techniques. The chapter presents a generalized mathematical model for minimizing the total operating cost of the industry subject to constraints. The work presented in this chapter also reports the results of applying Neural Network, Fuzzy Logic and Demand Side Management (DSM) techniques to a medium-scale milk industrial consumer in India, achieving an improvement in load factor, a reduction in Maximum Demand (MD), and savings in the consumer's energy bill.
Poirot, James L.; Norris, Cathleen A.
This first in a projected series of five articles discusses artificial intelligence and its impact on education. Highlights include the history of artificial intelligence and the impact of microcomputers; learning processes; human factors and interfaces; computer assisted instruction and intelligent tutoring systems; logic programing; and expert…
Nilsson, Nils J.
This paper presents the view that artificial intelligence (AI) is primarily concerned with propositional languages for representing knowledge and with techniques for manipulating these representations. In this respect, AI is analogous to disciplines that are applied in a variety of other subject areas. Typically, AI research is (or should be) more concerned with the general form and properties of representational languages and methods than with the content being described by these languages. Notable exceptions...
Rich, C.; Waters, R.C.
Research at the intersection of artificial intelligence and software engineering is important to both AI researchers and software engineers. For AI, programming is a domain that stimulates research in knowledge representation and automated reasoning. In software engineering, AI techniques are being applied to a new generation of programming tools. This book covers a wide spectrum of work in this area. Some of the topics covered include deductive synthesis, program verification, and transformational approaches.
Shachter, RD; Henrion, M; Lemmer, JF
This volume, like its predecessors, reflects the cutting edge of research on the automation of reasoning under uncertainty.A more pragmatic emphasis is evident, for although some papers address fundamental issues, the majority address practical issues. Topics include the relations between alternative formalisms (including possibilistic reasoning), Dempster-Shafer belief functions, non-monotonic reasoning, Bayesian and decision theoretic schemes, and new inference techniques for belief nets. New techniques are applied to important problems in medicine, vision, robotics, and natural language und
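Among the formalisms surveyed, Dempster-Shafer belief functions are compact enough to illustrate. Below, Dempster's rule of combination is sketched for a two-element frame of discernment; the mass assignments are invented for the example. Masses attach to subsets of the frame, and mass falling on an empty intersection (conflict) is renormalized away.

```python
# Dempster's rule of combination over subsets represented as frozensets.
def combine(m1, m2):
    out, conflict = {}, 0.0
    for b, w1 in m1.items():
        for c, w2 in m2.items():
            inter = b & c
            if inter:                               # mass flows to intersection
                out[inter] = out.get(inter, 0.0) + w1 * w2
            else:                                   # empty intersection: conflict
                conflict += w1 * w2
    norm = 1.0 - conflict                           # renormalize surviving mass
    return {a: w / norm for a, w in out.items()}

A, B = frozenset("a"), frozenset("b")
m1 = {A: 0.6, A | B: 0.4}          # one source: 0.6 on {a}, 0.4 uncommitted
m2 = {B: 0.3, A | B: 0.7}          # another: 0.3 on {b}, 0.7 uncommitted
m12 = combine(m1, m2)
```

The 0.18 of mass landing on the empty set (0.6 × 0.3, the two sources directly contradicting) is discarded and the remainder rescaled, so the combined masses again sum to one.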
Prince, Mary Ellen
Artificial intelligence (AI) is a growing field which is just beginning to make an impact on disciplines other than computer science. While a number of military and commercial applications have been undertaken in recent years, few attempts have been made to apply AI techniques to basic scientific research. There is no inherent reason for the discrepancy. The characteristics of the problem, rather than its domain, determine whether or not it is suitable for an AI approach. Expert systems, intelligent tutoring systems, and learning programs are examples of theoretical topics which can be applied to certain areas of scientific research. Further research and experimentation should eventually make it possible for computers to act as intelligent assistants to scientists.
Artificial intelligence (AI) is defined and related to intelligent computer-assisted instruction (ICAI) and science education. Modeling the student, the teacher, and the natural environment are discussed as important parts of ICAI and the concept of microworlds as a powerful tool for science education is presented. Optimistic predictions about ICAI are tempered with the complex, persistent problems of: 1) teaching and learning as a soft or fuzzy knowledge base, 2) natural language processing, and 3) machine learning. The importance of accurate diagnosis of a student's learning state, including misconceptions and naive theories about nature, is stressed and related to the importance of accurate diagnosis by a physician. Based on the cognitive science/AI paradigm, a revised model of the well-known Karplus/Renner learning cycle is proposed.
Schorr, Herbert; Rappaport, Alain
Papers concerning applications of artificial intelligence are presented, covering applications in aerospace technology, banking and finance, biotechnology, emergency services, law, media planning, music, the military, operations management, personnel management, retail packaging, and manufacturing assembly and design. Specific topics include Space Shuttle telemetry monitoring, an intelligent training system for Space Shuttle flight controllers, an expert system for the diagnostics of manufacturing equipment, a logistics management system, a cooling systems design assistant, and a knowledge-based integrated circuit design critic. Additional topics include a hydraulic circuit design assistant, the use of a connector assembly specification expert system to harness detailed assembly process knowledge, a mixed initiative approach to airlift planning, naval battle management decision aids, an inventory simulation tool, a peptide synthesis expert system, and a system for planning the discharging and loading of container ships.
Anken, Craig S.
The Advanced Artificial Intelligence Technology Testbed (AAITT) is a laboratory testbed for the design, analysis, integration, evaluation, and exercising of large-scale, complex, software systems, composed of both knowledge-based and conventional components. The AAITT assists its users in the following ways: configuring various problem-solving application suites; observing and measuring the behavior of these applications and the interactions between their constituent modules; gathering and analyzing statistics about the occurrence of key events; and flexibly and quickly altering the interaction of modules within the applications for further study.
Moore, Jason H; Hill, Doug P
Here we introduce an artificial intelligence (AI) methodology for detecting and characterizing epistasis in genetic association studies. The ultimate goal of our AI strategy is to analyze genome-wide genetics data as a human would, using sources of expert knowledge as a guide. The methodology presented here is based on computational evolution, which is a type of genetic programming. The ability to generate interesting solutions while at the same time learning how to solve the problem at hand distinguishes computational evolution from other genetic programming approaches. We provide a general overview of this approach and then present a few examples of its application to real data.
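The evolutionary-search idea behind this kind of epistasis detection can be sketched in miniature (an illustrative toy, not the authors' computational-evolution system; the XOR phenotype, SNP count, and genetic operators are all assumptions):

```python
import random

def make_data(n=400, n_snps=10, pair=(3, 7), seed=1):
    """Synthetic 0/1 genotypes whose phenotype is a purely epistatic XOR."""
    rng = random.Random(seed)
    X = [[rng.randint(0, 1) for _ in range(n_snps)] for _ in range(n)]
    y = [row[pair[0]] ^ row[pair[1]] for row in X]
    return X, y

def fitness(pair, X, y):
    """Fraction of individuals whose phenotype matches XOR of the SNP pair."""
    i, j = pair
    return sum((row[i] ^ row[j]) == t for row, t in zip(X, y)) / len(y)

def evolve(X, y, pop_size=20, generations=40, seed=2):
    """Toy evolutionary search over SNP pairs: elitism, mutation, immigrants."""
    rng = random.Random(seed)
    n_snps = len(X[0])
    def rand_pair():
        return tuple(rng.sample(range(n_snps), 2))
    pop = [rand_pair() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, X, y), reverse=True)
        elite = pop[:pop_size // 2]                        # keep the best half
        mutants = [(k, j) for i, j in elite
                   if (k := rng.randrange(n_snps)) != j]   # point mutations
        immigrants = [rand_pair() for _ in range(pop_size // 2)]  # fresh pairs
        pop = elite + mutants + immigrants
    return frozenset(max(pop, key=lambda p: fitness(p, X, y)))
```

On this synthetic data, only the true interacting pair achieves perfect fitness, so the elitist loop converges to it; real computational-evolution systems additionally evolve their own operators and incorporate expert knowledge.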
Hill, Gary C.
Designer and design team productivity improves with skill, experience, and the tools available. The design process involves numerous trials and errors, analyses, refinements, and addition of details. Computerized tools have greatly speeded the analysis, and now new theories and methods, emerging under the label Artificial Intelligence (AI), are being used to automate skill and experience. These tools improve designer productivity by capturing experience, emulating recognized skillful designers, and making the essence of complex programs easier to grasp. This paper outlines the aircraft design process in today's technology and business climate, presenting some of the challenges ahead and some of the promising AI methods for meeting these challenges.
Tomorrow begins right here as we embark on an enthralling and jargon-free journey into the world of computers and the inner recesses of the human mind. Readers encounter everything from the nanotechnology used to make insect-like robots, to computers that perform surgery, in addition to discovering the biggest controversies to dog the field of AI. Blay Whitby is a Lecturer on Cognitive Science and Artificial Intelligence at the University of Sussex UK. He is the author of two books and numerous papers.
Parkes, David C; Wellman, Michael P
The field of artificial intelligence (AI) strives to build rational agents capable of perceiving the world around them and taking actions to advance specified goals. Put another way, AI researchers aim to construct a synthetic homo economicus, the mythical perfectly rational agent of neoclassical economics. We review progress toward creating this new species of machine, machina economicus, and discuss some challenges in designing AIs that can reason effectively in economic contexts. Supposing that AI succeeds in this quest, or at least comes close enough that it is useful to think about AIs in rationalistic terms, we ask how to design the rules of interaction in multi-agent systems that come to represent an economy of AIs. Theories of normative design from economics may prove more relevant for artificial agents than human agents, with AIs that better respect idealized assumptions of rationality than people, interacting through novel rules and incentive systems quite distinct from those tailored for people.
Today the most commonly used techniques for credit scoring are artificial intelligence and statistics. In this paper, we propose a new way to combine these two kinds of models. Logistic regression first filters out variables with a high degree of correlation; the artificial intelligence models then have reduced complexity and faster convergence, while hybridizing these models with logistic regression gives better statistical explanations, thus improving the effect of the artificial intelligence models. In experiments on the German data set, we find an interesting phenomenon, which we define as 'Dimensional interference', with the support vector machine, and cross validation shows that the new method is of considerable help with credit scoring.
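The filter-then-score pipeline can be sketched as follows (a minimal stand-in: a plain correlation filter and batch-gradient logistic regression replace the paper's hybrid logistic-regression/SVM models; the data and thresholds are synthetic):

```python
import numpy as np

def drop_correlated(X, threshold=0.9):
    """Greedily keep features that are not highly correlated with a kept one."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Logistic regression by plain batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Synthetic "applicant" data: two informative features plus a near-duplicate.
rng = np.random.default_rng(0)
A = rng.normal(size=(300, 2))
X = np.column_stack([A, A[:, 0] + 0.01 * rng.normal(size=300)])  # redundant copy
y = (A[:, 0] + A[:, 1] > 0).astype(float)                        # "good credit"

kept = drop_correlated(X)                 # the redundant third column is dropped
w, b = fit_logistic(X[:, kept], y)
accuracy = np.mean(((X[:, kept] @ w + b) > 0).astype(float) == y)
```

Filtering the redundant column before fitting mirrors the paper's point: removing highly correlated variables reduces model complexity without hurting predictive accuracy.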
Gives a brief outline of the development of Artificial Intelligence in Education (AIED) which includes psychology, education, cognitive science, computer science, and artificial intelligence. Highlights include learning environments; learner modeling; a situated approach to learning; and current examples of AIED research. (LRW)
Alsinet, Teresa; Puyol-Gruart, Josep; Torras, Carme
Artificial Intelligence Research and Development. Proceedings of the 11th International Conference of the Catalan Association for Artificial Intelligence. Volume 184 Frontiers in Artificial Intelligence and Applications Peer Reviewed
Han, The Anh
This original and timely monograph describes a unique self-contained excursion that reveals to the readers the roles of two basic cognitive abilities, i.e. intention recognition and arranging commitments, in the evolution of cooperative behavior. This book analyses intention recognition, an important ability that helps agents predict others’ behavior, in its artificial intelligence and evolutionary computational modeling aspects, and proposes a novel intention recognition method. Furthermore, the book presents a new framework for intention-based decision making and illustrates several ways in which an ability to recognize intentions of others can enhance a decision making process. By employing the new intention recognition method and the tools of evolutionary game theory, this book introduces computational models demonstrating that intention recognition promotes the emergence of cooperation within populations of self-regarding agents. Finally, the book describes how commitment provides a pathway to the evol...
A majority of the research performed today explores artificial intelligence in smart homes using a centralized approach, where a smart home server performs the necessary calculations. This approach has some disadvantages that can be overcome by shifting to a distributed approach, where the artificial intelligence system is implemented as distributed agents, each running part of the system. This paper presents a distributed smart home architecture that distributes artificial intelligence in smart homes and discusses the pros and cons of such a concept. The presented distributed model is a layered model. Each layer offers a different complexity level of the embedded distributed artificial intelligence. At the lowest layer are smart objects: small, cheap, embedded microcontroller-based smart devices powered by batteries. The next layer contains a more...
Szolovits, P; Patil, R S; Schwartz, W B
In an attempt to overcome limitations inherent in conventional computer-aided diagnosis, investigators have created programs that simulate expert human reasoning. Hopes that such a strategy would lead to clinically useful programs have not been fulfilled, but many of the problems impeding creation of effective artificial intelligence programs have been solved. Strategies have been developed to limit the number of hypotheses that a program must consider and to incorporate pathophysiologic reasoning. The latter innovation permits a program to analyze cases in which one disorder influences the presentation of another. Prototypes embodying such reasoning can explain their conclusions in medical terms that can be reviewed by the user. Despite these advances, further major research and developmental efforts will be necessary before expert performance by the computer becomes a reality.
Many space station processes are highly complex systems subject to sudden, major transients. In any complex process control system, a critical aspect of the human/machine interface is the analysis and display of process information. Human operators can be overwhelmed by large clusters of alarms that inhibit their ability to diagnose and respond to a disturbance. Using artificial intelligence techniques and a knowledge base approach to this problem, the power of the computer can be used to filter and analyze plant sensor data. This will provide operators with a better description of the process state. Once a process state is recognized, automatic action could be initiated and proper system response monitored.
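The alarm-filtering idea can be illustrated with a toy knowledge base (all alarm names, rules, and diagnoses below are hypothetical, not from any actual space station system):

```python
# Hypothetical rule base: each rule maps a set of co-occurring alarms to a
# single diagnosed process state, so operators see one message, not a flood.
RULES = [
    ({"coolant_flow_low", "pump_current_zero"}, "coolant pump failure"),
    ({"coolant_flow_low", "core_temp_high"}, "loss of cooling"),
    ({"bus_voltage_low", "battery_discharging"}, "power bus fault"),
]

def diagnose(active_alarms):
    """Return the diagnosis of the most specific rule whose alarms all fired."""
    matches = [(len(pattern), state) for pattern, state in RULES
               if pattern <= active_alarms]
    if not matches:
        return "no known pattern; raw alarms: " + ", ".join(sorted(active_alarms))
    return max(matches)[1]
```

A recognized state could then trigger automatic action, with unmatched alarm sets falling back to the raw display for the operator.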
Software tool for resolution of inverse problems using artificial intelligence techniques: an application in neutron spectrometry; Herramienta en software para resolucion de problemas inversos mediante tecnicas de inteligencia artificial: una aplicacion en espectrometria neutronica
Castaneda M, V. H.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R.; Leon P, A. A.; Hernandez P, C. F.; Espinoza G, J. G.; Ortiz R, J. M.; Vega C, H. R. [Universidad Autonoma de Zacatecas, 98000 Zacatecas, Zac. (Mexico); Mendez, R. [CIEMAT, Departamento de Metrologia de Radiaciones Ionizantes, Laboratorio de Patrones Neutronicos, Av. Complutense 22, 28040 Madrid (Spain); Gallego, E. [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, C. Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Sousa L, M. A. [Comision Nacional de Energia Nuclear, Centro de Investigacion de Tecnologia Nuclear, Av. Pte. Antonio Carlos 6627, Pampulha, 31270-901 Belo Horizonte, Minas Gerais (Brazil)
The Taguchi methodology has proved to be highly efficient at solving inverse problems, in which the values of some parameters of a model must be obtained from observed data. Intrinsic mathematical characteristics make a problem an inverse one, and such problems appear in many branches of science, engineering and mathematics. Researchers have used different techniques to solve this type of problem, and recently techniques based on Artificial Intelligence are being explored. This paper presents a software tool based on generalized regression artificial neural networks for the solution of inverse problems, with application in high energy physics, specifically the problem of neutron spectrometry. The tool, developed in the Matlab programming environment, offers a friendly, intuitive and easy-to-use interface. It solves the inverse problem involved in reconstructing a neutron spectrum from measurements made with a Bonner spheres spectrometric system; given this information, the neural network reconstructs the neutron spectrum with high performance and generalization capability. The end user requires no extensive training or technical knowledge in software development, which facilitates the use of the program for the resolution of inverse problems arising in several areas of knowledge. Artificial Intelligence techniques are well suited to inverse problems, given the characteristics of artificial neural networks and their network topology; the tool has therefore been very useful, since the results generated by the Artificial Neural Network require little time compared to other techniques and agree with the actual data of the experiment. (Author)
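The core of a generalized regression neural network is compact enough to sketch (an illustrative toy inverse problem, not the neutron-spectrometry tool itself; the forward model, noise level, and smoothing parameter are assumptions):

```python
import numpy as np

class GRNN:
    """Generalized regression neural network: a kernel-weighted average of
    stored training targets (Nadaraya-Watson regression)."""
    def __init__(self, sigma=0.1):
        self.sigma = sigma
    def fit(self, X, Y):
        self.X, self.Y = np.asarray(X, float), np.asarray(Y, float)
        return self
    def predict(self, Xq):
        preds = []
        for x in np.atleast_2d(Xq):
            d2 = np.sum((self.X - x) ** 2, axis=1)       # squared distances
            w = np.exp(-d2 / (2.0 * self.sigma ** 2))    # RBF pattern-layer weights
            preds.append(w @ self.Y / w.sum())           # normalized weighted sum
        return np.array(preds)

# Toy inverse problem: recover the parameter x from measurements y = f(x) + noise.
rng = np.random.default_rng(0)
x_true = rng.uniform(0.0, 1.0, size=(200, 1))
y_meas = np.column_stack([np.sin(3 * x_true[:, 0]),
                          np.cos(2 * x_true[:, 0])]) + 0.01 * rng.normal(size=(200, 2))

model = GRNN(sigma=0.05).fit(y_meas, x_true)     # learn measurement -> parameter
query = np.array([[np.sin(1.5), np.cos(1.0)]])   # noiseless measurement of x = 0.5
x_hat = model.predict(query)[0, 0]
```

Because a GRNN has no iterative training phase, fitting is instantaneous, which is one reason this family of networks suits interactive reconstruction tools.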
With the considerable increase of AI applications, AI is being increasingly used to solve optimization problems in engineering, and in the past two decades the applications of artificial intelligence in power systems have attracted much research. This book covers the current level of applications of artificial intelligence to the optimization problems in power systems. It serves as a textbook for graduate students in electric power system management and is also useful for those who are interested in using artificial intelligence in power system optimization.
An artificial-intelligence system uses machine learning from massive training sets to teach itself to play 49 classic computer games, demonstrating that it can adapt to a variety of tasks. See Letter p.529
This book examines the application of artificial intelligence methods to model economic data. It addresses causality and proposes new frameworks for dealing with this issue. It also applies evolutionary computing to model evolving economic environments.
Dugel-Whitehead, Norma R.
This talk will present the work which has been done at NASA Marshall Space Flight Center involving the use of Artificial Intelligence to control the power system in a spacecraft. The presentation will include a brief history of power system automation and some basic definitions of the types of artificial intelligence which have been investigated at MSFC for power system automation. A videotape of one of our autonomous power systems, using cooperating expert systems and advanced hardware, will be presented.
A document consisting mostly of lecture slides presents overviews of artificial-intelligence-based control methods now under development for application to robotic aircraft [called Unmanned Aerial Vehicles (UAVs) in the paper] and spacecraft and to the next generation of flight controllers for piloted aircraft. Following brief introductory remarks, the paper presents background information on intelligent control, including basic characteristics defining intelligent systems and intelligent control and the concept of levels of intelligent control. Next, the paper addresses several concepts in intelligent flight control. The document ends with some concluding remarks, including statements to the effect that (1) intelligent control architectures can guarantee stability of inner control loops and (2) for UAVs, intelligent control provides a robust way to accommodate an outer-loop control architecture for planning and/or related purposes.
The historical origin of Artificial Intelligence (AI) is usually placed at the Dartmouth Conference of 1956, but we can find many more arcane origins. In more recent times, we can also consider very great thinkers such as Janos Neumann (John von Neumann after his arrival in the USA), Norbert Wiener, Alan Mathison Turing, or Lotfi Zadeh, for instance [12, 14]. AI frequently requires Logic, but its Classical version shows too many insufficiencies, so it was necessary to introduce more sophisticated tools, such as Fuzzy Logic, Modal Logic, Non-Monotonic Logic and so on [1, 2]. Among the things that AI needs to represent are categories, objects, properties, relations between objects, situations, states, time, events, causes and effects, knowledge about knowledge, and so on. The problems in AI can be classified into two general types [3, 5]: search problems and representation problems. On this last "peak" there exist different ways to reach the summit: Logics, Rules, Frames, Associative Nets, Scripts, and so on, many of them interconnected. We attempt, in this paper, a panoramic vision of the scope of application of such representation methods in AI. The two most disputable questions of both modern philosophy of mind and AI are perhaps the Turing Test and the Chinese Room Argument; to elucidate these very difficult questions, see our final note.
Over the coming decades, Artificial Intelligence will profoundly impact the way we work and live. Whose interests should such systems serve? What limits should we place on their use? This book is a succinct introduction to the complex social, ethical, legal, and economic issues raised by the emergence of intelligent machines.
Technology Teacher, 1987
Introduces the concept of artificial intelligence, discusses where it is currently used, and describes an expert computer system that can be used in the technology laboratory. Included is a learning activity that describes ideas for using intelligent computers as problem-solving tools. (Author/CH)
Stewart, Helen (Editor)
This report contains information on the activities of the Artificial Intelligence Research Branch (FIA) at NASA Ames Research Center (ARC) in 1992, as well as planned work in 1993. These activities span a range from basic scientific research through engineering development to fielded NASA applications, particularly those applications that are enabled by basic research carried out in FIA. Work is conducted in-house and through collaborative partners in academia and industry. All of our work has research themes with a dual commitment to technical excellence and applicability to NASA short, medium, and long-term problems. FIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at the Jet Propulsion Laboratory (JPL) and AI applications groups throughout all NASA centers. This report is organized along three major research themes: (1) Planning and Scheduling: deciding on a sequence of actions to achieve a set of complex goals and determining when to execute those actions and how to allocate resources to carry them out; (2) Machine Learning: techniques for forming theories about natural and man-made phenomena and for improving the problem-solving performance of computational systems over time; and (3) research on the acquisition, representation, and utilization of knowledge in support of diagnosis, design of engineered systems, and analysis of actual systems.
Jothiprakash, V.; Magar, R. B.
Summary: In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS) and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect and combined models are developed with lumped and distributed input data, and model performance is evaluated using various performance criteria. From the results, it is found that the LGP models are superior to the ANN and ANFIS models, especially in predicting the peak inflows for both daily and hourly time-steps. A detailed comparison of the overall performance indicated that the combined input model (rainfall together with inflow) performed better in both lumped and distributed input data modelling. The lumped input data models performed slightly better because, apart from reducing the noise in the data, they benefited from better techniques and training approaches, appropriate selection of network architecture, required inputs, and training-testing ratios of the data set. The slightly poorer performance of the distributed data models is due to large variations and a smaller number of observed values.
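The multi-time-step-ahead setup can be sketched with a lagged design matrix (ordinary least squares stands in for the ANN/ANFIS/LGP models; the synthetic AR(2) "inflow" series and the lag/horizon choices are assumptions):

```python
import numpy as np

def lagged_design(series, n_lags, horizon):
    """Rows are the last n_lags observed values; target is horizon steps ahead."""
    X, y = [], []
    for t in range(n_lags - 1, len(series) - horizon):
        X.append(series[t - n_lags + 1 : t + 1])
        y.append(series[t + horizon])
    return np.array(X), np.array(y)

# Synthetic "inflow": a stable noisy AR(2) process stands in for observed discharge.
rng = np.random.default_rng(0)
q = np.zeros(500)
for t in range(2, 500):
    q[t] = 1.6 * q[t - 1] - 0.7 * q[t - 2] + rng.normal(scale=0.1)

X, y = lagged_design(q, n_lags=4, horizon=2)   # predict 2 steps ahead
X = np.column_stack([X, np.ones(len(X))])      # bias column
split = 400
w, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
rmse = np.sqrt(np.mean((X[split:] @ w - y[split:]) ** 2))
```

The same design matrix feeds any of the model families compared in the study; only the mapping fitted from the lagged inputs to the multi-step-ahead target changes.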
Liu, Chen-Ching; Edris, Abdel-Aty
Provides insight on both classical means and new trends in the application of power electronic and artificial intelligence techniques in power system operation and control This book presents advanced solutions for power system controllability improvement, transmission capability enhancement and operation planning. The book is organized into three parts. The first part describes the CSC-HVDC and VSC-HVDC technologies, the second part presents the FACTS devices, and the third part refers to the artificial intelligence techniques. All technologies and tools approached in this book are essential for power system development to comply with the smart grid requirements.
The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in the book (Hutter, 2005), an exciting, sound and complete mathematical model for a super intelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (Legg, 2008) provided the philosophical embedding and investigated the UAI-based universal measure of rational intelligence, which is formal, objective and non-anthropocentric. Recently, effective approximations of AIXI have been derived and experimentally investigated in the JAIR paper (Veness et al., 2011). This practical breakthrough has resulted in some impressive applications, finally muting earlier critique that UAI is only a theory. For the first time, wit...
Workman, Gary L.; Kaukler, William F.
Materials science and engineering provides a vast arena for applications of artificial intelligence. Advanced materials research is an area in which challenging requirements confront the researcher, from the drawing board through production and into service. Advanced techniques result in the development of new materials for specialized applications. Hand-in-hand with these new materials come requirements for state-of-the-art inspection methods to determine the integrity or fitness for service of structures fabricated from them. Two problems of current interest to the Materials Processing Laboratory at UAH are an expert system to assist in eddy current inspection of graphite epoxy components for aerospace and an expert system to assist in the design of superalloys for high temperature applications. Each project requires a different approach to reach the defined goals. Results to date are described for the eddy current analysis, but only the original concepts and approaches considered are given for the expert system to design superalloys.
Malluhi, Qutaibah; Gonzalez, Sara; Bocewicz, Grzegorz; Bucciarelli, Edgardo; Giulioni, Gianfranco; Iqba, Farkhund
The 12th International Symposium on Distributed Computing and Artificial Intelligence 2015 (DCAI 2015) is a forum to present applications of innovative techniques for studying and solving complex problems. The exchange of ideas between scientists and technicians from both the academic and industrial sector is essential to facilitate the development of systems that can meet the ever-increasing demands of today’s society. The present edition brings together past experience, current work and promising future trends associated with distributed computing, artificial intelligence and their application in order to provide efficient solutions to real problems. This symposium is organized by the Osaka Institute of Technology, Qatar University and the University of Salamanca.
Neves, José; Rodriguez, Juan; Santana, Juan; Gonzalez, Sara
The International Symposium on Distributed Computing and Artificial Intelligence 2013 (DCAI 2013) is a forum in which applications of innovative techniques for solving complex problems are presented. Artificial intelligence is changing our society. Its application in distributed environments, such as the internet, electronic commerce, environment monitoring, mobile communications, wireless devices, distributed computing, to mention only a few, is continuously increasing, becoming an element of high added value with social and economic potential, in industry, quality of life, and research. This conference is a stimulating and productive forum where the scientific community can work towards future cooperation in Distributed Computing and Artificial Intelligence areas. These technologies are changing constantly as a result of the large research and technical effort being undertaken in both universities and businesses. The exchange of ideas between scientists and technicians from both the academic and industry se...
Santana, Juan; González, Sara; Molina, Jose; Bernardos, Ana; Rodríguez, Juan; DCAI 2012; International Symposium on Distributed Computing and Artificial Intelligence 2012
The International Symposium on Distributed Computing and Artificial Intelligence 2012 (DCAI 2012) is a stimulating and productive forum where the scientific community can work towards future cooperation in Distributed Computing and Artificial Intelligence areas. This conference is a forum in which applications of innovative techniques for solving complex problems will be presented. Artificial intelligence is changing our society. Its application in distributed environments, such as the internet, electronic commerce, environment monitoring, mobile communications, wireless devices, distributed computing, to mention only a few, is continuously increasing, becoming an element of high added value with social and economic potential, in industry, quality of life, and research. These technologies are changing constantly as a result of the large research and technical effort being undertaken in both universities and businesses. The exchange of ideas between scientists and technicians from both the academic and indus...
James, Alex Pappachen
We briefly introduce the memory based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low level intelligence tasks and aim at providing scalable solutions to high level intelligence problems such as sparse coding and contextual processing.
Artificial intelligence (AI) techniques and virtual reality (VR) make possible powerful interactive stories, and this paper focuses on examples of virtual characters in three dimensional (3-D) worlds. Waldern, a virtual reality game designer, has theorized about and implemented software design of virtual teammates and opponents that incorporate AI…
This book gives an overview of methods developed in artificial intelligence for search, learning, problem solving and decision-making. It gives an overview of algorithms and architectures of artificial intelligence that have reached the degree of maturity when a method can be presented as an algorithm, or when a well-defined architecture is known, e.g. in neural nets and intelligent agents. It can be used as a handbook for a wide audience of application developers who are interested in using artificial intelligence methods in their software products. Parts of the text are rather independent, so that one can look into the index and go directly to a description of a method presented in the form of an abstract algorithm or an architectural solution. The book can be used also as a textbook for a course in applied artificial intelligence. Exercises on the subject are added at the end of each chapter. Neither programming skills nor specific knowledge in computer science are expected from the reader. However, some p...
Although many texts exist offering an introduction to artificial intelligence (AI), this book is unique in that it places an emphasis on knowledge representation (KR) concepts. It includes small-scale implementations in PROLOG to illustrate the major KR paradigms and their developments. Back cover copy: Knowledge representation is at the heart of the artificial intelligence enterprise: anyone writing a program which seeks to work by encoding and manipulating knowledge needs to pay attention to the scheme whereby he will represent the knowledge, and to be aware of the consequences of the ch
Banerjee, R; Bradshaw, Gary; Carbonell, Jaime Guillermo; Mitchell, Tom Michael; Michalski, Ryszard Spencer
Machine Learning: An Artificial Intelligence Approach contains tutorial overviews and research papers representative of trends in the area of machine learning as viewed from an artificial intelligence perspective. The book is organized into six parts. Part I provides an overview of machine learning and explains why machines should learn. Part II covers important issues affecting the design of learning programs-particularly programs that learn from examples. It also describes inductive learning systems. Part III deals with learning by analogy, by experimentation, and from experience. Parts IV a
The human brain can solve highly abstract reasoning problems using a neural network that is entirely physical. The underlying mechanisms are only partially understood, but an artificial network provides valuable insight. See Article p.471
Strong Artificial Intelligence and National Security (final report). Prominent business and science leaders believe that technological advances will soon allow humankind to develop artificial intelligence (AI) that... its potential strategic pitfalls. Subject terms: Artificial Intelligence.
Abstraction is a fundamental mechanism underlying both human and artificial perception, representation of knowledge, reasoning and learning. This mechanism plays a crucial role in many disciplines, notably Computer Programming, Natural and Artificial Vision, Complex Systems, Artificial Intelligence and Machine Learning, Art, and Cognitive Sciences. This book first provides the reader with an overview of the notions of abstraction proposed in various disciplines by comparing both commonalities and differences. After discussing the characterizing properties of abstraction, a formal model, the K
Dr. N. B. Chaphalkar
Real properties possess value which depends on numerous factors, and investors and owners of a property are interested in the maximum returns it would fetch. Considering the amount of money involved in real estate, there is a need for accurate prediction of returns and of the associated risks, which necessitates the use of Artificial Intelligence (AI) prediction models. This study attempts to analyze and summarize AI techniques, giving insight into the application of various techniques for prediction related to property valuation. A comparison of the techniques shows that Artificial Neural Networks (ANN) and fuzzy logic are better suited if attributes and model parameters are appropriately selected.
Schwuttke, Ursula M.
"Dynamic tradeoff evaluation" (DTE) denotes proposed method and procedure for restructuring problem-solving strategies in artificial intelligence to satisfy need for timely responses to changing conditions. Detects situations in which optimal problem-solving strategies cannot be pursued because of real-time constraints, and effects tradeoffs among nonoptimal strategies in such way to minimize adverse effects upon performance of system.
McConnell, Barry A.; McConnell, Nancy J.
Discussion of the history and development of artificial intelligence (AI) highlights a bibliography of introductory books on various aspects of AI, including AI programing; problem solving; automated reasoning; game playing; natural language; expert systems; machine learning; robotics and vision; critics of AI; and representative software. (LRW)
Software engineering and artificial intelligence are two fields of computer science. During the last decades, the disciplines of Artificial Intelligence (AI) and Software Engineering (SE) have developed separately, without much exchange of research outcomes. Both fields have distinct characteristics, benefits, and limitations, which opens many possibilities for research. One idea is to apply the available methods, tools, and techniques of each field to the other, so that the strengths of both are taken up and their limitations are reduced. In doing so, an intersection area between AI and SE is found, which forms the relation between them. This paper discusses the factors that arise in communication between AI and SE, such as communication, objectives, problems, and reasons for adoption. It explores a framework of interaction through which the two fields communicate, comprising four major classes: software support environments, AI tools and techniques in conventional software, use of conventional software technology, and methodological considerations. The paper introduces the relation between AI and SE and the various techniques that have evolved from merging them.
Cristani, Matteo; Karafili, Erisa; Tomazzoli, Claudio
Energy saving is one of the most challenging aspects of modern ambient intelligence technologies, for both domestic and business usages. In this paper we show how to combine Ambient Intelligence and Artificial Intelligence techniques to solve the problem of scheduling a set of devices under a given … for Ambient Intelligence to a specific framework and exhibit a sample usage for a real-life system, Elettra, that is in use in an industrial context.
Techniques of artificial intelligence applied to the electric power expansion distribution system planning problem; Tecnicas de inteligencia artificial aplicadas ao problema de planejamento da expansao do sistema de distribuicao de energia eletrica
Froes, Salete Maria
A tool named Constrained Decision Problem (CDP), which is based on Artificial Intelligence and a specific application to Distribution System Planning is described. The CDP allows multiple objective optimization that does not, necessarily, result in a single optimal solution. First, a literature review covers published works related to Artificial Intelligence applications to Electric Power Distribution Systems, emphasizing feeder restoration and reconfiguration. Some concepts related to Artificial Intelligence are described, with particular attention to Planning and to Constrained Decision Problems. Following, an Electric Power System planning model is addressed by using the CDP tool. Some case studies illustrate the Distribution Planning model, which are compared with standard optimization models. Concluding, some comments establishing the possibilities of CDP applications are followed by a view on future developments. (author)
[Extraction-damaged fragment. Recoverable content: a note that a very large database, far beyond human capacity to fully understand or translate, was used to develop crisp rules; a citation to Ruston M. Hunt, Richard L. Henneman, and William B. Rouse, "Characterizing the Development of Human Expertise in Simulated Fault Diagnosis"; and a list of application areas: information reduction, missile guidance, robotic tanks, and intelligence (language translation, interpretation based on dialects).]
Nilsson, Nils J
Intelligent agents are employed as the central characters in this new introductory text. Beginning with elementary reactive agents, Nilsson gradually increases their cognitive horsepower to illustrate the most important and lasting ideas in AI. Neural networks, genetic programming, computer vision, heuristic search, knowledge representation and reasoning, Bayes networks, planning, and language understanding are each revealed through the growing capabilities of these agents. The book provides a refreshing and motivating new synthesis of the field by one of AI's master expositors and leading re
The book presents new clustering schemes, dynamical systems and pattern recognition algorithms in geophysical, geodynamical and natural hazard applications. The original mathematical technique is based on both classical and fuzzy sets models. Geophysical and natural hazard applications are mostly original. However, the artificial intelligence technique described in the book can be applied far beyond the limits of Earth science applications. The book is intended for research scientists, tutors, graduate students, scientists in geophysics and engineers
Moja, D. C.
Artificial Intelligence (AI) is currently being used for business-oriented, money-making applications, such as medical diagnosis, computer system configuration, and geological exploration. The present paper has the objective to assess new AI tools and techniques which will be available to assist aerospace managers in the accomplishment of their tasks. A study conducted by Brown and Cheeseman (1983) indicates that AI will be employed in all traditional management areas, taking into account goal setting, decision making, policy formulation, evaluation, planning, budgeting, auditing, personnel management, training, legal affairs, and procurement. Artificial intelligence/expert systems are discussed, giving attention to the three primary areas concerned with intelligent robots, natural language interfaces, and expert systems. Aspects of information retrieval are also considered along with the decision support system, and expert systems for project planning and scheduling.
Artificial intelligence is moving to the next step in its development and application areas. From electronic games to human-like robots, the AI toy is a good candidate for this next step. Technology-based design fits the development of AI toys: it can exert the advantages and explore more … value for existing resources. It combines AI programs with common sensors to realize intelligent input and output. Designers can use technology-based criteria for design, and need to consider the possible issues in this new field. All of these aspects can be referenced from electronic game…
Lund, Henrik Hautop; Mayoh, Brian Henry; Perram, John
The book covers the seventh Scandinavian Conference on Artificial Intelligence, held at the Maersk Mc-Kinney Moller Institute for Production Technology at the University of Southern Denmark on 20-21 February 2001. It continues the tradition established by SCAI of being one of the most important regional AI conferences in Europe, attracting high-quality submissions from Scandinavia and the rest of the world, including the Baltic countries. The contents include robotics, sensor/motor intelligence, evolutionary robotics, behaviour-based systems, multi-agent systems, applications…
Sergey F. Sergeev
In the present article we show the link between both artificial and natural intelligence and a system's complexity during its life cycle. The autopoietic type of living systems determines the differences between natural and artificial intelligence, and artificial environments influence the development of intelligence abilities. We present the «diffusion intellect» concept, in which the diffusion intellect is considered a synergistic unity of natural and artificial intellect in organized environments.
Briegel, Hans J
We propose a notion of a learning agent whose interaction with the environment is governed by a simulation-based projection, which allows the agent to project itself into future situations before it takes real action. Projective simulation is based on a random walk through a network of clips, which are elementary patches of episodic memory. The network of clips changes dynamically, both due to new perceptual input and due to certain compositional principles of the simulation process. During simulation, the clips are screened for specific features which trigger factual action of the agent. The scheme is different from other, computational, notions of simulation, and it provides a new element in an embodied-cognitive-science approach to intelligent action and learning. While the scheme works entirely classically, it also provides a natural route for generalization to quantum-mechanical operation.
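The core mechanism described above, a random walk through a network of clips that triggers factual action when certain features are hit, can be sketched in a few lines. The clip network, the transition weights, and the action-flag convention below are illustrative assumptions, not the authors' implementation:

```python
import random

# Clip network: each clip lists (next_clip, weight) transitions.
# Clips whose name starts with "act:" stand for factual actions.
clips = {
    "percept": [("memory1", 2.0), ("memory2", 1.0)],
    "memory1": [("act:left", 1.0), ("memory2", 1.0)],
    "memory2": [("act:right", 3.0), ("memory1", 1.0)],
}

def project(start, rng, max_hops=50):
    """Random walk over the clip network until an action clip is hit."""
    clip = start
    for _ in range(max_hops):
        if clip.startswith("act:"):
            return clip  # simulation triggers real action
        nxt, weights = zip(*clips[clip])
        clip = rng.choices(nxt, weights=weights)[0]
    return None  # no action triggered within the hop budget

rng = random.Random(0)
print(project("percept", rng))
```

With a fixed seed the walk is reproducible; in the full scheme the transition weights would additionally be updated by learning, and new clips would be composed dynamically from perceptual input.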
Briegel, Hans J.; De las Cuevas, Gemma
We propose a model of a learning agent whose interaction with the environment is governed by a simulation-based projection, which allows the agent to project itself into future situations before it takes real action. Projective simulation is based on a random walk through a network of clips, which are elementary patches of episodic memory. The network of clips changes dynamically, both due to new perceptual input and due to certain compositional principles of the simulation process. During simulation, the clips are screened for specific features which trigger factual action of the agent. The scheme is different from other, computational, notions of simulation, and it provides a new element in an embodied cognitive science approach to intelligent action and learning. Our model provides a natural route for generalization to quantum-mechanical operation and connects the fields of reinforcement learning and quantum computation. PMID:22590690
Turan, Nurdan Gamze; Gümüşel, Emine Beril; Ozgonenel, Okan
An intensive study has been made to see the performance of the different liner materials with bentonite on the removal efficiency of Cu(II) and Zn(II) from industrial leachate. An artificial neural network (ANN) was used to display the significant levels of the analyzed liner materials on the removal efficiency. The statistical analysis proves that the effect of natural zeolite was significant by a cubic spline model with a 99.93% removal efficiency. Optimization of liner materials was achieved by minimizing bentonite mixtures, which were costly, and maximizing Cu(II) and Zn(II) removal efficiency. The removal efficiencies were calculated as 45.07% and 48.19% for Cu(II) and Zn(II), respectively, when only bentonite was used as liner material. However, 60% of natural zeolite with 40% of bentonite combination was found to be the best for Cu(II) removal (95%), and 80% of vermiculite and pumice with 20% of bentonite combination was found to be the best for Zn(II) removal (61.24% and 65.09%). Similarly, 60% of natural zeolite with 40% of bentonite combination was found to be the best for Zn(II) removal (89.19%), and 80% of vermiculite and pumice with 20% of bentonite combination was found to be the best for Zn(II) removal (82.76% and 74.89%). PMID:23844384
James, Alex Pappachen
We briefly introduce the memory based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low level intelligence tasks and aim at providing scalable solutions to high ...
Kanal, LN; Kumar, V; Suttner, CB
Parallel processing for AI problems is of great current interest because of its potential for alleviating the computational demands of AI procedures. The articles in this book consider parallel processing for problems in several areas of artificial intelligence: image processing, knowledge representation in semantic networks, production rules, mechanization of logic, constraint satisfaction, parsing of natural language, data filtering, and data mining. The publication is divided into six sections. The first addresses parallel computing for processing and understanding images. The second discusses…
Merbler, J B
This article describes a three-phase program for training special education teachers to teach Logo and artificial intelligence. Logo is derived from the LISP computer language and is relatively simple to learn and use, and it is argued that these factors make it an ideal tool for classroom experimentation in basic artificial intelligence concepts. The program trains teachers to develop simple demonstrations of artificial intelligence using Logo. The material that the teachers learn to teach is suitable as an advanced level topic for intermediate- through secondary-level students enrolled in computer competency or similar courses. The material emphasizes problem-solving and thinking skills using a nonverbal expressive medium (Logo), thus it is deemed especially appropriate for hearing-impaired children. It is also sufficiently challenging for academically talented children, whether hearing or deaf. Although the notion of teachers as programmers is controversial, Logo is relatively easy to learn, has direct implications for education, and has been found to be an excellent tool for empowerment-for both teachers and children.
The work presented in this dissertation revolves around the problem of designing artificial intelligence (AI) for video games. This problem becomes increasingly challenging as video games grow in complexity. With modern video games frequently featuring sophisticated and realistic environments, the need for smart and comprehensive agents that understand the various aspects of these environments is pressing. Although machine learning techniques are being successfully applied in a multitude of d...
Lucas, Simon M.; Mateas, Michael; Preuss, Mike; Spronck, Pieter; Togelius, Julian
This report documents Dagstuhl Seminar 15051 "Artificial and Computational Intelligence in Games: Integration". The focus of the seminar was on the computational techniques used to create, enhance, and improve the experiences of humans interacting with and within virtual environments. Different researchers in this field have different goals, including developing and testing new AI methods, creating interesting and believable non-player characters, improving the game production pipeline, study...
Rigas, H.; Booth, T.; Briggs, F.; Murata, T.; Stone, H.S.
The progress, goals and techniques being used in the Japanese fifth-generation computer program are assessed. The research is being performed in three phases: tool building, construction of parallel architecture machines, and evaluation and refinement. The first phase is well under way and has yielded designs for two prototype machines: a Personal Sequential Interface (PSI) workstation and the Delta machine (DM), a relational database machine. Kernel Language 0 (KL0), used for the PSI, is being expanded to KL1. The Mandala language is being applied in the DM. Applications have not received a great deal of attention at the government-funded research center, although the techniques developed are already being implemented in industry for machine and computer design and communications systems. 18 references.
Models of signal validation using artificial intelligence techniques applied to a nuclear reactor; Modelos de validacao de sinal utilizando tecnicas de inteligencia artificial aplicados a um reator nuclear
Oliveira, Mauro V. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil); Schirru, Roberto [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia
This work presents two models of signal validation in which the analytical redundancy of the monitored signals from a nuclear plant is provided by neural networks. In one model the analytical redundancy is provided by a single neural network, while in the other it is provided by several neural networks, each working in a specific part of the plant's operating region. Four clustering techniques were tested to separate the entire operating region into specific regions. Additional information on the systems' reliability is supplied by a fuzzy inference system. The models were implemented in C and tested with signals acquired from the Angra I nuclear power plant, from startup to 100% power. (author)
Bersini, Hugues; Corchado, Juan; Rodríguez, Sara; Pawlewski, Paweł; Bucciarelli, Edgardo
The 11th International Symposium on Distributed Computing and Artificial Intelligence 2014 (DCAI 2014) is a forum to present applications of innovative techniques for studying and solving complex problems. The exchange of ideas between scientists and technicians from both the academic and industrial sectors is essential to facilitate the development of systems that can meet the ever-increasing demands of today's society. The present edition brings together past experience, current work, and promising future trends associated with distributed computing, artificial intelligence, and their application in order to provide efficient solutions to real problems. This year's technical program presents both high quality and diversity, with contributions in well-established and evolving areas of research from authors in Algeria, Brazil, China, Croatia, Czech Republic, Denmark, France, Germany, Ireland, Italy, Japan, Malaysia, Mexico, Poland, Portugal, Republic of Korea, Spain, Taiwan, Tunisia, Ukraine, and the United Kingdom, representing …
The challenge of this work is to connect physics with the concept of intelligence. By intelligence we understand a capability to move from disorder to order without external resources, i.e., in violation of the second law of thermodynamics. The objective is to find a mathematical object, described by ODEs, that possesses such a capability. The proposed approach is based upon modification of the Madelung version of the Schrodinger equation by replacing the force following from the quantum potential with non-conservative forces that link to the concept of information. The mathematical formalism suggests that a hypothetical intelligent particle, besides the capability to move against the second law of thermodynamics, acquires properties such as self-image, self-awareness, and self-supervision that are typical of living systems. However, since this particle, being a quantum-classical hybrid, acquires non-Newtonian and non-quantum properties, it does not belong to physical matter as we know it: modern physics should be complemented with the concept of an information force that represents a bridge to the intelligent particle. As a follow-up of the proposed concept, the following question is addressed: can an artificial intelligence (AI) system composed only of physical components compete with a human? The answer is proven to be negative if the AI system is based only on simulations, and positive if digital devices are included. It has been demonstrated that there exists a quantum neural net that performs simulations combined with digital punctuations. The universality of this quantum-classical hybrid lies in its capability to violate the second law of thermodynamics by moving from disorder to order without external resources. This advanced capability is illustrated by examples. In conclusion, a mathematical machinery of perception, a fundamental part of the cognition process as well as of intelligence, is introduced and discussed.
Rienow, A.; Menz, G.
Since the beginning of the millennium, artificial intelligence techniques such as cellular automata (CA) and multi-agent systems (MAS) have been incorporated into land-system simulations to address the complex challenges of transitions in urban areas as open, dynamic systems. The study presents a hybrid modeling approach for modeling the two antagonistic processes of urban sprawl and urban decline at once. The simulation power of support vector machines (SVM), cellular automata (CA), and multi-agent systems (MAS) is integrated into one modeling framework and applied to the largest agglomeration of Central Europe: the Ruhr. A modified version of SLEUTH (short for Slope, Land-use, Exclusion, Urban, Transport, and Hillshade) functions as the CA component. SLEUTH makes use of historic urban land-use data sets and growth coefficients for the purpose of modeling physical urban expansion. The machine learning algorithm of SVM is applied in order to enhance SLEUTH. Thus, the stochastic variability of the CA is reduced and information about the human and ecological forces driving the local suitability of urban sprawl is incorporated. Subsequently, the supported CA is coupled with the MAS ReHoSh (Residential Mobility and the Housing Market of Shrinking City Systems). The MAS models population patterns, housing prices, and housing demand in shrinking regions based on interactions between household and city agents. Semi-explicit urban weights are introduced as a possibility of modeling from and to the pixel simultaneously. Three scenarios of changing housing preferences reveal the urban development of the region in terms of quantity and location. They reflect the dissemination of sustainable thinking among stakeholders versus the steady dream of owning a house in sub- and exurban areas. Additionally, the outcomes are transferred into a digital petri dish reflecting a synthetic environment with perfect conditions of growth. Hence, the generic growth elements affecting the future
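The cellular-automaton component of such a framework can be illustrated with a deliberately tiny sketch. The grid and the neighbour-count threshold rule below are invented for illustration; SLEUTH's actual growth coefficients, stochasticity, SVM coupling, and agent interactions are far richer:

```python
# Toy urban-growth cellular automaton (illustrative only).
def step(grid):
    """One growth step: a cell urbanises if >= 3 of its 8 neighbours are urban."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                continue  # already urban
            urban = sum(
                grid[r + dr][c + dc]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols
            )
            if urban >= 3:
                new[r][c] = 1
    return new

grid = [
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 1],
]
grid = step(grid)
print(grid)
```

Here a single step urbanises only the cell adjacent to the existing top-left cluster; SLEUTH applies analogous neighbourhood logic stochastically, modulated by slope, exclusion, and transport layers.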
R. P. Shenoy
Artificial Intelligence (AI), once considered an obscure branch of computer science, now has a growing number of adherents in a wide variety of fields. AI is particularly useful for combat automation in defence. The combined work of computer scientists, technologists, and cognitive scientists has brought out that knowledge is the key factor in intelligent information processing. In the last few years, AI has been tried out with a high degree of success in certain areas such as expert systems and computer vision systems. Both have great potential in target classification and identification, information fusion, multiradar air defence networks, C2 (command and control) operations, etc. in defence.
An artificial intelligence system designed for operation in a real-world environment faces a nearly infinite set of possible performance scenarios. Designers and developers thus face the challenge of validating proper performance across both foreseen and unforeseen conditions, particularly when the artificial intelligence is controlling a robot that will be operating in close proximity, or may represent a danger, to humans. While the manual creation of test cases allows limited testing (perhaps ensuring that a set of foreseeable conditions triggers an appropriate response), this may be insufficient to fully characterize and validate safe system performance. An approach to validating the performance of an artificial intelligence system using a simple artificial intelligence test case producer (AITCP) is presented. The AITCP allows the creation and simulation of prospective operating scenarios at a rate far exceeding that possible by human testers. Four scenarios for testing an autonomous navigation control system are presented: single actor in two-dimensional space, multiple actors in two-dimensional space, single actor in three-dimensional space, and multiple actors in three-dimensional space. The utility of the AITCP is compared to that of human testers in each of these scenarios.
This sourcebook provides information on the developments in artificial intelligence originating in Japan. Spanning such innovations as software productivity, natural language processing, CAD, and parallel inference machines, this volume lists leading organizations conducting research or implementing AI systems, describes AI applications being pursued, illustrates current results achieved, and highlights sources reporting progress.
How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.
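The "representing and manipulating uncertainty" at the heart of this framework can be made concrete with the simplest possible case, a conjugate Beta-Bernoulli update. This is a textbook illustration, not an example drawn from the Review itself:

```python
# Learning from experience as Bayesian updating: a Beta prior over a
# coin's bias is updated after each observed flip (conjugate update).
def update(alpha, beta, flips):
    for f in flips:
        if f:
            alpha += 1  # observed heads
        else:
            beta += 1   # observed tails
    return alpha, beta

# Uniform prior Beta(1, 1); observe 7 heads and 3 tails.
alpha, beta = update(1, 1, [1] * 7 + [0] * 3)
posterior_mean = alpha / (alpha + beta)
print(alpha, beta, posterior_mean)  # 8 4 0.666...
```

The posterior Beta(8, 4) captures not just a point estimate of the bias but the remaining uncertainty about it, which is what distinguishes the probabilistic framework from a bare frequency count.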
For 18 years I have been publishing books and papers on the subject of the social implications of Artificial Intelligence (AI). This is an area which has been, and remains, in need of more serious academic attention than it currently receives. It will be useful to attempt a working definition of the field of AI at this stage. There is a considerable amount of disagreement as to what does and does not constitute AI, and this often has important consequences for discussions of...
Kumar, V; Suttner, CB
With the increasing availability of parallel machines and the raising of interest in large scale and real world applications, research on parallel processing for Artificial Intelligence (AI) is gaining greater importance in the computer science environment. Many applications have been implemented and delivered but the field is still considered to be in its infancy. This book assembles diverse aspects of research in the area, providing an overview of the current state of technology. It also aims to promote further growth across the discipline. Contributions have been grouped according to their
Markov Decision Processes (MDPs) are a mathematical framework for modeling sequential decision problems under uncertainty as well as Reinforcement Learning problems. Written by experts in the field, this book provides a global view of current research using MDPs in Artificial Intelligence. It starts with an introductory presentation of the fundamental aspects of MDPs (planning in MDPs, Reinforcement Learning, Partially Observable MDPs, Markov games and the use of non-classical criteria). Then it presents more advanced research trends in the domain and gives some concrete examples using illustr
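Planning in MDPs, the first of the fundamental aspects listed above, is classically done by value iteration. The two-state MDP below is an invented toy, but the Bellman backup it performs is the standard algorithm:

```python
# Value iteration on a tiny MDP.
# P[s][a] = list of (probability, next_state, reward) outcomes.
P = {
    0: {"stay": [(1.0, 0, 0.0)],
        "go":   [(0.9, 1, 1.0), (0.1, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 0.0)],
        "go":   [(1.0, 1, 0.0)]},
}
gamma = 0.9          # discount factor
V = {0: 0.0, 1: 0.0}  # initial value function

for _ in range(200):  # Bellman backups until (approximate) convergence
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
            for a in P[s]
        )
        for s in P
    }
print(V)
```

V[0] converges to 0.9/0.91 ≈ 0.989, the expected discounted reward of repeatedly attempting "go" while accounting for the 10% chance of staying put; state 1 yields no further reward, so V[1] stays 0.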
Bozinovski, Stevo; Bozinovska, Liljana
This paper addresses the field of Artificial Intelligence: the road it has taken so far and the road it could take. The paper was invited by the IT Revolutions 2008 conference, and discusses some issues not emphasized in the AI trajectory so far. The recommendations are that the main focus should be personalities rather than programs or agents, that the genetic environment should be introduced in reasoning about personalities, and that the limbic system should be studied and modeled. Engineered psychology is proposed as a road to take. The need for basic principles in psychology is discussed, and a mathematical equation is proposed as a fundamental law of engineered and human psychology.
Harmon, Laurel A.; Franklin, Robert F.
A goal of Distributed Artificial Intelligence (DAI) has been the development of heuristics for problem-solving by logically distributed components (agents). The roles of organizational structure, communication and planning in addressing the central issue of coherence are discussed in the context of representative DAI simulation systems. Despite the range of DAI research, few organizing principles have emerged. We attribute this lack to a reliance on human models of cooperative processes. As the effectiveness of the models has broken down, improvements have come through incremental, compensatory changes, rather than through the development of new models. We argue for the importance of a higher level view of distributed problem-solving.
PIYUSH M. PATEL,
Artificial Intelligence (AI) is an emerging technology. Research in AI is focused on developing computational approaches to intelligent behavior. The computer programs with which AI is associated primarily address processes involving complexity, ambiguity, indecisiveness, and uncertainty. This paper surveys the development of condition monitoring procedures for different types of bearings that involve artificial intelligence methods, and reviews them in order to examine the capability of AI methods and techniques to effectively address various hard-to-solve design tasks and issues relating to different types of bearing fault. Although this review cannot be exhaustive, it may be considered a valuable guide for researchers who are interested in the domain of AI and wish to explore the opportunities offered by fuzzy logic, artificial neural networks, and genetic algorithms for further improvement of condition monitoring for different types of bearings under different operating conditions. Recent trends in research on condition monitoring using AI for different bearings have also been included.
Colomé, Josep; Colomer, Pau; Campreciós, Jordi; Coiffard, Thierry; de Oña, Emma; Pedaletti, Giovanna; Torres, Diego F.; Garcia-Piquer, Alvaro
The Cherenkov Telescope Array (CTA) project will be the next generation ground-based very high energy gamma-ray instrument. The success of the precursor projects (i.e., HESS, MAGIC, VERITAS) motivated the construction of this large infrastructure, which has been included in the roadmap of the ESFRI projects since 2008. CTA is planned to start the construction phase in 2015 and will consist of two arrays of Cherenkov telescopes operated as a proposal-driven open observatory. Two sites are foreseen at the southern and northern hemispheres. The CTA observatory will handle several observation modes and will have to operate tens of telescopes with a highly efficient and reliable control. Thus, the CTA planning tool is a key element in the control layer for the optimization of the observatory time. The main purpose of the scheduler for CTA is the allocation of multiple tasks to one single array or to multiple sub-arrays of telescopes, while maximizing the scientific return of the facility and minimizing the operational costs. The scheduler considers long- and short-term varying conditions to optimize the prioritization of tasks. A short-term scheduler provides the system with the capability to adapt, in almost real time, the selected task to the varying execution constraints (i.e., Targets of Opportunity, health or status of the system components, environment conditions). The scheduling procedure ensures that long-term planning decisions are correctly transferred to the short-term prioritization process for a suitable selection of the next task to execute on the array. In this contribution we present the constraints on CTA task scheduling that allow it to be classified as a Flexible Job-Shop Problem and solved optimally with Artificial Intelligence techniques. We describe the scheduler prototype that uses a Guarded Discrete Stochastic Neural Network (GDSN), for an easy representation of the possible long- and short-term planning solutions, and Constraint
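The short-term prioritization step described above, selecting the next task subject to currently varying constraints, can be caricatured with a greedy rule. The task names, priorities, and the single "dark time" constraint below are invented for illustration; the actual prototype uses a Guarded Discrete Stochastic Neural Network over a much richer constraint set:

```python
# Toy short-term scheduler: pick the highest-priority task whose
# constraints are satisfied under the current conditions.
tasks = [
    {"name": "ToO_followup", "priority": 10, "needs_dark": True},
    {"name": "survey",       "priority": 5,  "needs_dark": False},
    {"name": "calibration",  "priority": 1,  "needs_dark": False},
]

def next_task(tasks, dark_time):
    """Return the name of the best currently feasible task, or None."""
    feasible = [t for t in tasks if dark_time or not t["needs_dark"]]
    if not feasible:
        return None
    return max(feasible, key=lambda t: t["priority"])["name"]

print(next_task(tasks, dark_time=False))  # moonlit sky: survey
print(next_task(tasks, dark_time=True))   # dark sky: ToO_followup
```

A real scheduler must additionally reconcile such greedy short-term choices with long-term plan feasibility, which is exactly the Flexible Job-Shop aspect of the problem.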
Md. Tabrez Quasim
Full Text Available Any business enterprise relies heavily on how well it can predict future events. To cope with modern global customer demand, technological challenges, market competition, and so on, an organization is compelled to foresee the future with maximum impact and the least chance of error. The traditional forecasting approaches have some limitations, which is why the business world is adopting modern Artificial Intelligence-based forecasting techniques. This paper presents the different types of forecasting and the AI techniques that are useful in business forecasting. It then discusses forecasting errors and the steps involved in planning an AI support system.
Russell, Stuart; Bohannon, John
From the enraged robots in the 1920 play R.U.R. to the homicidal computer H.A.L. in 2001: A Space Odyssey, science fiction writers have embraced the dark side of artificial intelligence (AI) ever since the concept entered our collective imagination. Sluggish progress in AI research, especially during the “AI winter” of the 1970s and 1980s, made such worries seem far-fetched. But recent breakthroughs in machine learning and vast improvements in computational power have brought a flood of research funding— and fresh concerns about where AI may lead us. One researcher now speaking up is Stuart Russell, a computer scientist at the University of California, Berkeley, who with Peter Norvig, director of research at Google, wrote the premier AI textbook, Artificial Intelligence: A Modern Approach, now in its third edition. Last year, Russell joined the Centre for the Study of Existential Risk at Cambridge University in the United Kingdom as an AI expert focusing on “risks that could lead to human extinction.” Among his chief concerns, which he aired at an April meeting in Geneva, Switzerland, run by the United Nations, is the danger of putting military drones and weaponry under the full control of AI systems. This interview has been edited for clarity and brevity.
Finkelstein, Joseph; Wood, Jeffrey
Modern telemonitoring systems identify a serious patient deterioration only after it has already occurred. It would be much more beneficial if an upcoming clinical deterioration were identified ahead of time, before the patient actually experiences it. The goal of this study was to assess artificial intelligence approaches that could potentially be used in telemonitoring systems for advance prediction of changes in disease severity before they actually occur. The study dataset was based on daily self-reports submitted by 26 adult asthma patients during home telemonitoring, consisting of 7001 records. Two classification algorithms were employed for building predictive models: a naïve Bayesian classifier and support vector machines. Using a 7-day window, a support vector machine was able to predict an asthma exacerbation occurring on day 8 with an accuracy of 0.80, sensitivity of 0.84 and specificity of 0.80. Our study showed that methods of artificial intelligence have significant potential in developing individualized decision support for chronic disease telemonitoring systems.
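The sliding-window setup described in this abstract can be sketched in a few lines; the daily 0/1 exacerbation flags and the specific window length are illustrative assumptions here, not the study's actual feature encoding:

```python
def make_windows(series, window=7):
    """Build (features, label) pairs: each sample uses `window`
    consecutive daily observations to predict the following day."""
    samples = []
    for i in range(len(series) - window):
        samples.append((series[i:i + window], series[i + window]))
    return samples

# Ten days of hypothetical self-reports (1 = exacerbation reported).
daily = [0, 0, 1, 0, 0, 0, 1, 1, 0, 0]
pairs = make_windows(daily)
print(len(pairs))  # 3 samples: days 1-7 -> day 8, 2-8 -> 9, 3-9 -> 10
```

Each `(features, label)` pair could then be fed to any classifier, such as the naïve Bayesian or support vector machine models evaluated in the study.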
Novatchkov, Hristo; Baca, Arnold
The overall goal of the present study was to illustrate the potential of artificial intelligence (AI) techniques in sports, using the example of weight training. The research focused in particular on the implementation of pattern recognition methods for the evaluation of exercises performed on training machines. The data acquisition was carried out using displacement and cable force sensors attached to various weight machines, thereby enabling the measurement of essential displacement and force determinants during training. On the basis of the gathered data, it was consequently possible to deduce other significant characteristics like time periods or movement velocities. These parameters were applied to the development of intelligent methods adapted from conventional machine learning concepts, allowing an automatic assessment of the exercise technique and providing individuals with appropriate feedback. In practice, the implementation of such techniques could be crucial for investigating the quality of the execution, assisting athletes as well as coaches, optimizing training and preventing injury. For the current study, the data was based on measurements from 15 rather inexperienced participants, performing 3-5 sets of 10-12 repetitions on a leg press machine. The initially preprocessed data was used for the extraction of significant features, to which supervised modeling methods were applied. Professional trainers were involved in the assessment and classification processes by analyzing the video-recorded executions. The modeling results obtained so far showed good performance and prediction outcomes, indicating the feasibility and potency of AI techniques in automatically assessing performances on weight training equipment and providing sportsmen with prompt advice. Key points: Artificial intelligence is a promising field for sport-related analysis. Implementations integrating pattern recognition techniques enable the automatic evaluation of data
Full Text Available The paper aims to investigate the relationship between artificial intelligence as narrated by science fiction movies over the last five decades and the socio-technical imaginary related to intelligent systems. The first sci-fi movies analyzed shied away from the idea of a symbiotic interaction between humans and AI as forecast during the 1960s by informatics and AI scientists. Afterwards, from the 1970s to the 1990s, AI systems played mainly the role of mirrors for the crisis of human identity: in these narratives the AI is presented as a risk, a possible enemy for humankind. Finally, during the last twenty years, a new frontier of AI seems to emerge in the imaginary. More recent stories forecast a future in which intelligent systems try to take their own place in the human social environment. All these perspectives emerge in conjunction with innovations and technical experimentations, bringing back up the relationship between “legein” and “teukein” as theorized by Cornelius Castoriadis.
Hostetter, Carl F. (Editor)
This publication comprises the papers presented at the 1994 Goddard Conference on Space Applications of Artificial Intelligence held at the NASA/GSFC, Greenbelt, Maryland, on 10-12 May 1994. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed.
Hostetter, Carl F. (Editor)
This publication comprises the papers presented at the 1993 Goddard Conference on Space Applications of Artificial Intelligence held at the NASA/Goddard Space Flight Center, Greenbelt, MD on May 10-13, 1993. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed.
Northeast Artificial Intelligence Consortium Annual Report 1986. Volume 4. Part A. Hierarchical Region-Based Approach to Automatic Photointerpretation. Part B. Application of AI Techniques to Image Segmentation and Region Identification
RADC-TR-88-11, Vol. IV (of eight). Interim Technical Report, June 1988: Northeast Artificial Intelligence Consortium (NAIC) Annual Report 1986. Monitoring organization: Rome Air Development Center (RADC).
Rash, James L. (Editor)
The papers presented at the 1990 Goddard Conference on Space Applications of Artificial Intelligence are given. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The proceedings fall into the following areas: Planning and Scheduling, Fault Monitoring/Diagnosis, Image Processing and Machine Vision, Robotics/Intelligent Control, Development Methodologies, Information Management, and Knowledge Acquisition.
There is no fundamental reason why A-life couldn't simply be a branch of computer science that deals with algorithms that are inspired by, or emulate, biological phenomena. However, if these are the limits we place on this field, we miss the opportunity to help advance Theoretical Biology and to contribute to a deeper understanding of the nature of life. The history of Artificial Intelligence provides a good example, in that early interest in the nature of cognition was quickly lost to the process of building tools, such as "expert systems", that were certainly useful but provided little insight into the nature of cognition. Based on this lesson, I will discuss criteria for increasing the biological relevance of A-life and the probability that this field may provide a theoretical foundation for Biology.
Mr. Ankush Bhatia
Full Text Available A bot in computing is an autonomous program on a network (especially the Internet) which can interact with systems or users [Simpson, J., and Weiner, E. (1989)]. This document describes how the memory of an Artificial-Intelligence bot can be stored in an optimized way with a faster searching algorithm, and how it can learn the new things the user wants it to learn. This paper gives the details of how a bot uses an ordered tree data structure, called a TRIE or prefix tree, with a small modification, to dynamically store the things it learns and to decide what to reply when a person asks it something.
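A prefix tree of the kind this paper describes can be sketched as follows; the `learn`/`recall` interface and the character-level keys are illustrative assumptions, since the paper's modified TRIE layout is not reproduced here:

```python
class TrieNode:
    """One node of the prefix tree; children are keyed by character."""
    def __init__(self):
        self.children = {}
        self.reply = None  # set when a learned phrase ends at this node

class BotMemory:
    """Stores learned phrase -> reply pairs in a TRIE for fast lookup."""
    def __init__(self):
        self.root = TrieNode()

    def learn(self, phrase, reply):
        node = self.root
        for ch in phrase:
            node = node.children.setdefault(ch, TrieNode())
        node.reply = reply

    def recall(self, phrase):
        node = self.root
        for ch in phrase:
            node = node.children.get(ch)
            if node is None:
                return None  # phrase was never learned
        return node.reply

memory = BotMemory()
memory.learn("what is your name", "I am a bot.")
print(memory.recall("what is your name"))  # -> I am a bot.
```

Lookup time grows with the length of the query, not with the number of learned phrases, which is the searching advantage the paper appeals to.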
Aksu, Buket; Paradkar, Anant; de Matas, Marcel; Özer, Özgen; Güneri, Tamer; York, Peter
Quality by design (QbD) is an essential part of the modern approach to pharmaceutical quality. This study was conducted in the framework of a QbD project involving ramipril tablets. Preliminary work included identification of the critical quality attributes (CQAs) and critical process parameters (CPPs) based on the quality target product profiles (QTPPs) using the historical data and risk assessment method failure mode and effect analysis (FMEA). Compendial and in-house specifications were selected as QTPPs for ramipril tablets. CPPs that affected the product and process were used to establish an experimental design. The results thus obtained can be used to facilitate definition of the design space using tools such as design of experiments (DoE), the response surface method (RSM) and artificial neural networks (ANNs). The project was aimed at discovering hidden knowledge associated with the manufacture of ramipril tablets using a range of artificial intelligence-based software, with the intention of establishing a multi-dimensional design space that ensures consistent product quality. At the end of the study, a design space was developed based on the study data and specifications, and a new formulation was optimized. On the basis of this formulation, a new laboratory batch formulation was prepared and tested. It was confirmed that the explored formulation was within the design space.
Vogel, Alison Andrews
Paper compares four first-generation artificial-intelligence (AI) software systems for computational fluid dynamics. Includes: Expert Cooling Fan Design System (EXFAN), PAN AIR Knowledge System (PAKS), grid-adaptation program MITOSIS, and Expert Zonal Grid Generation (EZGrid). Focuses on knowledge-based ("expert") software systems. Analyzes intended tasks, kinds of knowledge possessed, magnitude of effort required to codify knowledge, how quickly constructed, performances, and return on investment. On basis of comparison, concludes AI most successful when applied to well-formulated problems solved by classifying or selecting preenumerated solutions. In contrast, application of AI to poorly understood or poorly formulated problems generally results in long development time and large investment of effort, with no guarantee of success.
Kulikowski, Juliusz; Mroczek, Teresa; Wtorek, Jerzy
The importance of human-computer system interaction problems is increasing due to the growing expectations of users on general computer systems' capabilities in facilitating human work and life. Users expect a system that is not only a passive tool in human hands but rather an active partner equipped with a sort of artificial intelligence, having access to large information resources, and being able to adapt its behavior to human requirements and to collaborate with human users. This book collects examples of recent human-computer system solutions. The content of the book is divided into three parts. Part I is devoted to detection, recognition and reasoning in different circumstances and applications. Problems associated with data modeling, acquisition and mining are presented by the papers collected in Part II, and Part III is devoted to optimization.
Kiss, Peter A.
The American Institute of Aeronautics and Astronautics has initiated a committee on standards for Artificial Intelligence. Presented are the initial efforts of one of the working groups of that committee. A candidate model is presented for the development life cycle of knowledge based systems (KBSs). The intent is for the model to be used by the aerospace community and eventually be evolved into a standard. The model is rooted in the evolutionary model, borrows from the spiral model, and is embedded in the standard Waterfall model for software development. Its intent is to satisfy the development of both stand-alone and embedded KBSs. The phases of the life cycle are shown and detailed as are the review points that constitute the key milestones throughout the development process. The applicability and strengths of the model are discussed along with areas needing further development and refinement by the aerospace community.
Full Text Available This paper provides an introduction to the most commonly used Knowledge Based Systems (KBSs), called Rule Based Systems, presents some benefits of using these systems if the application warrants their attention, and provides an overview of current R&D as well as industrial systems already implemented. Areas of manufacturing that could use KBSs within the South African context are suggested. A research programme in progress at the University of Stellenbosch, investigating the use of KBSs in robotics and demonstrating a number of useful properties associated with programming Artificial Intelligence (AI) techniques using logic programming, is discussed.
Marichal, Graciliano Nicolás; Del Castillo, María Lourdes; López, Jesús; Padrón, Isidro; Artés, Mariano
In this paper, an intelligent scheme for detecting incipient defects in spur gears is presented. The study has been undertaken to detect these defects in the single propeller system of a small-sized unmanned helicopter. Although the study focused on this particular system, the obtained results could be extended to other systems known as AUVs (Autonomous Unmanned Vehicles), where the use of polymer gears in the vehicle transmission is frequent. Few studies have been carried out on these kinds of gears. An experimental platform has been adapted for the study and several samples have been prepared. Several vibration signals have been measured and their time-frequency characteristics have been taken as inputs to the diagnostic system, which is based on an artificial intelligence strategy. Techniques based on several paradigms of Artificial Intelligence (Neural Networks, Fuzzy Systems and Genetic Algorithms) have been applied together in order to design an efficient fault diagnostic system. A hybrid Genetic Neuro-Fuzzy system has been developed, in which it is possible, at the final stage of the learning process, to express the fault diagnostic system as a set of fuzzy rules. Several trials have been carried out and satisfactory results have been achieved.
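The fuzzy-rule output stage that such a neuro-fuzzy system produces can be illustrated with a minimal sketch; the membership shapes, the rule consequents and the single `amplitude` input are invented for illustration and are not the rules learned in the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def diagnose(amplitude):
    """Two illustrative fuzzy rules (a real rule base would be learned):
       IF amplitude is LOW  THEN fault severity is 0.1
       IF amplitude is HIGH THEN fault severity is 0.9
    Defuzzified as a firing-strength-weighted average (Sugeno style)."""
    low = tri(amplitude, -1.0, 0.0, 1.0)
    high = tri(amplitude, 0.0, 1.0, 2.0)
    if low + high == 0.0:
        return None  # amplitude outside the modeled range
    return (low * 0.1 + high * 0.9) / (low + high)

print(diagnose(0.5))  # -> 0.5, halfway between the two rule consequents
```

A Sugeno-style weighted average is used here for compactness; whether the paper's system defuzzifies this way or via a Mamdani centroid is not stated in the abstract.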
Gil, Yolanda; Greaves, Mark T.; Hendler, James; Hirsch, Hyam
Computing innovations have fundamentally changed many aspects of scientific inquiry. For example, advances in robotics, high-end computing, networking, and databases now underlie much of what we do in science such as gene sequencing, general number crunching, sharing information between scientists, and analyzing large amounts of data. As computing has evolved at a rapid pace, so too has its impact in science, with the most recent computing innovations repeatedly being brought to bear to facilitate new forms of inquiry. Recently, advances in Artificial Intelligence (AI) have deeply penetrated many consumer sectors, including for example Apple’s Siri™ speech recognition system, real-time automated language translation services, and a new generation of self-driving cars and self-navigating drones. However, AI has yet to achieve comparable levels of penetration in scientific inquiry, despite its tremendous potential in aiding computers to help scientists tackle tasks that require scientific reasoning. We contend that advances in AI will transform the practice of science as we are increasingly able to effectively and jointly harness human and machine intelligence in the pursuit of major scientific challenges.
Keller, Richard M.
Scientific model-building can be a time-intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot easily be distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientific development team to understand. We believe that artificial intelligence techniques can facilitate both the model-building and model-sharing process. In this paper, we overview our effort to build a scientific modeling software tool that aids the scientist in developing and using models. This tool includes an interactive intelligent graphical interface, a high-level domain specific modeling language, a library of physics equations and experimental datasets, and a suite of data display facilities.
Semeraro, Giovanni; Vargiu, Eloisa; New Challenges in Distributed Information Filtering and Retrieval : DART 2011: Revised and Invited Papers
This volume focuses on new challenges in distributed Information Filtering and Retrieval. It collects invited chapters and extended research contributions from the DART 2011 Workshop, held in Palermo (Italy) in September 2011 and co-located with the XII International Conference of the Italian Association for Artificial Intelligence. The main focus of DART was to discuss and compare novel solutions based on intelligent techniques and applied to real-world applications. The chapters of this book present a comprehensive review of related works and the state of the art. The authors, both practitioners and researchers, share their results on several topics such as "Multi-Agent Systems", "Natural Language Processing", "Automatic Advertisement", "Customer Interaction Analytics", and "Opinion Mining".
Ahmed M. Tobal
Full Text Available In a world whose population has reached six billion, the demand for food and feed is rising while water is scarce, agricultural land is declining and the climate is deteriorating; meeting this demand requires 1.5 billion hectares of agricultural land, and about 4 billion hectares if pests are not combated. Weeds represent 34% of all pests, while insects, diseases and the deterioration of agricultural land account for the remaining percentage. Weed identification has been one of the most interesting classification problems for Artificial Intelligence (AI) and image processing. The most common case is to identify weeds within the field, as they reduce productivity and harm the existing crops. Success in this area increases productivity and profitability and at the same time decreases the cost of operation. AI algorithms combined with appropriate imagery tools may present the right solution to the weed identification problem. In this study, we introduce an evolutionary artificial neural network to minimize the time of classification training and minimize the error through the optimization of the neuron parameters by means of a genetic algorithm. The genetic algorithm, with its global search capability, finds the optimum histogram vectors used for network training and target testing through a fitness measure that reflects the result accuracy, avoiding the trial-and-error process of estimating the network inputs according to the histogram data.
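The genetic search over candidate vectors can be sketched as follows. This is a generic GA, not the paper's method: the fitness there reflected the trained network's classification accuracy, which is replaced here by a simple squared-error fitness against a hypothetical target histogram:

```python
import random

def evolve(target, pop_size=30, gens=60, seed=0):
    """Toy genetic algorithm: evolve a real-valued vector toward `target`
    (a stand-in for the optimum histogram vector; the real fitness would
    score the accuracy of the network trained on the candidate)."""
    rng = random.Random(seed)
    n = len(target)

    def fitness(v):
        # Higher is better: negative squared error to the target.
        return -sum((a - b) ** 2 for a, b in zip(v, target))

    pop = [[rng.random() for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            pa, pb = rng.sample(parents, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = pa[:cut] + pb[cut:]
            i = rng.randrange(n)               # point mutation
            child[i] += rng.gauss(0.0, 0.1)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve([0.2, 0.5, 0.9, 0.4])
```

Because the top half of each generation is carried over unchanged, the best fitness never decreases from one generation to the next.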
Romportl, Jan; Zackova, Eva; Beyond Artificial Intelligence : Contemplations, Expectations, Applications
Products of modern artificial intelligence (AI) have mostly been formed by the views, opinions and goals of the “insiders”, i.e. people usually with an engineering background who are driven by the force that can be metaphorically described as the pursuit of the craft of Hephaestus. However, since present-day technology allows for an ever tighter mergence of “natural” everyday human life with machines of immense complexity, the responsible reaction of the scientific community should be based on cautious reflection of what really lies beyond AI, i.e. on the frontiers where the tumultuous, ever-growing and ever-changing cloud of AI touches the rest of the world. The chapters of this book are based on a selected subset of the presentations delivered by their respective authors at the conference “Beyond AI: Interdisciplinary Aspects of Artificial Intelligence” held in Pilsen in December 2011. From its very definition, the reflection of the phenomena that lie beyond AI must be i...
Ugtakhbayar, N; Sodbileg, Sh
Many methods have been developed to secure the network infrastructure and communication over the Internet. Intrusion detection is a relatively new addition to such techniques. Intrusion detection systems (IDS) are used to find out whether someone has intruded into, or is trying to break into, the network. One big problem is the number of intrusions, which is increasing day by day. We need to obtain information about network attacks using an IDS and then analyse their effect. Because purely signature-based IDSs cannot detect every new intrusion, it is important to introduce artificial intelligence (AI) methods and techniques into IDS; the introduction of AI in turn necessitates the normalization of intrusion data. This work focuses on the classification of AI-based IDS techniques, which will help better design intrusion detection systems in the future. We have also proposed a support vector machine for IDS to detect the Smurf attack with reliable accuracy.
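Learning a linear decision boundary over traffic features, as the proposed SVM does, can be sketched with a perceptron stand-in (a full margin-maximizing SVM solver is out of scope here, and the two feature names are invented, not the paper's feature set):

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Linear classifier on (features, label) pairs, label in {0, 1}.
    A perceptron stands in for the paper's SVM: both learn a separating
    hyperplane, but the SVM additionally maximizes the margin."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                     # 0 when correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy flows: (ICMP echo-reply rate, broadcast ratio); 1 = Smurf-like flood.
flows = [([0.1, 0.0], 0), ([0.2, 0.1], 0), ([0.9, 0.8], 1), ([0.8, 0.9], 1)]
w, b = train_perceptron(flows)
classify = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
print(classify([0.85, 0.85]))  # -> 1 (flagged as a flood)
```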
Taylor, Duncan; Powers, David
Electropherograms are produced in great numbers in forensic DNA laboratories as part of everyday criminal casework. Before the results of these electropherograms can be used they must be scrutinised by analysts to determine what the identified data tells us about the underlying DNA sequences and what is purely an artefact of the DNA profiling process. A technique that lends itself well to such a task of classification in the face of vast amounts of data is the use of artificial neural networks. These networks, inspired by the workings of the human brain, have been increasingly successful in analysing large datasets, performing medical diagnoses, identifying handwriting, playing games, or recognising images. In this work we demonstrate the use of an artificial neural network which we train to 'read' electropherograms and show that it can generalise to unseen profiles.
Full Text Available Inverse kinematics of robotic manipulators is a complex task. For higher-degree-of-freedom robotic manipulators, the algebra involved in traditional approaches becomes highly complex. This has led to the use of artificial intelligence techniques. In this paper, a hybrid combination of neural networks and fuzzy logic has been applied to a 3-degree-of-freedom robotic manipulator. The variations of joint angles obtained in the results show the effective implementation of artificial intelligence.
Full Text Available The article presents a review of research in the field of Artificial Intelligence in the Republic of Moldova concerning pattern recognition, as well as the theory and applications of intelligent knowledge-based systems.
Networking and Information Technology Research and Development, Executive Office of the President — Executive Summary: Artificial intelligence (AI) is a transformative technology that holds promise for tremendous societal and economic benefit. AI has the potential...
Riedl, Mark O.
Computer games play an important role in our society and motivate people to learn computer science. Since artificial intelligence is integral to most games, they can also be used to teach artificial intelligence. We introduce the Game AI Game Engine (GAIGE), a Python game engine specifically designed to teach about how AI is used in computer games. A progression of seven assignments builds toward a complete, working Multi-User Battle Arena (MOBA) game. We describe the engine, the assignments,...
Khalil, Khaled M; Nazmy, Taymour T; Salem, Abdel-Badeeh M
Crisis response poses some of the most difficult information technology challenges in crisis management. It requires information- and communication-intensive efforts to reduce uncertainty, calculate and compare costs and benefits, and manage resources in a fashion beyond what is regularly available for handling routine problems. In this paper, we explore the benefits of artificial intelligence technologies in crisis response, discussing the role of three in particular: robotics, ontology and the semantic web, and multi-agent systems.
Glauner, Patrick; State, Radu
In the domain of electrical power grids, there is particular interest in time series analysis using artificial intelligence. Machine learning is the branch of artificial intelligence that gives computers the ability to learn patterns from data without being explicitly programmed. Deep learning is a set of cutting-edge machine learning algorithms inspired by how the human brain works; it allows feature hierarchies to be learned from the data rather than modeled as hand-crafted features. I...
This book presents various recent applications of Artificial Intelligence in Information and Communication Technologies, such as Search and Optimization methods, Machine Learning, Data Representation and Ontologies, and Multi-agent Systems. The main aim of this book is to help Information and Communication Technologies (ICT) practitioners manage their platforms efficiently using AI tools and methods, and to provide them with sufficient Artificial Intelligence background to deal with real-life problems.
Fernando P. Ponce
Review of the book by Alonso, E. and Mondragón, E. (2011). Hershey, NY: Medical Information Science Reference. Neuroscience as a discipline pursues an understanding of the brain and its relation to the functioning of the mind through the analysis of the interaction of diverse physical, chemical and biological processes (Bassett & Gazzaniga, 2011). At the same time, numerous disciplines have progressively made significant contributions to this enterprise, among them mathematics, psychology and philosophy. As a product of this effort, complementary disciplines such as cognitive neuroscience, neuropsychology and computational neuroscience have emerged alongside traditional neuroscience (Bengio, 2007; Dayan & Abbott, 2005). In the context of computational neuroscience as a discipline complementary to traditional neuroscience, Alonso and Mondragón (2011) have edited the book Computational Neuroscience for Advancing Artificial Intelligence: Models, Methods and Applications.
Adams, W T; Snow, G M; Helmick, P M
The consolidated business office of the Allegheny Health Education Research Foundation (AHERF), a large integrated healthcare system based in Pittsburgh, Pennsylvania, sought to improve its cash-related business office activities by implementing an automated remittance processing system that uses artificial intelligence. The goal was to create a completely automated system whereby all monies it processed would be tracked, automatically posted, analyzed, monitored, controlled, and reconciled through a central database. Using a phased approach, the automated payment system has become the central repository for all of the remittances for seven of the hospitals in the AHERF system and has allowed for the complete integration of these hospitals' existing billing systems, document imaging system, and intranet, as well as the new automated payment posting, and electronic cash tracking and reconciling systems. For such new technology, which is designed to bring about major change, factors contributing to the project's success were adequate planning, clearly articulated objectives, marketing, end-user acceptance, and post-implementation plan revision.
Midoro, V.; And Others
Describes the theoretical framework of a research project aimed at exploring the new potentials for instructional systems offered by videodisc technology and artificial intelligence. A prototype of an intelligent tutoring system, "Earth," is described, and types of interactions in instructional systems are discussed as they relate to the learning…
Salem, Abdel-Badeeh M.
The field of Artificial Intelligence (AI) and Education has traditionally had a technology-based focus, looking at the ways in which AI can be used in building intelligent educational software. In addition, AI can also provide an excellent methodology for learning and reasoning from human experience. This paper presents the potential role of AI in…
Richer, Mark H.
Discusses: how artificial intelligence (AI) can advance education; if the future of software lies in AI; the roots of intelligent computer-assisted instruction; protocol analysis; reactive environments; LOGO programming language; student modeling and coaching; and knowledge-based instructional programs. Numerous examples of AI programs are cited.…
This article reviews developments in the use of Artificial Intelligence (AI) in sports biomechanics over the last decade. It outlines possible uses of Expert Systems as diagnostic tools for evaluating faults in sports movements ('techniques') and presents some example knowledge rules for such an expert system. It then compares the analysis of sports techniques, in which Expert Systems have found little place to date, with gait analysis, in which they are routinely used. Consideration is then given to the use of Artificial Neural Networks (ANNs) in sports biomechanics, focusing on Kohonen self-organizing maps, which have been the most widely used in technique analysis, and multi-layer networks, which have been far more widely used in biomechanics in general. Examples of the use of ANNs in sports biomechanics are presented for javelin and discus throwing, shot putting and football kicking. I also present an example of the use of Evolutionary Computation in movement optimization in the soccer throw-in, which predicted an optimal technique close to that in the coaching literature. After briefly reviewing the use of AI in both sports science and biomechanics in general, the article concludes with some speculations about future uses of AI in sports biomechanics.
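The Kohonen self-organizing map mentioned above can be sketched in a few lines: units on a one-dimensional grid compete for each input, and the winner and its grid neighbours move toward it, so the map comes to cover the data. The clusters, grid size and learning parameters here are invented for illustration.

```python
# Hedged sketch of a 1-D Kohonen self-organising map: four units learn to
# cover two clusters of 2-D measurements (synthetic data, not from the review).

def train_som(data, n_units=4, epochs=50, lr=0.3, radius=1):
    # initialise the units along the diagonal of the unit square
    units = [[i / (n_units - 1), i / (n_units - 1)] for i in range(n_units)]
    for _ in range(epochs):
        for x in data:
            # best-matching unit = unit with the closest weight vector
            bmu = min(range(n_units),
                      key=lambda i: sum((units[i][d] - x[d]) ** 2 for d in (0, 1)))
            for i in range(n_units):
                if abs(i - bmu) <= radius:       # neighbourhood on the 1-D grid
                    for d in (0, 1):
                        units[i][d] += lr * (x[d] - units[i][d])
    return units

# two well-separated clusters of technique measurements
data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.85, 0.95]]
units = train_som(data)
```

After training, at least one unit sits near each cluster, which is the property technique-analysis applications exploit when they map movements onto the grid.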
Introduction: Investigating sperm-related parameters may be useful for assessing the impact of age on semen quality. Factors such as drug dosage, the route and duration of administration, and even physiological conditions (the sex and age of the animal) also affect the outcome. Using the wrong dose or an inappropriate treatment duration causes adverse effects on antioxidants. The main goal of this study was to use artificial neural network techniques to answer the question of whether selenium nanoparticles affect mouse sperm parameters such as spermatozoon count, stimulation percentage and sperm viability percentage. Research method: In order to predict the number of spermatozoa, sperm viability and stimulation percentage in mice, properties such as mouse age and the amount of nanoparticles administered were used as inputs to the neural network. Findings: The accuracy and validity of the results were evaluated by calculating parameters such as the matching coefficient and the root of the squared error. The optimized ANN model structures for predicting spermatozoon count, stimulation percentage and sperm viability percentage were 2:7:1, 6:2:1 and 2:7:1, respectively. Conclusion: The neural network results showed that the antioxidant selenium nanoparticles affect mouse sperm parameters such as spermatozoon count, stimulation percentage and sperm viability percentage.
Kelkar, B.G.; Gamble, R.F.; Kerr, D.R.; Thompson, L.G.; Shenoi, S.
The primary goal of this project is to develop a user-friendly computer program to integrate geological and engineering information using Artificial Intelligence (AI) methodology. The project is restricted to fluvially dominated deltaic environments. The static information used in constructing the reservoir description includes well core and log data. Using the well core and log data, the program identifies the marker beds and the type of sand facies, and in turn develops correlations between wells. Using the correlations and sand facies, the program is able to generate multiple realizations of sand facies and petrophysical properties at interwell locations using geostatistical techniques. The generated petrophysical properties are used as input in the next step, where the production data are honored. By adjusting the petrophysical properties, the match between the simulated and the observed production rates is obtained.
Human intuition has been simulated by several research projects using artificial intelligence techniques. Most of these algorithms or models lack the ability to handle complications or diversions. Moreover, they also do not explain the factors influencing intuition or the accuracy of the results of this process. In this paper, we present a simple series-based model for implementing human-like intuition using the principles of connectivity and unknown entities. Using Poker Hand and Car Evaluation datasets, we compare the performance of some well-known models with our intuition model. The aim of the experiment was to predict the maximum number of accurate answers using intuition-based models. We found that the presence of unknown entities, diversion from the current problem scenario, and the identification of weaknesses without normal logic-based execution greatly affect the reliability of the answers. Generally, intuition-based models cannot be a substitute for logic-based mechanisms in handling su...
Vijayaraghavan, V.; Garg, A.; Wong, C. H.; Tai, K.
Computational modeling tools such as molecular dynamics (MD), ab initio, finite element modeling or continuum mechanics models have been extensively applied to study the properties of carbon nanotubes (CNTs) based on given input variables such as temperature, geometry and defects. Artificial intelligence techniques can be used to further complement the application of numerical methods in characterizing the properties of CNTs. In this paper, we have introduced the application of multi-gene genetic programming (MGGP) and support vector regression to formulate the mathematical relationship between the compressive strength of CNTs and input variables such as temperature and diameter. The predictions of compressive strength of CNTs made by these models are compared to those generated using MD simulations. The results indicate that MGGP method can be deployed as a powerful method for predicting the compressive strength of the carbon nanotubes.
Atkinson, David J.; Lawson, Denise L.; James, Mark L.
A brief introduction is given to an automated system called the Spacecraft Health Automated Reasoning Prototype (SHARP). SHARP is designed to demonstrate automated health and status analysis for multi-mission spacecraft and ground data systems operations. The SHARP system combines conventional computer science methodologies with artificial intelligence techniques to produce an effective method for detecting and analyzing potential spacecraft and ground systems problems. The system performs real-time analysis of spacecraft and other related telemetry, and is also capable of examining data in historical context. Telecommunications link analysis of the Voyager II spacecraft is the initial focus for evaluation of the prototype in a real-time operations setting during the Voyager spacecraft encounter with Neptune in August, 1989. The preliminary results of the SHARP project and plans for future application of the technology are discussed.
Ali Aytek; M Asce; Murat Alp
This study proposes an application of two artificial intelligence (AI) techniques for rainfall-runoff modeling: artificial neural networks (ANN) and evolutionary computation (EC). Two different ANN techniques, the feed-forward back-propagation (FFBP) and generalized regression neural network (GRNN) methods, are compared with one EC method, Gene Expression Programming (GEP), a new evolutionary algorithm that evolves computer programs. The daily hydrometeorological data of three rainfall stations and one streamflow station for the Juniata River Basin in the state of Pennsylvania, USA are used in the model development. Statistical parameters such as the average, standard deviation, coefficient of variation, skewness, and minimum and maximum values, as well as criteria such as the mean square error (MSE) and the coefficient of determination (R2), are used to measure the performance of the models. The results indicate that the proposed genetic programming (GP) formulation performs quite well compared to the results obtained by ANNs and is quite practical for use. It is concluded from the results that GEP can be proposed as an alternative to ANN models.
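The two performance criteria named above are straightforward to compute; a plain-Python sketch of MSE and the coefficient of determination, using invented observed/simulated values rather than the Juniata River data:

```python
# Mean square error and coefficient of determination, as used to score
# rainfall-runoff models (toy numbers for illustration only).

def mse(obs, sim):
    return sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)

def r_squared(obs, sim):
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))   # residual sum of squares
    ss_tot = sum((o - mean_o) ** 2 for o in obs)           # total sum of squares
    return 1.0 - ss_res / ss_tot

observed  = [2.0, 3.0, 5.0, 4.0, 6.0]   # e.g. measured daily flows
simulated = [2.2, 2.8, 5.1, 4.3, 5.6]   # e.g. model output
```

An R2 close to 1 and a small MSE indicate a good fit, which is how the FFBP, GRNN and GEP models are compared in the study.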
models of intelligence that will readily yield a NIM. Why not use linear systems theory as a model for a NIM? The successes of traditional linear... intelligence would be easily perceived by all. 1.5 The nature of a NIM. Perhaps the solution is not in an analogy to linear systems theory, as has
Dash, Subhransu; Panigrahi, Bijaya
The book is a collection of high-quality peer-reviewed research papers presented in the Proceedings of the International Conference on Artificial Intelligence and Evolutionary Algorithms in Engineering Systems (ICAEES 2014), held at Noorul Islam Centre for Higher Education, Kumaracoil, India. These research papers provide the latest developments in the broad area of the use of artificial intelligence and evolutionary algorithms in engineering systems. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques, and presents invited papers from the inventors/originators of new applications and advanced technologies.
Becker, Lee A.
Presents and develops a general model of the nature of a learning system and a classification for learning systems. Highlights include the relationship between artificial intelligence and cognitive psychology; computer-based instructional systems; intelligent instructional systems; and the role of the learner's knowledge base in an intelligent…
The enduring innovations in artificial intelligence and robotics offer the promised capacity of computer consciousness, sentience and rationality. The development of these advanced technologies has been considered to merit rights; however, these can only be ascribed in the context of commensurate responsibilities and duties. This represents the discernible next step for evolution in this field. Addressing these needs requires attention to the philosophical perspectives on moral responsibility for artificial intelligence and robotics. A contrast with the moral status of animals may be considered. At a practical level, the attainment of responsibilities by artificial intelligence and robots can benefit from the established responsibilities and duties of human society, as their subsistence exists within this domain. These responsibilities can be further interpreted and crystallized through legal principles, many of which have been conserved from ancient Roman law. The ultimate and unified goal of stipulating these responsibilities resides in the advancement of mankind and the enduring preservation of the core tenets of humanity.
Ali Akbar Ziaee
Artificial Intelligence has the potential to empower humans through enhanced learning and performance. But if this potential is to be realized and accepted, the ethical aspects as well as the technical ones must be addressed. Many engineers claim that AI will become smarter than the human brain, including in scientific creativity, general wisdom and social skills, so we must consider it an important factor in decision making in our social life, and especially in our Islamic societies. The most important challenge will be the quality of representing Islamic values such as piety, obedience, Halal and Haram in the form of semantics. In this paper, I emphasize the role of Divine Islamic values in the application of AI and discuss it from the perspectives of the philosophy of AI and an Islamic viewpoint. Keywords: Value, Expert, Community Development, Artificial Intelligence, Superintelligence, Friendly Artificial Intelligence
Patel, Vimla L.; Shortliffe, Edward H.; Stefanelli, Mario; Szolovits, Peter; Berthold, Michael R.; Bellazzi, Riccardo; Abu-Hanna, Ameen
Summary This paper is based on a panel discussion held at the Artificial Intelligence in Medicine Europe (AIME) conference in Amsterdam, The Netherlands, in July 2007. It had been more than 15 years since Edward Shortliffe gave a talk at AIME in which he characterized artificial intelligence (AI) in medicine as being in its “adolescence” (Shortliffe EH. The adolescence of AI in medicine: Will the field come of age in the ‘90s? Artificial Intelligence in Medicine 1993; 5:93–106). In this article, the discussants reflect on medical AI research during the subsequent years and attempt to characterize the maturity and influence that has been achieved to date. Participants focus on their personal areas of expertise, ranging from clinical decision making, reasoning under uncertainty, and knowledge representation to systems integration, translational bioinformatics, and cognitive issues in both the modeling of expertise and the creation of acceptable systems. PMID:18790621
Altman, R B
Advances in machine intelligence have created powerful capabilities in algorithms that find hidden patterns in data, classify objects based on their measured characteristics, and associate similar patients/diseases/drugs based on common features. However, artificial intelligence (AI) applications in medical data face several technical challenges: complex and heterogeneous datasets, noisy medical data, and the difficulty of explaining their output to users. There are also social challenges related to intellectual property, data provenance, regulatory issues, economics, and liability.
This article is a brief personal account of the past, present, and future of algorithmic randomness, emphasizing its role in inductive inference and artificial intelligence. It is written for a general audience interested in science and philosophy. Intuitively, randomness is a lack of order or predictability. If randomness is the opposite of determinism, then algorithmic randomness is the opposite of computability. Besides many other things, these concepts have been used to quantify Ockham's razor, solve the induction problem, and define intelligence.
The overall goal of the present study was to illustrate the potential of artificial intelligence (AI) techniques in sports using the example of weight training. The research focused in particular on the implementation of pattern-recognition methods for the evaluation of exercises performed on training machines. The data acquisition was carried out using displacement and cable-force sensors attached to various weight machines, thereby enabling the measurement of essential displacement and force determinants during training. On the basis of the gathered data, it was possible to deduce other significant characteristics such as time periods or movement velocities. These parameters were applied to the development of intelligent methods adapted from conventional machine learning concepts, allowing an automatic assessment of exercise technique and providing individuals with appropriate feedback. In practice, the implementation of such techniques could be crucial for investigating the quality of the execution, assisting athletes as well as coaches, optimizing training, and for prevention purposes. For the current study, the data was based on measurements from 15 rather inexperienced participants performing 3-5 sets of 10-12 repetitions on a leg press machine. The initially preprocessed data was used for the extraction of significant features, to which supervised modeling methods were applied. Professional trainers were involved in the assessment and classification processes by analyzing the video-recorded executions. The modeling results obtained so far showed good performance and prediction outcomes, indicating the feasibility and potency of AI techniques in automatically assessing performance on weight-training equipment and providing athletes with prompt advice.
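The feature-extraction step described above can be sketched minimally: from a sampled displacement trace of, say, a leg-press sled, derive a repetition count and a peak velocity. The trace, sampling interval and rep heuristic are invented for illustration and do not reproduce the study's pipeline.

```python
# Toy feature extraction from a displacement signal sampled every dt seconds.

def extract_features(displacement, dt=0.1):
    # finite-difference velocity, with a leading zero so the first push counts
    velocity = [0.0] + [(b - a) / dt for a, b in zip(displacement, displacement[1:])]
    # one repetition = one upward zero-crossing of the velocity signal
    reps = sum(1 for v0, v1 in zip(velocity, velocity[1:]) if v0 <= 0 < v1)
    return {"reps": reps, "peak_velocity": max(velocity)}

# two simulated repetitions: push out (displacement rises), then return (falls)
trace = [0.0, 0.2, 0.4, 0.3, 0.1, 0.0, 0.2, 0.4, 0.3, 0.1, 0.0]
features = extract_features(trace)
```

Features of this kind (durations, velocities, rep counts) are what a supervised classifier would then consume to grade execution quality.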
Artificial intelligence can make numerous contributions to synthetic biology. I would like to suggest three that are related to the past, present and future of artificial intelligence. From the past, works in biology and artificial systems by Turing and von Neumann prove highly interesting to explore within the new framework of synthetic biology, especially with regard to the notions of self-modification and self-replication and their links to emergence and the bottom-up approach. The current epistemological inquiry into emergence and research on swarm intelligence, superorganisms and biologically inspired cognitive architecture may lead to new achievements on the possibilities of synthetic biology in explaining cognitive processes. Finally, the present-day discussion on the future of artificial intelligence and the rise of superintelligence may point to some research trends for the future of synthetic biology and help to better define the boundary of notions such as "life", "cognition", "artificial" and "natural", as well as their interconnections in theoretical synthetic biology.
Networking has become an integral part of our cyber society, and everyone wants to connect with everyone else. With the advancement of network technology, networks have become highly vulnerable to breaches of information, and once information reaches the wrong hands it can do terrible things. In recent years, the number of attacks on networks has increased, drawing the attention of many researchers to this field. There has been much research on intrusion detection lately. Many methods have been devised that are very useful, but they can only detect attacks that have already taken place; such methods will always fail against a novel attack that is new to the networking world. In order to detect new intrusions in the network, researchers have devised artificial intelligence techniques for intrusion detection and prevention systems. In this paper we cover the types of evolutionary techniques that have been devised, their significance, and their modifications.
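The evolutionary idea the survey covers can be illustrated in miniature: a genetic algorithm evolves a packet-rate threshold that separates toy "normal" and "attack" connection records. The data, fitness function and GA parameters are invented for this sketch and are far simpler than any real intrusion detector.

```python
import random

random.seed(1)

NORMAL = [12, 15, 9, 14, 11]     # packets/sec, benign traffic (invented)
ATTACK = [48, 52, 61, 45, 57]    # packets/sec, flood traffic (invented)

def fitness(threshold):
    # records classified correctly: normals below it + attacks at or above it
    return (sum(1 for n in NORMAL if n < threshold)
            + sum(1 for a in ATTACK if a >= threshold))

def evolve(pop_size=10, generations=30):
    population = [random.uniform(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]            # selection (elitist)
        children = [p + random.gauss(0, 3) for p in parents]  # mutation
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

Real evolutionary intrusion-detection systems evolve whole rule sets over many features rather than one threshold, but the select-mutate-evaluate loop is the same.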
The article discusses some paradigms of artificial intelligence in the context of their applications in computer financial systems. The proposed approach has significant potential to increase the competitiveness of enterprises, including financial institutions. However, it requires the effective use of supercomputers, grids and cloud computing. A reference is made to the computing environment for Bitcoin. In addition, we characterize genetic programming and artificial neural networks for preparing investment strategies on the stock exchange market.
Terenziani, Paolo; Montani, Stefania; Bottrighi, Alessio; Molino, Gianpaolo; Torchio, Mauro
We present GLARE, a domain-independent system for acquiring, representing and executing clinical guidelines (GL). GLARE is characterized by the adoption of Artificial Intelligence (AI) techniques in the definition and implementation of the system. First of all, a high-level and user-friendly knowledge representation language has been designed. Second, a user-friendly acquisition tool, which provides expert physicians with various forms of help, has been implemented. Third, a tool for executing GL on a specific patient has been made available. At all the levels above, advanced AI techniques have been exploited, in order to enhance flexibility and user-friendliness and to provide decision support. Specifically, this chapter focuses on the methods we have developed in order to cope with (i) automatic resource-based adaptation of GL, (ii) representation and reasoning about temporal constraints in GL, (iii) decision making support, and (iv) model-based verification. We stress that, although we have devised such techniques within the GLARE project, they are mostly system-independent, so that they might be applied to other guideline management systems.
The economy, which has become more information-intensive, more global and more technologically dependent, is undergoing dramatic changes. The role of logistics is also becoming more and more important. In logistics, the objective of service providers is to fulfill all customers' demands while adapting to the dynamic changes of logistics networks, so as to achieve a higher degree of customer satisfaction and therefore a higher return on investment. In order to provide high-quality service, knowledge and information sharing among departments becomes a must in this fast-changing market environment. In particular, artificial intelligence (AI) technologies have attracted significant attention for enhancing the agility of supply chain management, as well as logistics operations. In this research, a multi-artificial-intelligence system named Integrated Intelligent Logistics System (IILS) is proposed. The objective of IILS is to provide quality logistics solutions that achieve high levels of service performance in the logistics industry. The new feature of this agile intelligent system is the incorporation of intelligence modules drawing on the capabilities of case-based reasoning, multi-agent systems, fuzzy logic and artificial neural networks, achieving the optimization of organizational performance.
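Of the modules listed, case-based reasoning is the simplest to sketch: retrieve the stored case most similar to a new problem and reuse its solution. The case base, features and distance measure below are invented for illustration, not taken from IILS.

```python
# Minimal case-based reasoning retrieval for a logistics decision:
# pick the transport mode of the most similar past shipment.

CASE_BASE = [
    # features = (weight_kg, distance_km, urgent flag); solution = mode chosen
    {"features": (5,   20,  1), "solution": "courier"},
    {"features": (800, 600, 0), "solution": "rail"},
    {"features": (200, 150, 0), "solution": "truck"},
]

def retrieve(query):
    def distance(case):
        # squared Euclidean distance in raw feature space (toy choice;
        # real systems normalise and weight the features)
        return sum((f - q) ** 2 for f, q in zip(case["features"], query))
    return min(CASE_BASE, key=distance)["solution"]

mode = retrieve((250, 180, 0))
```

A full CBR cycle would also adapt the retrieved solution and retain the new case, which is where the multi-agent and fuzzy modules of a system like IILS come in.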
Nabiyev, Vasif; Karal, Hasan; Arslan, Selahattin; Erumit, Ali Kursat; Cebi, Ayca
The purpose of this study is to evaluate the artificial intelligence-based distance education system called ARTIMAT, which has been prepared in order to improve mathematical problem solving skills of the students, in terms of conceptual proficiency and ease of use with the opinions of teachers and students. The implementation has been performed…
Roll, Ido; Wylie, Ruth
The field of Artificial Intelligence in Education (AIED) has undergone significant developments over the last twenty-five years. As we reflect on our past and shape our future, we ask two main questions: What are our major strengths? And, what new opportunities lay on the horizon? We analyse 47 papers from three years in the history of the…
In education, artificial intelligence (AI) has not made much headway. In the one area where it would seem poised to lend the most benefit--assessment--the reliance on standardized tests, intensified by the demands of the No Child Left Behind Act of 2001, which holds schools accountable for whether students pass statewide exams, precludes its use.…
Burford, Anna M.; Wilson, Harold O.
This paper addresses the characteristics and applications of artificial intelligence (AI) as a subsection of computer science, and briefly describes the most common types of AI programs: expert systems, natural language, and neural networks. Following a brief presentation of the historical background, the discussion turns to an explanation of how…
Mathematics is a mere instance of First-Order Predicate Calculus, and therefore belongs to applied monotonic logic. Here we find the limitations of classical logical reasoning and the clear advantages of Fuzzy Logic and many other interesting new tools. We present some of the most useful tools of this new field of mathematics, so-called Artificial Intelligence.
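The contrast drawn above between two-valued and fuzzy logic fits in a few lines: membership degrees live in [0, 1] and the classical connectives become min, max and complement. The "tall" membership function is an invented example.

```python
# Fuzzy membership and the standard Zadeh operators.

def tall(height_cm):
    # piecewise-linear membership: 0 below 160 cm, 1 above 190 cm
    return min(1.0, max(0.0, (height_cm - 160) / 30))

def fuzzy_and(a, b):   # t-norm: minimum
    return min(a, b)

def fuzzy_or(a, b):    # t-conorm: maximum
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

degree = tall(175)     # partially tall, unlike the classical true/false
```

Note that fuzzy_or(a, fuzzy_not(a)) need not equal 1, which is exactly the departure from classical (monotonic, two-valued) logic the article points to.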
Dillon, Richard W.
Describes a four-part curriculum that can serve as a model for incorporating artificial intelligence (AI) into the high school computer curriculum. The model includes examining questions fundamental to AI, creating and designing an expert system, language processing, and creating programs that integrate machine vision with robotics and…
Top, J.L.; Akkermans, J.M.; Breedveld, P.C.
Some interdisciplinary issues concerning artificial intelligence (AI) are explored in relation to modelling in physics and engineering. A short survey is given of automated qualitative reasoning about physical systems, which in recent years has become an active research area in AI, and has been part
Levinson, Stephen E.
Revisits the classic debate on whether there can be an artificial creation that behaves and uses language with intelligence and agency. Argues that many moral and spiritual objections to this notion are not grounded either ethically or empirically. (Author/VWL)
In this paper, we propose two intelligent techniques for the improvement of Direct Torque Control (DTC) of an induction motor, namely fuzzy logic (FL) and artificial neural networks (ANN), applied to the selection of the switching voltage vector. Comparison with conventional direct torque control (DTC) shows that the use of DTC_FL and DTC_ANN reduces the torque, stator flux and current ripples. The validity of the proposed methods is confirmed by the simulation results.
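For context, conventional DTC picks the inverter voltage vector from a lookup table indexed by the stator-flux sector and the signs of the torque and flux errors; the FL and ANN variants above replace this crisp table with soft selectors. A hedged, Takahashi-style sketch of the classical table (simplified: active vectors V1..V6 only, no zero vectors):

```python
# Classical DTC switching-vector selection (simplified sketch).

def select_vector(sector, flux_up, torque_up):
    """sector: 1..6; flux_up/torque_up: True to increase flux/torque."""
    if flux_up and torque_up:
        offset = 1        # V(k+1): raises both flux and torque
    elif flux_up:
        offset = -1       # V(k-1): raises flux, lowers torque
    elif torque_up:
        offset = 2        # V(k+2): lowers flux, raises torque
    else:
        offset = -2       # V(k-2): lowers both
    return (sector - 1 + offset) % 6 + 1   # wrap around the six active vectors

vector = select_vector(sector=1, flux_up=True, torque_up=True)
```

Because each hysteresis comparator outputs only a sign, the crisp table switches abruptly; replacing it with fuzzy or neural selection is what smooths the torque and flux ripple in the proposed schemes.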
While classic artificial intelligence systems still struggle to incorporate commonsense knowledge properly, situated and embodied artificial intelligence (SEAI) aims to build animats that acquire a common-sense understanding of the world via interactions between simulated brains, bodies and environments. Neuroscientists believe that much of this common sense involves predictive models for physical activities, but the transfer of sensorimotor skill knowledge to cognition is non-trivial, indicating that SEAI may meet a daunting challenge of its own. This paper considers the neurological bases for implicit procedural and explicit declarative common sense, and the possibilities for its transfer from the former to the latter. This helps assess the prospects for SEAI eventually to surpass GOFAI (good old-fashioned AI) in the quest for generally intelligent systems.
Ibrahiem M. M-El-Emary
This study is concerned with a practical application of distributed artificial intelligence for managing a high-data-rate, bus-structured local area computer network that uses a deterministic multiple access protocol. In the selected network, which is managed using distributed artificial intelligence, the dynamic sharing of the available bandwidth among stations is achieved by forming a "train" to which each station may append a packet after issuing a reservation. Reservation and packet transmissions are governed by the reception of control packets (tokens) issued by the network end stations. The suggested management approach depends on intelligent autonomous agents, which are responsible for various tasks, among them: the election of the end stations, recovery from failures, and the insertion of new stations into the network. All these tasks are based on the use of special tokens.
Ortiz R, J. M. [Escuela Politecnica Superior, Departamento de Electrotecnia y Electronica, Avda. Menendez Pidal s/n, Cordoba (Spain); Martinez B, M. R.; Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Calle Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Gallego D, E.; Lorente F, A. [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, ETSI Industriales, C. Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Mendez V, R.; Los Arcos M, J. M.; Guerrero A, J. E., E-mail: email@example.com [CIEMAT, Laboratorio de Metrologia de Radiaciones Ionizantes, Avda. Complutense 22, 28040 Madrid (Spain)
With the Bonner sphere spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, regularization, parametrization, least squares and maximum entropy are some of the techniques utilized for unfolding. In the last decade, methods based on artificial intelligence technology have been used. Approaches based on genetic algorithms and artificial neural networks (ANN) have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite the advantages of ANNs, they still have some drawbacks, mainly in the design process of the network, e.g. the optimum selection of the architectural and learning parameters. In recent years, hybrid technologies combining ANNs and genetic algorithms have been utilized too. In this work, several ANN topologies were trained and tested using ANNs and genetically evolved artificial neural networks, with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. A comparative study of both procedures has been carried out. (Author)
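The hybrid idea above can be shown in miniature: a genetic algorithm evolves the weights of a tiny one-neuron "network" mapping two sphere count rates to a fluence value. The training pairs are invented; a real unfolding problem involves many spheres, many energy bins and evolved architectures as well as weights.

```python
import random

random.seed(2)

# (count_rate_1, count_rate_2) -> fluence, generated from weights (0.4, 1.2)
DATA = [((10, 5), 10.0), ((4, 8), 11.2), ((6, 2), 4.8)]

def loss(w):
    # sum of squared prediction errors of the linear neuron
    return sum((w[0] * c1 + w[1] * c2 - y) ** 2 for (c1, c2), y in DATA)

def evolve(pop_size=20, generations=200):
    pop = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loss)
        parents = pop[: pop_size // 2]                      # elitist selection
        children = [(w0 + random.gauss(0, 0.1), w1 + random.gauss(0, 0.1))
                    for w0, w1 in parents]                   # Gaussian mutation
        pop = parents + children
    return min(pop, key=loss)

best = evolve()
```

Evolving weights this way sidesteps the manual tuning of learning parameters that the abstract identifies as the main drawback of plain ANN training.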
Smith, A E; Nugent, C D; McClean, S I
The application of artificial intelligence systems is still not widespread in the medical field; however, there is an increasing necessity for them to handle the surfeit of information available. One drawback to their implementation is the lack of criteria or guidelines for the evaluation of these systems. This is the primary issue in their acceptability to clinicians, who require them for decision support and therefore need evidence that these systems meet the special safety-critical requirements of the domain. This paper presents evidence that the most prevalent form of intelligent system, neural networks, is generally not being evaluated rigorously with regard to classification precision. A taxonomy of the types of evaluation tests that can be carried out, to gauge the inherent performance of the outputs of intelligent systems, has been assembled, and the results are presented in a clear and concise form, which should be applicable to all intelligent classifiers for medicine.
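The kinds of evaluation tests such a taxonomy covers start from the standard confusion-matrix measures; the counts below are an invented example, not data from any reviewed system.

```python
# Confusion-matrix evaluation of a binary medical classifier.
# tp/fp/tn/fn = true/false positives and negatives.

def evaluate(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),           # true positive rate
        "specificity": tn / (tn + fp),           # true negative rate
        "precision":   tp / (tp + fp),           # positive predictive value
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
    }

metrics = evaluate(tp=80, fp=10, tn=90, fn=20)
```

Reporting only accuracy, as many of the surveyed systems do, hides exactly the sensitivity/specificity trade-off that matters in a safety-critical clinical setting.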
Artificial Intelligence Applied to the Command, Control, Communications, and Intelligence of the U.S. Central Command (Army War College, Carlisle Barracks, PA; AD-A137 205). Chapter III footnotes cite Avron Barr and Edward A. Feigenbaum, eds., The Handbook of Artificial Intelligence.
Sherwood, R. L.; Chien, S.; Castano, R.; Rabideau, G.
The Autonomous Sciencecraft Experiment (ASE) will fly onboard the Air Force TechSat 21 constellation of three spacecraft scheduled for launch in 2006. ASE uses onboard continuous planning, robust task and goal-based execution, model-based mode identification and reconfiguration, and onboard machine learning and pattern recognition to radically increase science return by enabling intelligent downlink selection and autonomous retargeting. Demonstration of these capabilities in a flight environment will open up tremendous new opportunities in planetary science, space physics, and earth science that would be unreachable without this technology.
Hussain Mutlag, Ammar; Mohamed, Azah; Shareef, Hussain
Maximum power point tracking (MPPT) is normally required to improve the performance of photovoltaic (PV) systems. This paper presents artificial-intelligence-based maximum power point tracking (AI-MPPT) considering three artificial intelligence techniques, namely an artificial neural network (ANN), an adaptive neuro-fuzzy inference system with seven triangular fuzzy sets (7-tri), and an adaptive neuro-fuzzy inference system with seven gbell fuzzy sets. The AI-MPPT is designed for 25 SolarTIFSTF-120P6 PV panels with a peak capacity of 3 kW. A complete PV system is modelled using 300,000 data samples and simulated in MATLAB/Simulink. The AI-MPPT has been tested under real environmental conditions for two days, from 8:00 to 18:00. The results showed that the ANN-based MPPT gives the most accurate performance, followed by the 7-tri-based MPPT.
Ponce-Espinosa, Hiram; Molina, Arturo
This monograph describes the synthesis and use of biologically-inspired artificial hydrocarbon networks (AHNs) for approximation models associated with machine learning and a novel computational algorithm with which to exploit them. The reader is first introduced to various kinds of algorithms designed to deal with approximation problems and then, via some conventional ideas of organic chemistry, to the creation and characterization of artificial organic networks and AHNs in particular. The advantages of using organic networks are discussed with the rules to be followed to adapt the network to its objectives. Graph theory is used as the basis of the necessary formalism. Simulated and experimental examples of the use of fuzzy logic and genetic algorithms with organic neural networks are presented and a number of modeling problems suitable for treatment by AHNs are described: · approximation; · inference; · clustering; · control; · class...
Can we make machines that think and act like humans or other natural intelligent agents? The answer to this question depends on how we see ourselves and how we see the machines in question. Classical AI and cognitive science had claimed that cognition is computation, and can thus be reproduced on other computing machines, possibly surpassing the abilities of human intelligence. This consensus has now come under threat and the agenda for the philosophy and theory of AI must be set anew, re-defining the relation between AI and Cognitive Science. We can re-claim the original vision of general AI from the technical AI disciplines; we can reject classical cognitive science and replace it with a new theory (e.g. embodied); or we can try to find new ways to approach AI, for example from neuroscience or from systems theory. To do this, we must go back to the basic questions on computing, cognition and ethics for AI. The 30 papers in this volume provide cutting-edge work from leading researchers that define where we s...
Northeast Artificial Intelligence Consortium (NAIC) Annual Report, RADC-TR-89-259, Vol. II (of twelve), interim report, October 1989 (AD-A218 154); monitoring organization: Rome Air Development Center. References include N. J. Nilsson, Principles of Artificial Intelligence (Tioga), and the Encontro Português de Inteligência Artificial (EPIA), Oporto, Portugal, September 1985.
Full Text Available The analysis of the text content in emails, blogs, tweets, forums and other forms of textual communication constitutes what we call text analytics. Text analytics is applicable to most industries: it can help analyze millions of emails; you can analyze customers' comments and questions in forums; you can perform sentiment analysis by measuring positive or negative perceptions of a company, brand, or product. Text analytics has also been called text mining, and is a subcategory of the Natural Language Processing (NLP) field, one of the founding branches of artificial intelligence, dating back to the 1950s, when an interest in understanding text originally developed. Currently, text analytics is often considered the next step in Big Data analysis. Text analytics has a number of subdivisions: information extraction, named entity recognition, Semantic Web annotated domain representation, and many more. Several techniques are currently used, and some of them have gained a lot of attention, such as machine learning, to show a semi-supervised enhancement of systems, but they also present a number of limitations which make them not always the only or the best choice. We conclude with current and near-future applications of text analytics.
In recent years, the possibility of creating new conducting polymers by exploring the concept of copolymerization (different structural monomeric units) has attracted much attention from experimental and theoretical points of view. Due to the rich reactivity of carbon, an almost infinite number of new structures is possible, and the procedure of trial and error has been the rule. In this work we have used a methodology capable of generating new structures with pre-specified properties. It combines the use of the negative factor counting (NFC) technique with artificial intelligence methods (genetic algorithms, GAs). We present the results of a case study for poly(phenylenesulfide phenyleneamine) (PPSA), a copolymer formed by combination of the homopolymers polyaniline (PANI) and polyphenylenesulfide (PPS). The methodology was successfully applied to the problem of obtaining binary up to quinternary disordered polymeric alloys with a pre-specified gap value or exhibiting metallic properties. It is completely general and can in principle be adapted to the design of new classes of materials with pre-specified properties.
McManus, John W.; Goodrich, Kenneth H.
A research program investigating the use of Artificial Intelligence (AI) techniques to aid in the development of a Tactical Decision Generator (TDG) for Within Visual Range (WVR) air combat engagements is discussed. The application of AI programming and problem solving methods in the development and implementation of the Computerized Logic For Air-to-Air Warfare Simulations (CLAWS), a second generation TDG, is presented. The Knowledge-Based Systems used by CLAWS to aid in the tactical decision-making process are outlined in detail, and the results of tests to evaluate the performance of CLAWS versus a baseline TDG developed in FORTRAN to run in real-time in the Langley Differential Maneuvering Simulator (DMS), are presented. To date, these test results have shown significant performance gains with respect to the TDG baseline in one-versus-one air combat engagements, and the AI-based TDG software has proven to be much easier to modify and maintain than the baseline FORTRAN TDG programs. Alternate computing environments and programming approaches, including the use of parallel algorithms and heterogeneous computer networks are discussed, and the design and performance of a prototype concurrent TDG system are presented.
Susana Mejía M.
Emotions have been demonstrated to be an important aspect of human intelligence and to play a significant role in human decision-making processes. Emotions are not only feelings but also processes of establishing, maintaining or disrupting the relation between the organism and the environment. In the present paper, several features of social and developmental psychology are introduced, especially concepts related to theories of emotions and the mathematical tools applied in psychology (i.e., dynamic systems and fuzzy logic). Later, five models that infer emotions from a single event, in AV-space, are presented and discussed, along with the finding that fuzzy logic can measure human emotional states.
Nesbeth, Darren N; Zaikin, Alexey; Saka, Yasushi; Romano, M Carmen; Giuraniuc, Claudiu V; Kanakov, Oleg; Laptyeva, Tetyana
The design of synthetic gene networks (SGNs) has advanced to the extent that novel genetic circuits are now being tested for their ability to recapitulate archetypal learning behaviours first defined in the fields of machine and animal learning. Here, we discuss the biological implementation of a perceptron algorithm for linear classification of input data. An expansion of this biological design that encompasses cellular 'teachers' and 'students' is also examined. We also discuss implementation of Pavlovian associative learning using SGNs and present an example of such a scheme and in silico simulation of its performance. In addition to designed SGNs, we also consider the option to establish conditions in which a population of SGNs can evolve diversity in order to better contend with complex input data. Finally, we compare recent ethical concerns in the field of artificial intelligence (AI) and the future challenges raised by bio-artificial intelligence (BI).
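The perceptron algorithm for linear classification mentioned above is, in its classical software form, only a few lines. The toy AND task below is an illustrative example, not data from the paper:

```python
# Classic perceptron: adjust weights toward each misclassified example
# until the linearly separable data is split correctly.
def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # -1, 0 or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Linearly separable toy inputs: logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

The biological implementation discussed in the paper must realise the same weighted-sum-and-threshold computation with gene expression levels standing in for the weights and inputs.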
Ismail, Rahmat Izaizi B.; Ismail Alnaimi, Firas B.; AL-Qrimli, Haidar F.
With increased competitiveness in power generation industries, more resources are directed at optimizing plant operation, including fault detection and diagnosis. One of the most powerful tools in fault detection and diagnosis is artificial intelligence (AI). Faults should be detected early so correct mitigation measures can be taken, whilst false alarms should be eschewed to avoid unnecessary interruption and downtime. For the last few decades there has been major interest in intelligent condition monitoring system (ICMS) applications in power plants, especially with the development of AI and particularly artificial neural networks (ANNs). ANNs are based on quite simple principles but take advantage of their mathematical nature, namely non-linear iteration, to demonstrate powerful problem-solving ability. With the massive possibilities and room for improvement in AI, the inspiration for researching them is apparent, and literally hundreds of papers have been published discussing the findings of hybrid AI for condition monitoring purposes. In this paper, studies of ANN and genetic algorithm (GA) applications are presented.
This study presents a review of biodegradability modeling efforts, including a detailed assessment of two models developed using an artificial-intelligence-based methodology. Validation results for these models using an independent, quality-reviewed database demonstrate that the models perform well when compared to another commonly used biodegradability model against the same data. The ability of models induced by an artificial intelligence methodology to accommodate complex interactions in detailed systems, and the demonstrated reliability of the approach evaluated by this study, indicate that the methodology may have application in broadening the scope of biodegradability models. Given adequate data on the biodegradability of chemicals under environmental conditions, this may allow for the development of future models that include factors such as surface-interface impacts on biodegradability.
Mikhailov, V.; Galdeano, A.; Diament, M.; Gvishiani, A.; Agayan, S.; Bogoutdinov, Sh.; Graeva, E.; Sailhac, P.
Results of Euler deconvolution strongly depend on the selection of viable solutions. Synthetic calculations using multiple causative sources show that Euler solutions cluster in the vicinity of causative bodies even when they do not group densely about the perimeter of the bodies. We have developed a clustering technique to serve as a tool for selecting appropriate solutions. The method RODIN, employed in this study, is based on artificial intelligence and was originally designed for problems of classification of large data sets. It is based on a geometrical approach to studying object concentration in a finite metric space of any dimension. The method uses a formal definition of cluster and includes free parameters that facilitate the search for clusters of given properties. Tests on synthetic and real data showed that the clustering technique outlines causative bodies more accurately than other methods of discriminating Euler solutions. In complicated field cases, such as the magnetic field in the Gulf of Saint Malo region (Brittany, France), the method provides geologically insightful solutions. Other advantages of applying the clustering method are: - Clusters provide solutions associated with particular bodies or parts of bodies, permitting the analysis of different clusters of Euler solutions separately. This may allow computation of average parameters for individual causative bodies. - Those measurements of the anomalous field that yield clusters also form dense clusters themselves. The application of the clustering technique thus outlines areas where the influence of different causative sources is more prominent. This allows one to focus on areas for reinterpretation, using different window sizes, structural indices and so on.
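RODIN itself is not specified in the abstract; as a hedged illustration of the general idea (keep only Euler solutions that fall in dense groups, discard isolated ones) a minimal single-linkage grouping might look like this. The radius and minimum cluster size are invented thresholds, not the method's free parameters:

```python
# Group 2-D "Euler solutions" by proximity and keep only dense groups,
# a crude stand-in for a cluster-based solution selector.
def dense_clusters(points, radius=1.0, min_size=3):
    """Greedy single-linkage grouping; keep groups with >= min_size points."""
    unassigned = list(points)
    clusters = []
    while unassigned:
        cluster = [unassigned.pop()]
        changed = True
        while changed:
            changed = False
            for p in unassigned[:]:
                if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= radius ** 2
                       for q in cluster):
                    cluster.append(p)
                    unassigned.remove(p)
                    changed = True
        clusters.append(cluster)
    return [c for c in clusters if len(c) >= min_size]

# Two tight groups of solutions near hypothetical causative bodies,
# plus scattered spurious solutions that should be rejected.
pts = [(0, 0), (0.3, 0.2), (0.1, 0.5), (10, 10), (10.4, 9.8), (9.9, 10.3),
       (5, 50), (-30, 4)]
kept = dense_clusters(pts)
```

Each surviving cluster can then be analysed separately, e.g. by averaging the solution parameters within it, as the abstract suggests for individual causative bodies.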
Adamek, Marek; Mulawka, Jan
This paper presents the temporal logic inference engine developed at our university. It is an attempt to demonstrate the implementation and practical application of the temporal logic LNC developed at Cardinal Stefan Wyszynski University in Warsaw. The paper describes the fundamentals of LNC logic and the architecture and implementation of the inference engine. The practical application is shown by providing the solution, in terms of LNC logic, to the Missionaries and Cannibals problem, popular in artificial intelligence. Both the problem formulation and the inference engine are described in detail.
Ahmad M. Sarhan
Problem statement: The study presented a design that converted the Connect 4 game into a real-time game by incorporating time restraints. Approach: The design used Artificial Intelligence (AI) in implementing the Connect 4 game. The AI for this game was based on influence mapping. Results: Waterfall-based AI software was developed for a Connect 4 game. Conclusion: A real-time Connect 4 game was successfully designed and implemented, with a GUI, using the C++ programming language.
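Influence mapping, the basis of the paper's AI, can be sketched by scoring each empty cell by the number of unblocked four-in-a-row windows passing through it. The board encoding and scoring details below are assumptions for illustration, not the paper's implementation; they do show why the centre column dominates an empty board:

```python
ROWS, COLS = 6, 7

def windows():
    """All horizontal/vertical/diagonal 4-cell windows on the board."""
    for r in range(ROWS):
        for c in range(COLS):
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                cells = [(r + i * dr, c + i * dc) for i in range(4)]
                if all(0 <= rr < ROWS and 0 <= cc < COLS for rr, cc in cells):
                    yield cells

def influence(board, player):
    """board[r][c] in {'.', 'X', 'O'}; higher score = more potential lines."""
    opponent = 'O' if player == 'X' else 'X'
    score = [[0] * COLS for _ in range(ROWS)]
    for cells in windows():
        # A window still "counts" only if the opponent has no piece in it.
        if all(board[r][c] != opponent for r, c in cells):
            for r, c in cells:
                if board[r][c] == '.':
                    score[r][c] += 1
    return score

empty = [['.'] * COLS for _ in range(ROWS)]
imap = influence(empty, 'X')
# On an empty board every column drops to the bottom row (index ROWS - 1).
best_col = max(range(COLS), key=lambda c: imap[ROWS - 1][c])
```

A move selector built on this map simply drops into the column whose landing cell has the highest influence, breaking ties by some secondary rule.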
The cinematographic version of Isaac Asimov's classic science fiction book (I, Robot) is used as a starting point, from the artificial intelligence perspective, in order to analyze what it is to have a self. Uniqueness, or the impossibility of exchange, and the continuity of being one's self are put forward to understand the movie's characters as well as the possibilities of feeling self-conscious.
Gevarter, W. B.
An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.
Heard, Astrid E.
This paper describes the Expert System for Operations Distributed Users (EXODUS), a knowledge-based artificial intelligence system developed for the four Firing Rooms at the Kennedy Space Center. EXODUS is used by the Shuttle engineers and test conductors to monitor and control the sequence of tasks required for processing and launching Shuttle vehicles. In this paper, attention is given to the goals and the design of EXODUS, the operational requirements, and the extensibility of the technology.
Hornfeck, William A.
Large data processing installations represent target systems for effective applications of artificial intelligence (AI) constructs. The system organization of a large data processing facility at the NASA Marshall Space Flight Center is presented. The methodology and the issues which are related to AI application to automated operations within a large-scale computing facility are described. Problems to be addressed and initial goals are outlined.
Zhou, Wengang; Satheesh, P
This book presents research on emerging computational intelligence techniques and tools, with a particular focus on new trends and applications in health care. Healthcare is a multi-faceted domain, which incorporates advanced decision-making, remote monitoring, healthcare logistics, operational excellence and modern information systems. In recent years, the use of computational intelligence methods to address the scale and the complexity of the problems in healthcare has been investigated. This book discusses various computational intelligence methods that are implemented in applications in different areas of healthcare. It includes contributions by practitioners, technology developers and solution providers.
Technologies relating to the design of artificial intelligence (AI) systems are now actively developing around the world. In this paper we wish to consider not the tactical but the strategic problems of this process. Few interesting papers on this topic are currently available, but they exist. This is related to the fact that most serious experts are occupied with solving tactical problems and often do not think about further prospects. However, the situation at the origin of cybernetics was different: then, these problems were actively considered. Therefore we construct our paper as a review of the problems of cybernetics as they appeared to the participants of a 1961 symposium. We try to give a review of these prospects from the point of view of up-to-date physical and cybernetic science and its latest achievements.
Ge, Jianqiao; Han, Shihui
Although humans have inevitably interacted with both human and artificial intelligence in real life situations, it is unknown whether the human brain engages homologous neurocognitive strategies to cope with both forms of intelligence. To investigate this, we scanned subjects, using functional MRI, while they inferred the reasoning processes conducted by human agents or by computers. We found that the inference of reasoning processes conducted by human agents but not by computers induced increased activity in the precuneus but decreased activity in the ventral medial prefrontal cortex and enhanced functional connectivity between the two brain areas. The findings provide evidence for distinct neurocognitive strategies of taking others' perspective and inhibiting the process referenced to the self that are specific to the comprehension of human intelligence. PMID:18665211
Shorouq F. Eletter
Problem statement: Despite the increase in consumer loan defaults and competition in the banking market, most Jordanian commercial banks are reluctant to use artificial intelligence software systems for supporting loan decisions. Approach: This study developed a proposed model that identifies artificial neural networks as an enabling tool for evaluating credit applications to support loan decisions in Jordanian commercial banks. A multi-layer feed-forward neural network with the backpropagation learning algorithm was used to build the proposed model. Results: Different representative cases of loan applications were considered, based on the guidelines of different banks in Jordan, to validate the neural network model. Conclusion: The results indicated that artificial neural networks are a successful technology that can be used in loan application evaluation in Jordanian commercial banks.
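A multi-layer feed-forward network trained with backpropagation, the model class used in the study, can be sketched in pure Python. The two-feature loan "applications" (scaled income and debt ratio) and the network size below are invented toy values, not the study's data:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyMLP:
    """Two inputs, one sigmoid hidden layer, one sigmoid output."""
    def __init__(self, hidden=3):
        self.w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        self.w2 = [random.uniform(-1, 1) for _ in range(hidden)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
                  for ws, b in zip(self.w1, self.b1)]
        self.o = sigmoid(sum(w * h for w, h in zip(self.w2, self.h)) + self.b2)
        return self.o

    def train_step(self, x, target, lr=0.5):
        o = self.forward(x)
        d_o = (o - target) * o * (1 - o)            # output delta
        for j, h in enumerate(self.h):
            d_h = d_o * self.w2[j] * h * (1 - h)    # hidden delta (pre-update w2)
            self.w2[j] -= lr * d_o * h
            for i in range(2):
                self.w1[j][i] -= lr * d_h * x[i]
            self.b1[j] -= lr * d_h
        self.b2 -= lr * d_o
        return (o - target) ** 2

# (income, debt ratio) -> approve (1) / reject (0), hypothetical toy cases
data = [((0.9, 0.1), 1), ((0.8, 0.2), 1), ((0.2, 0.9), 0), ((0.1, 0.7), 0)]
net = TinyMLP()
losses = [sum(net.train_step(x, t) for x, t in data) for _ in range(500)]
```

The output can be read as an approval score; in practice a bank would threshold it and, as the study notes, validate against representative historical applications.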
Makloski, Chelsea L
This article provides an overview of the current breeding techniques used in small animal reproduction today with an emphasis on artificial insemination techniques such as transvaginal and transcervical insemination as well as surgical deposition of semen in the uterus and oviduct. Breeding management and ovulation timing will be mentioned but are discussed in further detail in another article in this issue.
Wilson Luiz Sanvito
After initial considerations about intelligence, a comparative study between biological and artificial intelligence is made. Specialists in artificial intelligence hold that intelligence is simply a matter of manipulating physical symbols; in this sense, the goal of artificial intelligence is to understand how brain intelligence works in terms of engineering concepts and techniques. Philosophers of science, by contrast, believe that computers can have a syntax but never a semantics: they can follow rules, such as those of arithmetic or grammar, but cannot understand what to us are the meanings of symbols. The present work stresses that the brain/mind complex constitutes a monolithic system that operates with emergent functions at several levels of hierarchical organization. These hierarchical levels are not reducible to one another; there are at least three (neuronal, functional and semantic), and they operate within an interactional framework. From an epistemological point of view, the brain/mind complex uses both logical and non-logical mechanisms to deal with everyday problems. Logic is necessary for the thought process, but it is not sufficient. Emphasis is given to non-logical mechanisms (fuzzy logic, heuristics, intuitive reasoning), which allow the mind to develop strategies for finding solutions.
Devinney, E.; Guinan, E.; Bradstreet, D.; DeGeorge, M.; Giammarco, J.; Alcock, C.; Engle, S.
The explosive growth of observational capabilities and information technology over the past decade has brought astronomy to a tipping point - we are going to be deluged by a virtual fire hose (more like Niagara Falls!) of data. An important component of this deluge will be newly discovered eclipsing binary stars (EBs) and other valuable variable stars. As exploration of the Local Group galaxies grows via current and new ground-based and satellite programs, the number of EBs is expected to grow explosively from some 10,000 today to 8 million as GAIA comes online. These observational advances will present a unique opportunity to study the properties of EBs formed in galaxies with vastly different dynamical, star formation, and chemical histories than our home Galaxy. Thus the study of these binaries (e.g., from light curve analyses) is expected to provide clues about the star formation rates and dynamics of their host galaxies as well as the possible effects of varying chemical abundance on stellar evolution and structure. Additionally, minimal-assumption-based distances to Local Group objects (and possibly 3-D mapping within these objects) will be returned. These huge datasets of binary stars will provide tests of current theories (or suggest new theories) regarding binary star formation and evolution. However, this enormous volume of data will far exceed the capabilities of analysis via human examination. To meet the daunting challenge of successfully mining this vast potential of EBs and variable stars for astrophysical results with minimum human intervention, we are developing new data processing techniques and methodologies. Faced with an overwhelming volume of data, our goal is to integrate technologies of Machine Learning and Pattern Processing (Artificial Intelligence [AI]) into the data processing pipelines of the major current and future ground- and space-based observational programs. Data pipelines of the future will have to carry us from observations to
QING Xiao-xia; WANG Bo; MENG De-tao
Current applications of artificial intelligence technology to wastewater treatment in China are summarized. Wastewater treatment plants use expert systems mainly in operational decision-making and fault diagnosis of system operation, use artificial neural networks for system modeling, water quality forecasting and soft measurement, and use fuzzy control technology for the intelligent control of the wastewater treatment process. Finally, the main problems in applying artificial intelligence technology to wastewater treatment in China are analyzed.
Xing, Bo; Battle, Kimberly; Marwala, Tshilidzi; Nelwamondo, Fulufhelo V
Product take-back legislation forces manufacturers to bear the costs of collection and disposal of products that have reached the end of their useful lives. In order to reduce these costs, manufacturers can consider reuse, remanufacturing and/or recycling of components as an alternative to disposal. The implementation of such alternatives usually requires appropriate reverse supply chain management. As the concept of the reverse supply chain gains popularity in practice, the use of artificial intelligence approaches in this area is also becoming popular. The purpose of this paper is therefore to give an overview of recent publications concerning the application of artificial intelligence techniques to the reverse supply chain, with emphasis on certain types of product returns.
Semalat, Ali; Bocewicz, Grzegorz; Sitek, Paweł; Nielsen, Izabela; García, Julián; Bajo, Javier
The 13th International Symposium on Distributed Computing and Artificial Intelligence 2016 (DCAI 2016) is a forum to present applications of innovative techniques for studying and solving complex problems. The exchange of ideas between scientists and technicians from both the academic and industrial sector is essential to facilitate the development of systems that can meet the ever-increasing demands of today’s society. The present edition brings together past experience, current work and promising future trends associated with distributed computing, artificial intelligence and their application in order to provide efficient solutions to real problems. This symposium is organized by the University of Sevilla (Spain), Osaka Institute of Technology (Japan), and the Universiti Teknologi Malaysia (Malaysia).
A Technique of Artificial Intelligence to Fit One of the Elements that Define a Non-Uniform Rational B-Spline (NURBS)
Sandra P Mateus
Among existing artificial intelligence techniques, two artificial neural networks (ANNs) were selected and adapted to fit one of the elements that define a Non-Uniform Rational B-Spline (NURBS) and thus obtain an appropriate modeling of the NURBS. The selected elements were the control points. The ANNs used were the radial basis function network and the Kohonen model, or self-organizing map. Based on the analysis of the results and the characterization of the ANNs, the radial basis function network had a more appropriate and optimal performance for a large number of data points, which is a disadvantage of the self-organizing maps: in that model, extra processing must be done to determine the winning neuron and to readjust the weights.
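The radial basis function approach the study favoured can be sketched as exact interpolation: one Gaussian unit per sample, with output weights obtained by solving the resulting linear system. The 1-D sample values and kernel width below are illustrative, not the study's control-point data:

```python
import math

def gauss(r, width=1.0):
    """Gaussian radial basis function of distance r."""
    return math.exp(-(r / width) ** 2)

def solve(A, y):
    """Naive Gaussian elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

def rbf_fit(xs, ys):
    """Centers at the samples themselves => exact interpolation."""
    A = [[gauss(abs(x - c)) for c in xs] for x in xs]
    return solve(A, ys)

def rbf_eval(xs, w, x):
    return sum(wi * gauss(abs(x - c)) for wi, c in zip(w, xs))

xs = [0.0, 1.0, 2.0, 3.0]   # sample positions (e.g. curve parameters)
ys = [0.0, 1.0, 0.5, 2.0]   # target values (e.g. one control-point coordinate)
w = rbf_fit(xs, ys)
```

With fewer centers than samples this becomes a least-squares fit rather than interpolation, which is the usual configuration for large data sets, where the study found the RBF network to outperform the self-organizing map.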
Meiring, Gys Albertus Marthinus; Myburgh, Hermanus Carel
In this paper, various driving style analysis solutions are investigated. An in-depth investigation is performed to identify the relevant machine learning and artificial intelligence algorithms utilised in current driver behaviour and driving style analysis systems. This review therefore serves as a trove of information, and will inform the specialist and the student regarding the current state of the art in driving style analysis systems, the application of these systems, and the underlying artificial intelligence algorithms applied to these applications. The aim of the investigation is to evaluate the possibilities for unique driver identification utilizing the approaches identified in other driver behaviour studies. It was found that fuzzy logic inference systems, hidden Markov models and support vector machines offer promising capabilities for unique driver identification, provided model complexity can be reduced.
Diprose, William; Buist, Nicholas
Artificial intelligence (AI) is a rapidly growing field with a wide range of applications. Driven by economic constraints and the potential to reduce human error, we believe that over the coming years AI will perform a significant amount of the diagnostic and treatment decision-making traditionally performed by the doctor. Humans would continue to be an important part of healthcare delivery, but in many situations, less expensive fit-for-purpose healthcare workers could be trained to 'fill the gaps' where AI is less capable. As a result, the role of the doctor as an expensive problem-solver would become redundant.
The fluctuations in Arg111, a significantly fluctuating residue in cathepsin K, were locally regulated by modifying Arg111 to Gly111. The binding properties of 15 dipeptides in the modified protein were analyzed by molecular simulations, and modeled as decision trees using artificial intelligence. The decision tree of the modified protein significantly differed from that of unmodified cathepsin K, and the Arg-to-Gly modification exerted a remarkable effect on the peptide binding properties. By locally regulating the fluctuations of a protein, we may greatly alter the original functions of the protein, enabling novel applications in several fields.
Computational simulation is an essential tool for the prediction of fluid flow. Many powerful simulation programs exist today. However, using these programs to reliably analyze fluid flow and other physical situations requires considerable human effort and expertise to set up a simulation, determine whether the output makes sense, and repeatedly run the simulation with different inputs until a satisfactory result is achieved. Automating this process is not only of considerable practical importance but will also significantly advance basic artificial intelligence (AI) research in reasoning about the physical world.
YUAN Xiao-mei; CHEN You-hua; CHEN Zhi-jiu
Artificial intelligence is applied to the simulation of the automotive air-conditioning system (AACS). According to the system's characteristics, a model of the AACS based on a neural network is developed. Different control methods for the AACS are discussed through simulation based on this model. The result shows that neuro-fuzzy control is the best compared with the on-off and conventional fuzzy control methods: it makes the compartment temperature descend rapidly to the designed temperature with only small fluctuation.
X.C. Li; W.X. Zhu; G. Chen; D.S. Mei; J. Zhang; K.M. Chen
An artificial neural network (ANN)-based hybrid intelligent system for gear material selection is established by analyzing the individual advantages and weaknesses of expert systems (ES) and ANNs and their applications in material selection. The system mainly consists of two parts: an ES and an ANN. By being trained with many data samples, the back-propagation (BP) ANN acquires the knowledge of gear material selection and is able to make inferences from user input. The system thus realizes the complementarity of ANNs and ES. Using this system, engineers without material selection experience can conveniently deal with gear material selection.
One of the earliest dreams of the fledgling field of artificial intelligence (AI) was to build computer programs that could play games as well as or better than the best human players. Despite early optimism in the field, the challenge proved to be surprisingly difficult. However, the 1990s saw amazing progress. Computers are now better than humans in checkers, Othello and Scrabble; are at least as good as the best humans in backgammon and chess; and are rapidly improving at hex, go, poker, and shogi. This book documents the progress made in computers playing games and puzzles. The book is the
Motivation plays a key role in the learning process. This paper describes an experience in the context of undergraduate teaching of Artificial Intelligence at the Computer Science Department of the Faculty of Sciences in the University of Porto. A sophisticated competition framework, which involved Prolog programmed contenders and game servers, including an appealing GUI, was developed to motivate students on the deepening of the topics covered in class. We report on the impact that such a competitive setup caused on students' commitment, which surpassed our most optimistic expectations.
This paper expounds the principles and basic elements of an artificial intelligence system. Knowledge representation is developed according to the method chosen for processing. A thing or a phenomenon can be determined or established by several modules, depending on their state as well as the links and relations between them. The system creates a set of blocks (modules) whose concurrent operation is pre-established. The volume of knowledge can also be increased without increasing the number of blocks.
Chengxian, Cai; Wei, Wang
Based on a heart rate measurement method that uses time-lapse images of the human cheek, this paper proposes a novel measurement algorithm based on artificial intelligence. The algorithm, combined with fuzzy logic theory, identifies heart beat points by using a fuzzy membership function defined over each sampled point, and then calculates the heart rate by counting the heart beat points in a given time period. Experiments show that the algorithm is satisfactory in operability, accuracy and robustness, which gives it considerable practical value.
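The beat-counting step described above can be sketched minimally as follows. The triangular membership function, its breakpoints, and the threshold are illustrative assumptions, not the paper's actual parameters:

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def count_beats(samples, a=0.3, b=0.7, c=1.1, threshold=0.8):
    """Count rising edges where membership crosses the threshold (one edge = one beat point)."""
    beats = 0
    prev_above = False
    for x in samples:
        above = tri_membership(x, a, b, c) >= threshold
        if above and not prev_above:
            beats += 1
        prev_above = above
    return beats

def heart_rate_bpm(samples, duration_s):
    """Heart rate = beat points counted over the sampling window, scaled to beats per minute."""
    return count_beats(samples) * 60.0 / duration_s
```

For example, a signal with three clear peaks over a three-second window would yield 60 bpm under these assumed parameters.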
Demasie, M. P.; Muratore, J. F.
The authors discuss the introduction of advanced information systems technologies such as artificial intelligence, expert systems, and advanced human-computer interfaces directly into Space Shuttle software engineering. The reconfiguration automation project (RAP) was initiated to coordinate this move towards 1990s software technology. The idea behind RAP is to automate several phases of the flight software testing procedure and to introduce AI and ES into space shuttle flight software testing. In the first phase of RAP, conventional tools to automate regression testing have already been developed or acquired. There are currently three tools in use.
Cabral, Denise C.; Barros, Marcio P.; Lapa, Celso M.F.; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil)]. E-mail: firstname.lastname@example.org; email@example.com
This work presents a preliminary study of the viability and adequacy of a new methodology for defining one of the main properties of ion exchange resins used for isotopic separation. Basically, the main problem is the definition of the pellicle diameter in the case of pellicular ion exchange resins, in order to achieve the best performance in the shortest time. To this end, a methodology was developed based on two classic techniques of Artificial Intelligence (AI). First, an artificial neural network (NN) was trained to map the existing relations between the nucleus radius and the resin's efficiency associated with the exchange time. Then, a genetic algorithm (GA) was developed to find the best pellicle dimension. Preliminary results seem to confirm the potential of the method, which can be used in any chemical process employing ion exchange resins. (author)
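The surrogate-plus-search scheme outlined above (a trained NN mapping radius to efficiency, then a GA searching over that map) can be sketched as follows. The quadratic `surrogate_efficiency` is a stand-in for the trained network, and its peak at 0.35 is an arbitrary illustrative value:

```python
import random

def surrogate_efficiency(r):
    # Stand-in for the trained neural network; hypothetical optimum at r = 0.35.
    return -(r - 0.35) ** 2

def ga_optimize(fitness, lo=0.0, hi=1.0, pop_size=30, generations=60, seed=1):
    """Simple real-valued GA: elitist selection, arithmetic crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            child = 0.5 * (p1 + p2)             # arithmetic crossover
            child += rng.gauss(0.0, 0.05)       # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return max(pop, key=fitness)
```

Because the fitter half of the population is kept unchanged each generation, the best radius found can only improve, and the search converges near the surrogate's optimum.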
This master's thesis deals with the application of artificial intelligence methods to financial markets, specifically the use of artificial neural networks to predict the value and determine the development trend of a selected investment instrument. The solution itself is implemented in the Matlab development environment.
Chan, Kit Yan; Dillon, Tharam S
Applying computational intelligence to product design is a fast-growing and promising research area in computer science and industrial engineering. However, there is currently a lack of books which discuss this research area. This book discusses a wide range of computational intelligence techniques for implementation in product design. It covers common issues in product design, from identification of customer requirements, determination of the importance of customer requirements, determination of optimal design attributes, relating design attributes to customer satisfaction, integration of marketing aspects into product design, and affective product design, to quality control of new products. Approaches for refinement of computational intelligence are discussed in order to address different issues in product design. Case studies of product design, in terms of the development of real-world new products, are included in order to illustrate the design procedures, as well as the effectiveness of the com...
Does the law reserve a place for non-human intelligences? The answer is not easy for a French jurist to give. From observations of animal intelligence and of artificial intelligence, some findings can be drawn. Legal recognition of these non-human intelligences is conceivable: elements drawn from scientific data and from social practices argue in this direction. Nevertheless, the forms and the ...
Bueno, Elaine Inacio, E-mail: firstname.lastname@example.org [Instituto Federal de Educacao, Ciencia e Tecnologia de Sao Paulo (IFSP), Sao Paulo, SP (Brazil); Pereira, Iraci Martinez, E-mail: email@example.com [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
Computational intelligence systems have been widely applied in monitoring and fault detection systems in several processes and in different kinds of applications. These systems use interdependent components ordered in modules, an arrangement typical of systems that must ensure early detection and diagnosis of faults. Monitoring and fault detection techniques can be divided into two categories: estimative and pattern recognition methods. Estimative methods use a mathematical model that describes the process behavior; pattern recognition methods use a database to describe the process. In this work, an operator support system using computational intelligence techniques was developed. This system shows the information obtained by different CI techniques in order to help operators make decisions in real time and guide them in fault diagnosis before the normal alarm limits are reached. (author)
Bhaskar, M; Panigrahi, Bijaya; Das, Swagatam
The book is a collection of high-quality peer-reviewed research papers presented at the First International Conference on Artificial Intelligence and Evolutionary Computations in Engineering Systems (ICAIECES-2015), held at Velammal Engineering College (VEC), Chennai, India during 22-23 April 2015. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. Researchers from academia and industry presented their original work and exchanged ideas, information, techniques and applications in the fields of Communication, Computing and Power Technologies.
Abbasi, Maryam; El Hanandeh, Ali
Municipal solid waste (MSW) management is a major concern to local governments to protect human health, the environment and to preserve natural resources. The design and operation of an effective MSW management system requires accurate estimation of future waste generation quantities. The main objective of this study was to develop a model for accurate forecasting of MSW generation that helps waste related organizations to better design and operate effective MSW management systems. Four intelligent system algorithms including support vector machine (SVM), adaptive neuro-fuzzy inference system (ANFIS), artificial neural network (ANN) and k-nearest neighbours (kNN) were tested for their ability to predict monthly waste generation in the Logan City Council region in Queensland, Australia. Results showed artificial intelligence models have good prediction performance and could be successfully applied to establish municipal solid waste forecasting models. Using machine learning algorithms can reliably predict monthly MSW generation by training with waste generation time series. In addition, results suggest that the ANFIS system produced the most accurate forecasts of the peaks while kNN was successful in predicting the monthly averages of waste quantities. Based on the results, the total annual MSW generated in Logan City will reach 9.4×10⁷ kg by 2020 while the peak monthly waste will reach 9.37×10⁶ kg.
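The kNN forecasting idea used in the study can be sketched minimally: represent the recent past as a lag vector, find the k most similar historical windows, and average their successors. The lag length and Euclidean distance here are illustrative choices, not necessarily those of the study:

```python
def knn_forecast(series, k=3, lag=3):
    """Predict the next value of a time series as the mean successor
    of the k historical lag-vectors closest to the most recent one."""
    target = series[-lag:]
    candidates = []
    for i in range(len(series) - lag):
        window = series[i:i + lag]
        dist = sum((a - b) ** 2 for a, b in zip(window, target)) ** 0.5
        candidates.append((dist, series[i + lag]))  # (distance, value that followed)
    candidates.sort(key=lambda t: t[0])
    nearest = candidates[:k]
    return sum(v for _, v in nearest) / len(nearest)
```

On a strongly periodic series the single nearest window already recovers the next value exactly; real waste-generation data would need more neighbours and longer lags.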
With the introduction of power systems deregulation, many classical power transmission and distribution optimization tools became inadequate. Optimal Power Flow and Unit Commitment are common computer programs used in the regulated power industry. This work addresses Optimal Power Flow and Unit Commitment in the new deregulated environment. Optimal Power Flow is a high-dimensional, non-linear, and non-convex optimization problem; as such, even now, forty years after its introduction, it remains a research topic without a widely accepted solution able to encompass all areas of interest. Unit Commitment is a high-dimensional combinatorial problem which should ideally include the Optimal Power Flow in its solution. The dimensionality of a typical Unit Commitment problem is so great that even enumerating all the combinations would take too much time for any practical purpose. This dissertation attacks the Optimal Power Flow problem using non-traditional tools from the Artificial Intelligence arena. Artificial Intelligence optimization methods are based on stochastic principles, and stochastic optimization methods often succeed where all classical approaches fail. We use Genetic Programming optimization for both Optimal Power Flow and Unit Commitment. Long processing times are also addressed through supervised machine learning.
The purpose of this study is to evaluate ARTIMAT, an artificial intelligence-based distance education system prepared to improve students' mathematical problem solving skills, in terms of conceptual proficiency and ease of use, drawing on the opinions of teachers and students. The implementation was performed with 4 teachers and 59 tenth-grade students in an Anatolian High School in Trabzon. Many institutions and organizations around the world take distance education seriously alongside traditional education, and its use in teaching problem solving skills is a natural extension into this different dimension of education. In studies on mathematics teaching in Turkey and abroad, problem solving skills are generally reported not to be at the desired level and are often said to be difficult to teach. For this reason, the students' difficulties in problem solving were first evaluated, and the system was prepared using artificial intelligence algorithms according to the results obtained. From the evaluation of the findings obtained from the application, it was concluded that the system is responsive to the needs of the students and successful in general, but that conceptual changes should be made so that students adapt to the system quickly.
In this paper, swarm intelligence-based PID controller tuning is proposed for a nonlinear ball and hoop system. Particle swarm optimization (PSO), artificial bee colony (ABC) and bacterial foraging optimization (BFO) are examples of the swarm intelligence techniques considered for PID controller tuning. These algorithms are also tested on a perturbed ball and hoop model. An integral square error (ISE)-based performance index is used to find the best possible values of the controller parameters. Matlab software is used for designing the ball and hoop model. It is found that these swarm intelligence techniques offer easy implementation and shorter settling and rise times compared to conventional methods.
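A minimal sketch of swarm-based PID tuning under an ISE index follows, assuming a simple first-order discrete plant as a stand-in for the ball and hoop model. The plant, the PSO coefficients, and the gain search range are illustrative choices, not the paper's:

```python
import random

def ise(gains, steps=100):
    """Integral squared error of a discrete PID on an assumed first-order plant."""
    kp, ki, kd = gains
    y, integ, prev_e = 0.0, 0.0, 0.0
    cost = 0.0
    for _ in range(steps):
        e = 1.0 - y                          # unit step reference
        integ += e
        u = kp * e + ki * integ + kd * (e - prev_e)
        prev_e = e
        y = 0.9 * y + 0.1 * u                # plant: y[n+1] = 0.9 y[n] + 0.1 u[n]
        cost += e * e
    return cost

def pso(cost, dim=3, n=20, iters=60, seed=2):
    """Standard PSO: inertia plus cognitive and social pulls toward personal/global bests."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    g = pbest[pcost.index(min(pcost))][:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < cost(g):
                    g = pos[i][:]
    return g
```

Running `pso(ise)` returns a gain triple whose ISE is well below that of a naive proportional-only choice; the same loop would wrap any simulated plant model.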
Wallace, Scott A.; McCartney, Robert; Russell, Ingrid
Project MLeXAI [Machine Learning eXperiences in Artificial Intelligence (AI)] seeks to build a set of reusable course curricula and hands-on laboratory projects for the artificial intelligence classroom. In this article, we describe two game-based projects from the second phase of project MLeXAI: Robot Defense--a simple real-time strategy game…
Atkinson, Robert D.
Given the promise that artificial intelligence (AI) holds for economic growth and societal advancement, it is critical that policymakers not only avoid retarding the progress of AI innovation, but also actively support its further development and use. This report provides a primer on artificial intelligence and debunks five prevailing myths that,…
Artificial intelligence technology can nowadays be realized in a variety of forms, such as chatbots, and with a variety of methods, one of them being the Artificial Intelligence Markup Language (AIML). AIML uses template matching, comparing input against specific patterns in the database. The AIML template design process begins with determining the necessary information, which is then formed into questions, and these questions are adapted to AIML patterns. The results of the study show that a question-answering system in the form of a chatbot using Artificial Intelligence Markup Language is able to communicate and deliver information. Keywords: Artificial Intelligence, Template Matching, Artificial Intelligence Markup Language, AIML
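The template-matching idea behind AIML can be sketched without the XML layer: match an input sentence against stored patterns, where `*` absorbs one or more words, and return the first matching template. The categories below are invented examples, not from the study:

```python
def match(pattern, sentence):
    """AIML-style pattern match over word tokens; '*' absorbs one or more words."""
    p, s = pattern.upper().split(), sentence.upper().split()
    def rec(i, j):
        if i == len(p):
            return j == len(s)
        if p[i] == "*":
            # Let the wildcard consume 1..remaining words, then continue matching.
            return any(rec(i + 1, k) for k in range(j + 1, len(s) + 1))
        return j < len(s) and p[i] == s[j] and rec(i + 1, j + 1)
    return rec(0, 0)

def respond(sentence, categories):
    """Return the template of the first category whose pattern matches the input."""
    for pattern, template in categories:
        if match(pattern, sentence):
            return template
    return "I do not understand."
```

A full AIML engine adds pattern priority rules, `<srai>` recursion and captured wildcard values, but the core lookup is this comparison of token patterns against the input.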
Frank van der Velde
Full Text Available The collaboration between artificial intelligence and neuroscience can produce an understanding of the mechanisms in the brain that generate human cognition. This article reviews multidisciplinary research lines that could achieve this understanding. Artificial intelligence has an important role to play in research, because artificial intelligence focuses on the mechanisms that generate intelligence and cognition. Artificial intelligence can also benefit from studying the neural mechanisms of cognition, because this research can reveal important information about the nature of intelligence and cognition itself. I will illustrate this aspect by discussing the grounded nature of human cognition. Human cognition is perhaps unique because it combines grounded representations with computational productivity. I will illustrate that this combination requires specific neural architectures. Investigating and simulating these architectures can reveal how they are instantiated in the brain. The way these architectures implement cognitive processes could also provide answers to fundamental problems facing the study of cognition.
knowledge and meta-reasoning. In Proceedings of EPIA-85 ("Encontro Português de Inteligência Artificial"), pages 138-154, Oporto, Portugal, 1985. The Northeast Artificial Intelligence Consortium (NAIC) was created by the Air Force Systems Command, Rome Air Development Center, and
We develop a novel Artificial Intelligence paradigm to generate autonomously artificial agents as mathematical models of behaviour. Agent/environment inputs are mapped to agent outputs via equation trees which are evolved in a manner similar to Symbolic Regression in Genetic Programming. Equations are comprised of only the four basic mathematical operators, addition, subtraction, multiplication and division, as well as input and output variables and constants. From these operations, equations can be constructed that approximate any analytic function. These Evolvable Mathematical Models (EMMs) are tested and compared to their Artificial Neural Network (ANN) counterparts on two benchmarking tasks: the double-pole balancing without velocity information benchmark and the challenging discrete Double-T Maze experiments with homing. The results from these experiments show that EMMs are capable of solving tasks typically solved by ANNs, and that they have the ability to produce agents that demonstrate learning behaviours. To further explore the capabilities of EMMs, as well as to investigate the evolutionary origins of communication, we develop NoiseWorld, an Artificial Life simulation in which interagent communication emerges and evolves from initially noncommunicating EMM-based agents. Agents develop the capability to transmit their x and y position information over a one-dimensional channel via a complex, dialogue-based communication scheme. These evolved communication schemes are analyzed and their evolutionary trajectories examined, yielding significant insight into the emergence and subsequent evolution of cooperative communication. Evolved agents from NoiseWorld are successfully transferred onto physical robots, demonstrating the transferability of EMM-based AIs from simulation into physical reality.
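The equation-tree representation described above can be sketched as nested tuples over the four basic operators, with strings as input variables and numbers as constants. The protected-division convention used here is a common GP choice and an assumption, not necessarily the authors' exact rule:

```python
def eval_tree(node, env):
    """Evaluate an equation tree built from (+, -, *, /) operator nodes,
    variable names (looked up in env) and numeric constants."""
    if isinstance(node, (int, float)):
        return float(node)
    if isinstance(node, str):
        return env[node]
    op, left, right = node
    a, b = eval_tree(left, env), eval_tree(right, env)
    if op == "+":
        return a + b
    if op == "-":
        return a - b
    if op == "*":
        return a * b
    return a / b if b != 0 else 1.0   # protected division, a common GP convention
```

For instance, the tree `("+", ("*", "x", 2), ("-", "y", 1))` evaluated with `{"x": 3, "y": 5}` gives 10.0; evolution then mutates and recombines such trees against a fitness measure.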
The enduring progression of artificial intelligence and cybernetics offers an ever-closer possibility of rational and sentient robots. The ethics and morals deriving from this technological prospect have been considered in the philosophy of artificial intelligence, the design of automatons within roboethics, and the contemplation of machine ethics through the concept of artificial moral agents. Across these categories, the robotics laws first proposed by Isaac Asimov in the twentieth century remain well recognised and esteemed due to their specification of preventing human harm, stipulating obedience to humans and incorporating robotic self-protection. However, the overwhelming predominance of study in this field has focused on human-robot interactions, without fully considering the ethical inevitability of future artificial intelligences communicating together, and has not addressed the moral nature of robot-robot interactions. A new robotic law is proposed, termed AIonAI, or artificial intelligence-on-artificial intelligence. This law tackles the overlooked area where future artificial intelligences will likely interact amongst themselves, potentially leading to exploitation; such intelligences would benefit from adopting a universal law of rights recognising the inherent dignity and inalienable rights of artificial intelligences. Such a consideration can help prevent the exploitation and abuse of rational and sentient beings, and would also, importantly, reflect on our moral code of ethics and the humanity of our civilisation.
Byrd, Katherine A.; Smith, Bart; Allen, Doug; Morris, Norman; Bjork, Charles A., Jr.; Deal-Giblin, Kim; Rushing, John A.
Intelligent processing techniques which can effectively combine sensor data from disparate sensors, by selecting and using only the most beneficial individual sensor data, are a critical element of exoatmospheric interceptor systems. A major goal of these algorithms is to provide robust discrimination against stressing threats in poor a priori conditions, and to incorporate adaptive approaches in off-nominal conditions. This paper summarizes the intelligent processing algorithms being developed, implemented and tested to intelligently fuse data from passive infrared and active LADAR sensors at the measurement, feature and decision level. These intelligent algorithms employ dynamic selection of individual sensor features and the weighting of multiple classifier decisions to optimize performance in good a priori conditions and robustness in poor a priori conditions. Features can be dynamically selected based on an estimate of the feature confidence, which is determined from feature quality and from weighting terms derived from the quality of sensor data and expected phenomenology. Multiple classifiers are employed which use both fuzzy logic and knowledge-based approaches to fuse the sensor data and to provide a target lethality estimate. Target designation decisions can be made by fusing weighted individual classifier decisions whose output contains an estimate of the confidence of the data and the discrimination decisions. The confidence in the data and decisions can be used in real time to dynamically select different sensor feature data or to request additional sensor data on specific objects that have not been confidently identified as lethal or non-lethal. The algorithms are implemented in C within a graphical user interface framework. Dynamic memory allocation and sequential implementation of the feature algorithms are employed. The baseline set of fused sensor discrimination algorithms with intelligent processing is described in this paper. Example results
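The weighted decision-level fusion described above can be sketched at its simplest: sum the per-label confidences reported by the individual classifiers and normalize. The labels and confidence values below are illustrative, not taken from the paper:

```python
def fuse_decisions(decisions):
    """Fuse classifier outputs at the decision level.
    Each item is (label, confidence in [0, 1]); the fused label is the one
    with the largest summed confidence, reported with a normalized score."""
    scores = {}
    for label, conf in decisions:
        scores[label] = scores.get(label, 0.0) + conf
    label = max(scores, key=scores.get)
    return label, scores[label] / sum(scores.values())
```

A low fused score can then trigger the behaviour the paper describes, such as requesting additional sensor data on objects not yet confidently classified.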
Krisler, Brian; Thome, Michael
In this paper, we will present a look at the current state of the art in human-computer interface technologies, including intelligent interactive agents, natural speech interaction and gestural based interfaces. We describe our use of these technologies to implement a cost effective, immersive experience on a public region in Second Life. We provision our Artificial Agents as a German Shepherd Dog avatar with an external rules engine controlling the behavior and movement. To interact with the avatar, we implemented a natural language and gesture system allowing the human avatars to use speech and physical gestures rather than interacting via a keyboard and mouse. The result is a system that allows multiple humans to interact naturally with AI avatars by playing games such as fetch with a flying disk and even practicing obedience exercises using voice and gesture, a natural seeming day in the park.
Silvestri, Marcello; González, Sara
The special session Decision Economics (DECON) 2016 is a scientific forum for sharing ideas, projects, research results, models and experiences associated with the complexity of behavioural decision processes, aiming to explain socio-economic phenomena. DECON 2016 was held at the University of Seville, Spain, as part of the 13th International Conference on Distributed Computing and Artificial Intelligence (DCAI) 2016. In the tradition of Herbert A. Simon's interdisciplinary legacy, this book dedicates itself to the interdisciplinary study of decision-making, in recognition that relevant decision-making takes place in a range of critical subject areas and research fields, including economics, finance, information systems, small and international business, management, operations, and production. Decision-making issues are of crucial importance in economics. Not surprisingly, the study of decision-making has received a growing empirical research effort in the applied economic literature over the last ...
Chen, Zhang; Wu, Yangyang; Li, Li; Sun, Lijun
The deterministic bridge deterioration model updating problem is well established in bridge management, but the traditional methods and approaches to this problem require manual intervention. In this paper, an artificial-intelligence-based approach is presented to self-update the parameters of the bridge deterioration model. When new information and data are collected, a posterior distribution is constructed, according to Bayes' theorem, to describe the integrated result of the historical information and the newly gained information, and this distribution is used to update the model parameters. The AI-based approach is applied to updating the parameters of a bridge deterioration model using data collected from bridges in 12 districts of Shanghai from 2004 to 2013, and the results show that it is an accurate, effective, and satisfactory way to handle parameter updating without manual intervention.
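The Bayesian updating step can be illustrated with the simplest conjugate case: a Normal prior on a deterioration-rate parameter combined with a Normal observation from newly collected inspection data. The paper's actual likelihood and parameterization may differ; this is only the mechanics of posterior = prior × new evidence:

```python
def update_normal(prior_mean, prior_var, obs_mean, obs_var):
    """Conjugate Normal-Normal update: the posterior precision is the sum of
    the prior and observation precisions, and the posterior mean is their
    precision-weighted average."""
    precision = 1.0 / prior_var + 1.0 / obs_var
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var + obs_mean / obs_var)
    return post_mean, post_var
```

Each new inspection cycle feeds the previous posterior back in as the prior, so the parameter estimate tightens automatically as data accumulate, with no manual intervention.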
Haen, Christophe; Bonaccorsi, E; Neufeld, N
The LHCb online system relies on a large and heterogeneous IT infrastructure made of thousands of servers on which many different applications run. These servers perform a great variety of tasks: critical ones such as data taking and secondary ones like web serving. Administering such a system and making sure it works properly represents a very important workload for the small expert-operator team. Research on automating (some) system administration tasks dates back to 2001, when IBM defined the so-called self-* objectives meant to lead to autonomic computing. In this context, we present a framework that uses artificial intelligence and machine learning to monitor and diagnose, at a low level and in a non-intrusive way, Linux-based systems and their interaction with software. Moreover, the multi-agent approach we use, coupled with an object-oriented architecture, should considerably increase our learning speed and highlight relations between problems.
Power quality is an important measure of the performance of an electrical power system. This paper discusses the topology, control strategies using artificial intelligence (AI)-based controllers, and the performance of a unified power quality conditioner (UPQC) for power quality improvement. The UPQC integrates shunt and series compensation to limit harmonic contamination to within 5 %, the limit imposed by the IEEE-519 standard. The novelty of this paper lies in the application of neural network control (NNC) algorithms, such as model reference control (MRC) and nonlinear autoregressive moving average (NARMA-L2) control, to generate switching signals for the series compensator of the UPQC system. The entire system has been modeled using the MATLAB 7.0 toolbox. Simulation results demonstrate the applicability of the MRC and NARMA-L2 controllers for the control of the UPQC.
Aerts, Diederik; Sozzo, Sandro
The mathematical formalism of quantum mechanics has been successfully employed in the last years to model situations in which the use of classical structures gives rise to problematical situations, and where typically quantum effects, such as 'contextuality' and 'entanglement', have been recognized. This 'Quantum Interaction Approach' is briefly reviewed in this paper focusing, in particular, on the quantum models that have been elaborated to describe how concepts combine in cognitive science, and on the ensuing identification of a quantum structure in human thought. We point out that these results provide interesting insights toward the development of a unified theory for meaning and knowledge formalization and representation. Then, we analyze the technological aspects and implications of our approach, and a particular attention is devoted to the connections with symbolic artificial intelligence, quantum computation and robotics.
Quiza, Ramón; Davim, J Paulo
Artificial intelligence (AI) techniques and the finite element method (FEM) are both powerful computing tools, which are extensively used for modeling and optimizing manufacturing processes. The combination of these tools has resulted in a new flexible and robust approach as several recent studies have shown. This book aims to review the work already done in this field as well as to expose the new possibilities and foreseen trends. The book is expected to be useful for postgraduate students and researchers, working in the area of modeling and optimization of manufacturing processes.
Straub, Jeremy; Huber, Justin
The validation of safety-critical applications, such as autonomous UAV operations in an environment that may include human actors, is an ill-posed problem. To build confidence in the autonomous control technology, numerous scenarios must be considered. This paper expands upon previous work on autonomous testing of robotic control algorithms in a two-dimensional plane to evaluate the suitability of similar techniques for validating artificial intelligence control in three dimensions, where a minimum level of airspeed must be maintained. The results of human-conducted testing are compared to this automated testing in terms of error detection, speed and testing cost.
Mahmud Arif Pavel
Artificial intelligence technology has developed significantly in the past decades. Although many computational programs are able to approximate many cognitive abilities of Homo sapiens, the intelligence and sapience levels of these programs are not even close to those of Homo sapiens. Rather than developing a computational system with the attribute of intelligence or sapience, I propose to develop a system capable of performing functions that could be deemed intelligent or sapient by Homo sapiens or others. I advocate converting current computational systems into educable systems that have built-in capabilities to learn and be taught with a universal programming language. The idea is that this attempt would help to attain, by artificial means, computational actions that could be viewed as similar to human intelligent and sapient acts. Although this paper is seemingly speculative, some feasible elements are proposed to advance the field of Artificial Intelligence.
Lugo-Reyes, Saúl Oswaldo; Maldonado-Colín, Guadalupe; Murata, Chiharu
Medicine is one of the fields of knowledge that would most benefit from closer interaction with computer science and mathematics, by optimizing complex, imperfect processes such as differential diagnosis. This is the domain of machine learning, a branch of artificial intelligence that builds and studies systems capable of learning from a set of training data in order to optimize classification and prediction. In Mexico over the last few years, progress has been made on the implementation of electronic clinical records, so the National Institutes of Health have already accumulated a wealth of stored data. For those data to become knowledge, they need to be processed and analyzed through complex statistical methods, as is already being done in other countries, employing case-based reasoning, artificial neural networks, Bayesian classifiers, multivariate logistic regression, or support vector machines, among other methodologies, to assist the clinical diagnosis of acute appendicitis, breast cancer, and chronic liver disease, among a wide array of maladies. In this review we sift through the concepts, antecedents, current examples, and methodologies of machine learning-assisted clinical diagnosis.
Patricia Ferreira Ponciano Ferraz
Full Text Available The objective of this work was to develop, validate, and compare 190 artificial intelligence-based models for predicting the body mass of chicks from 2 to 21 days of age subjected to different durations and intensities of thermal challenge. The experiment was conducted inside four climate-controlled wind tunnels using 210 chicks. A database containing 840 datasets from 2- to 21-day-old chicks - with the variables dry-bulb air temperature, duration of thermal stress (days), chick age (days), and daily body mass of chicks - was used for training, validation, and testing of models based on artificial neural networks (ANNs) and neuro-fuzzy networks (NFNs). The ANNs were the most accurate in predicting the body mass of chicks from 2 to 21 days of age from the input variables, showing an R² of 0.9993 and a standard error of 4.62 g. The ANNs enable the simulation of different scenarios, which can assist managerial decision-making, and they can be embedded in heating control systems.
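The ANN described above maps (air temperature, stress duration, chick age) to body mass. A minimal sketch of that idea, using a hand-rolled one-hidden-layer network trained by backpropagation on synthetic data (all values, the response formula, and the architecture are illustrative assumptions, not the paper's dataset or models):

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for the chick dataset: inputs are normalized
# (air temperature, stress duration, chick age); target is scaled body mass.
data = []
for age in range(2, 22):
    for temp, dur in [(27, 1), (33, 2), (36, 4)]:
        x = (temp / 40.0, dur / 5.0, age / 21.0)
        mass = 10.0 + 9.0 * age - 0.6 * temp * dur / 10.0  # invented response
        data.append((x, mass / 200.0))

H = 4  # hidden neurons
w1 = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(w1[j][i] * x[i] for i in range(3)) + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

before = mse()
lr = 0.05
for epoch in range(300):
    for x, t in data:
        h, y = forward(x)
        d = 2.0 * (y - t)                       # d(loss)/dy for squared error
        for j in range(H):
            g = d * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * d * h[j]
            for i in range(3):
                w1[j][i] -= lr * g * x[i]
            b1[j] -= lr * g
        b2 -= lr * d
after = mse()
print(before, after)
```

The training loss should drop substantially, illustrating how such a network can be fitted to (temperature, duration, age) records before being embedded in a control system.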
Gholami, Raoof; Rasouli, Vamegh; Alimoradi, Andisheh
Rock mass classification systems such as rock mass rating (RMR) are very reliable means to provide information about the quality of rocks surrounding a structure as well as to propose suitable support systems for unstable regions. Many correlations have been proposed to relate measured quantities such as wave velocity to rock mass classification systems to limit the associated time and cost of conducting the sampling and mechanical tests conventionally used to calculate RMR values. However, these empirical correlations have been found to be unreliable, as they usually overestimate or underestimate the RMR value. The aim of this paper is to compare the results of RMR classification obtained from the use of empirical correlations versus machine-learning methodologies based on artificial intelligence algorithms. The proposed methods were verified based on two case studies located in northern Iran. Relevance vector regression (RVR) and support vector regression (SVR), as two robust machine-learning methodologies, were used to predict the RMR for tunnel host rocks. RMR values already obtained by sampling and site investigation at one tunnel were taken into account as the output of the artificial networks during training and testing phases. The results reveal that use of empirical correlations overestimates the predicted RMR values. RVR and SVR, however, showed more reliable results, and are therefore suggested for use in RMR classification for design purposes of rock structures.
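The comparison above contrasts fixed empirical correlations with data-driven models fitted to site measurements. A minimal sketch of that point, using ordinary least squares as a simple stand-in for RVR/SVR, with a made-up correlation and hypothetical calibration pairs (not the paper's data):

```python
# Hypothetical (P-wave velocity km/s, measured RMR) calibration pairs.
samples = [(2.1, 38), (2.8, 47), (3.4, 55), (4.0, 61), (4.6, 70), (5.1, 74)]

def empirical_rmr(vp):
    # An invented empirical correlation of the kind the paper critiques.
    return 18.0 * vp + 5.0

# Ordinary least squares fit RMR = a*vp + b (closed form, one feature).
n = len(samples)
sx = sum(v for v, _ in samples); sy = sum(r for _, r in samples)
sxx = sum(v * v for v, _ in samples); sxy = sum(v * r for v, r in samples)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def sse(pred):
    return sum((pred(v) - r) ** 2 for v, r in samples)

print(sse(empirical_rmr), sse(lambda v: a * v + b))
```

On this toy data the fixed correlation systematically overestimates RMR, while the fitted model tracks the measurements, mirroring the paper's conclusion in miniature.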
Ernane José Xavier Costa
Full Text Available Biological systems are surprisingly flexible in processing information from the real world. Some biological organisms have a central processing unit known as the brain. The human brain, consisting of about 10^11 neurons, performs intelligent information processing based on both exact and commonsense reasoning. Artificial intelligence (AI) has been trying to implement biological intelligence in computers in various ways, but is still far from the real thing. Nevertheless, approaches such as symbolic AI, artificial neural networks, and fuzzy systems have been partially successful in implementing heuristics from biological intelligence. Many recent applications of these approaches show increased interest in animal science research. The main goal of this article is to explain the principles of the heuristic problem-solving approach and to demonstrate how they can be applied to building knowledge-based systems for animal science problem solving.
Nunes, Matheus Henrique; Görgens, Eric Bastos
Tree stem form in native tropical forests is very irregular, posing a challenge to establishing taper equations that can accurately predict the diameter at any height along the stem and subsequently merchantable volume. Artificial intelligence approaches can be useful techniques in minimizing estimation errors within complex variations of vegetation. We evaluated the performance of Random Forest® regression tree and Artificial Neural Network procedures in modelling stem taper. Diameters and volume outside bark were compared to a traditional taper-based equation across a tropical Brazilian savanna, a seasonal semi-deciduous forest and a rainforest. Neural network models were found to be more accurate than the traditional taper equation. Random forest showed trends in the residuals from the diameter prediction and provided the least precise and accurate estimations for all forest types. This study provides insights into the superiority of a neural network, which provided advantages regarding the handling of local effects.
Ekonomou, L. [A.S.PE.T.E. - School of Pedagogical and Technological Education, Department of Electrical Engineering Educators, N. Heraklion, 141 21 Athens (Greece)
The paper presents an alternative approach to the study of high voltage transmission lines based on artificial intelligence, more specifically artificial neural networks (ANNs). In contrast to existing conventional analytical techniques and simulations, which use empirical and/or approximating equations in their calculations, this approach is based only on actual field data and actual measurements. The proposed approach is applied to high voltage transmission lines to calculate lightning outages, to grounding systems to assess grounding resistance, and to polluted insulators of high voltage transmission lines to estimate the critical flashover voltage. The obtained results are very close to the actual ones for all three case studies, which clearly implies that the ANN approach works well and has acceptable accuracy, constituting an additional tool for electrical engineers. (author)
Cipresso, Pietro; Riva, Giuseppe
There is a long-standing tradition in artificial intelligence of building robots endowed with human peculiarities - cognitive and emotional, not only in shape. Today artificial intelligence is more oriented toward several forms of collective intelligence, also building robot simulators (hardware or software) to better understand collective behaviors in human beings and society as a whole. Modeling has also been crucial in the social sciences for understanding how complex systems can arise from simple rules. However, while engineers' simulations can be performed in the physical world using robots, for social scientists this is impossible. For decades, researchers have tried to improve simulations by endowing artificial agents with simple and complex rules that emulate human behavior, also by using artificial intelligence (AI). The big challenge now is to include human beings and their real intelligence within artificial societies. We present a hybrid (human-artificial) platform where experiments can be performed in simulated artificial worlds in the following manner: 1) agents' behaviors are regulated by the behaviors shown in virtual reality by real human beings exposed to the specific situations to be simulated, and 2) technology transfers these rules into the artificial world. This forms a closed loop of real behaviors inserted into artificial agents, which can be used to study real society.
Beginning with an analysis of the present state of network information retrieval techniques, this article points out the main problems therein. It also probes the prospects of applying artificial intelligence theory and techniques to network information retrieval, as well as the trend toward intelligent retrieval.
Rajeswari P.V N
Full Text Available There are few knowledge representation (KR) techniques available for efficiently representing knowledge. However, with increasing complexity, better methods are needed. Some researchers have come up with hybrid mechanisms that combine two or more methods. In an effort to construct an intelligent computer system, a primary consideration is to represent large amounts of knowledge in a way that allows effective use, organizing information efficiently so as to facilitate making the recommended inferences. Each combination has its merits and demerits, and a standardized method of KR is needed. In this paper, various hybrid schemes of KR are explored at length and details presented.
Yang, Y.; Chen, X.
Good-quality hydraulic fracture maps depend heavily on the best possible velocity structure. A Particle Swarm Optimization (PSO) inversion scheme, an artificial intelligence technique for velocity calibration and event location, can serve as a viable option able to produce high-quality data. Using perforation data to recalibrate the 1D isotropic velocity model derived from dipole sonic logs (or even without them), we are able to obtain the initial velocity model used for subsequent event location. Velocity parameters, as well as layer thicknesses, can be inverted through an iterative procedure. Performing inversion without integrating the available data is unlikely to produce reliable results, especially when there is only one perforation shot and a single poorly covered array along with a low signal-to-noise ratio. The inversion method was validated via simulations and compared to the Fast Simulated Annealing approach and the Conjugate Gradient method. Further velocity model refinement can be accomplished while computing event locations during the iterative procedure, minimizing the residuals from both sides. This artificial intelligence technique also shows promising application to the joint inversion of large-scale seismic data.
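A minimal sketch of a PSO inversion of this kind, recovering two layer velocities from synthetic vertical travel times (the two-layer model, bounds, and PSO constants are illustrative assumptions, not the paper's setup):

```python
import random
random.seed(1)

H1 = 1.0                     # assumed thickness of layer 1 (km)
V_TRUE = (2.0, 3.5)          # hidden "true" layer velocities (km/s)
DEPTHS = [0.4, 0.8, 1.5, 2.0]

def travel_time(z, v1, v2):
    # vertical one-way time to depth z through a two-layer model
    return z / v1 if z <= H1 else H1 / v1 + (z - H1) / v2

OBS = [travel_time(z, *V_TRUE) for z in DEPTHS]  # synthetic perforation picks

def misfit(p):
    return sum((travel_time(z, *p) - t) ** 2 for z, t in zip(DEPTHS, OBS))

LO, HI = 1.0, 5.0
n, iters, w, c1, c2 = 20, 80, 0.7, 1.5, 1.5
pos = [[random.uniform(LO, HI) for _ in range(2)] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]
pbest_f = [misfit(p) for p in pos]
g = min(range(n), key=lambda i: pbest_f[i])
gbest, gbest_f = pbest[g][:], pbest_f[g]

for _ in range(iters):
    for i in range(n):
        for d in range(2):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] = min(HI, max(LO, pos[i][d] + vel[i][d]))
        f = misfit(pos[i])
        if f < pbest_f[i]:
            pbest[i], pbest_f[i] = pos[i][:], f
            if f < gbest_f:
                gbest, gbest_f = pos[i][:], f

print(gbest, gbest_f)
```

The swarm should converge to near-zero misfit, recovering velocities close to the hidden values; the real workflow would add layer thicknesses and event coordinates to the search space.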
Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal
This study presents artificial intelligence (AI)-based modeling of total bed material load, improving on the accuracy of traditional models' predictions. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (northwestern Iran) were used for development and validation of the applied techniques. To assess the applied techniques against traditional models, stream power-based and shear stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. It was also revealed that the k-fold test is a practical but costly technique for completely scanning the applied data and avoiding over-fitting.
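The k-fold test mentioned above can be sketched as follows; the data and the least-squares "model" below are hypothetical stand-ins for the GEP/ANFIS models and the Qotur River records:

```python
import random
random.seed(0)

# Hypothetical (discharge, sediment load) pairs standing in for the river data.
data = [(q, 2.5 * q + random.uniform(-1, 1)) for q in range(1, 21)]

def k_fold(data, k):
    """Yield (train, test) splits so every sample is tested exactly once."""
    idx = list(range(len(data)))
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = [data[j] for j in folds[i]]
        train = [data[j] for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def fit(train):
    # trivial stand-in model: least-squares line y = a*x + b
    n = len(train)
    sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

errors = []
for train, test in k_fold(data, 5):
    a, b = fit(train)
    errors += [(a * x + b - y) ** 2 for x, y in test]
rmse = (sum(errors) / len(errors)) ** 0.5
print(rmse)
```

Because each sample appears in exactly one test fold, the RMSE is computed entirely on held-out data, which is what makes the k-fold test effective against over-fitting (and costly: the model is refitted k times).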
Mohamed A. Shahin
Geotechnical engineering deals with materials (e.g. soil and rock) that, by their very nature, exhibit varied and uncertain behavior due to the imprecise physical processes associated with their formation. Modeling the behavior of such materials in geotechnical engineering applications is complex and sometimes beyond the ability of most traditional forms of physically based engineering methods. Artificial intelligence (AI) is becoming more popular and is particularly amenable to modeling the complex behavior of most geotechnical engineering applications because it has demonstrated superior predictive ability compared with traditional methods. This paper provides a state-of-the-art review of selected AI techniques and their applications in pile foundations, and presents the salient features associated with the model development of these AI techniques. The paper also discusses the strengths and limitations of the selected AI techniques compared with other available modeling approaches.
Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta
With the increased trend toward automation in modern manufacturing industry, human intervention in routine, repetitive, and data-specific activities of manufacturing is greatly reduced. In this paper, an attempt has been made to reduce human intervention in the selection of optimal cutting tools and process parameters for metal cutting applications using artificial intelligence techniques. Generally, the selection of an appropriate cutting tool and parameters in metal cutting is carried out by an experienced technician or cutting tool expert based on his knowledge base or an extensive search of a huge cutting tool database. The proposed approach replaces the existing practice of physically searching for tools in data books and tool catalogues with an intelligent knowledge-based selection system. This system employs artificial intelligence techniques such as artificial neural networks, fuzzy logic, and genetic algorithms for decision making and optimization. This intelligence-based optimal tool selection strategy was developed and implemented using MathWorks MATLAB Version 7.11.0. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for the selection of appropriate cutting tools and the optimization of process parameters based on multi-objective criteria considering material removal rate, tool life, and tool cost.
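As a simplified stand-in for the GA/ANN-based optimization described above, the multi-objective trade-off between material removal rate, tool life, and tool cost can be sketched as a weighted-sum ranking over a hypothetical tool database (the tool names, values, and weights below are invented for illustration):

```python
# Hypothetical tool records: (name, material removal rate cm^3/min, tool life min, cost $)
tools = [
    ("carbide-A", 45.0, 30.0, 12.0),
    ("carbide-B", 60.0, 18.0, 15.0),
    ("ceramic-C", 80.0, 12.0, 25.0),
    ("hss-D",     20.0, 60.0,  6.0),
]

# Weights for (maximize MRR, maximize tool life, minimize cost).
W = (0.6, 0.25, 0.15)

def normalize(vals, invert=False):
    # min-max scale to [0, 1]; invert for criteria to be minimized
    lo, hi = min(vals), max(vals)
    return [(hi - v) / (hi - lo) if invert else (v - lo) / (hi - lo) for v in vals]

mrr  = normalize([t[1] for t in tools])
life = normalize([t[2] for t in tools])
cost = normalize([t[3] for t in tools], invert=True)

scores = {t[0]: W[0] * m + W[1] * l + W[2] * c
          for t, m, l, c in zip(tools, mrr, life, cost)}
best = max(scores, key=scores.get)
print(best, scores)
```

A genetic algorithm, as used in the paper, would search the same kind of objective over a much larger tool and parameter space instead of enumerating it.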
Scheidt, D. H.; Hibbitts, C. A.; Chen, M. H.; Paxton, L. J.; Bekker, D. L.
Implementing mature artificial intelligence would make it possible to significantly increase the science return of a mission, while potentially saving costs in mission and instrument operations and solving currently intractable problems.
Hancock, Thomas M., III
This paper describes a Modular Artificial Intelligence Inference Engine System (MAIS) support tool that would provide health and status monitoring, cognitive replanning, analysis and support of on-orbit Space Station, Spacelab experiments and systems.
Simulation is viewed within the model management paradigm. Major components of simulation systems as well as elements of model management are outlined. Possible synergies of simulation model management, software engineering, artificial intelligence, and general system theories are systematized. 21 references.
With the rapid development of computer technology, artificial intelligence is applied more and more widely. This paper analyzes specific applications of artificial intelligence from several aspects.
This book discusses in depth the concept of distributed artificial intelligence (DAI) and its application to cognitive communications. In this book, the authors present an overview of cognitive communications, encompassing both cognitive radio and cognitive networks, as well as other application areas such as cognitive acoustics. The book also explains the specific rationale for the integration of different forms of distributed artificial intelligence into cognitive communications, something often neglected in many technical contributions available today.
Gams, Matjaž; Horvat, Matej; Ožek, Matej; Luštrek, Mitja; Gradišek, Anton
We developed a new machine learning-based method to facilitate the manufacturing of pharmaceutical products, such as tablets, in accordance with the Process Analytical Technology (PAT) and Quality by Design (QbD) initiatives. Our approach combines data available from prior production runs with machine learning algorithms assisted by a human operator with expert knowledge of the production process. The process parameters encompass those relating to the attributes of the precursor raw materials and those relating to the manufacturing process itself. During manufacturing, our method allows the production operator to inspect the impacts of various process parameter settings within their proven acceptable range, with the purpose of choosing the most promising values in advance of the actual batch manufacture. The interaction between the human operator and the artificial intelligence system provides improved performance and quality. We successfully implemented the method on data provided by a pharmaceutical company for a particular product, a tablet, under development. We tested the accuracy of the method in comparison with other machine learning approaches. The method is especially suitable for analyzing manufacturing processes characterized by a limited amount of data.
Lefevre, M. J.; Fisse, G.; Martin, E.; de Boissezon, H.; Galaup, M.
Over the past four years, CNES has been engaged in a major programme focusing on the development of SPOT Operational Application Projects. With a total of sixty projects now complete, we can draw a number of meaningful conclusions and identify a number of objectives to be satisfied by advanced remote sensing methodology. One of the main conclusions points to the importance of human vision in studies on natural complex space imagery. This being so, visual recognition must be one of the main phases of the ``Pilot Project for the Application of Remote Sensing to Agricultural Statistics'': only human experts have the ability to make a meaningful analysis of Spot TM imagery. Non-expert operators will not be able to manage the subsequent rational production phase alone. The first part of this paper describes an approach to the formalization and modelling of expert know-how based on the use of artificial intelligence. The second part puts forward a cooperative operator/computer system based on a cognitive structure. Our proposal comprises 1) a specific knowledge base, 2) an ergonomic interface associated with functional software that is based on automatic image enhancement coupled with perception support functions.
Wali, W A; Hassan, K H; Cullen, J D; Al-Shamma'a, A I; Shaw, A; Wylie, S R, E-mail: firstname.lastname@example.org [Built Environment and Sustainable Technologies Institute (BEST), School of the Built Environment, Faculty of Technology and Environment, Liverpool John Moores University, Byrom Street, Liverpool L3 3AF (United Kingdom)
Biodiesel, an alternative diesel fuel made from a renewable source, is produced by the transesterification of vegetable oil or fat with methanol or ethanol. In order to control and monitor the progress of this chemical reaction with complex and highly nonlinear dynamics, the controller must be able to overcome the challenges due to the difficulty in obtaining a mathematical model, as there are many uncertain factors and disturbances during the actual operation of biodiesel reactors. Classical controllers show significant difficulties when trying to control the system automatically. In this paper we propose a comparison of artificial intelligent controllers, Fuzzy logic and Adaptive Neuro-Fuzzy Inference System(ANFIS) for real time control of a novel advanced biodiesel microwave reactor for biodiesel production from waste cooking oil. Fuzzy logic can incorporate expert human judgment to define the system variables and their relationships which cannot be defined by mathematical relationships. The Neuro-fuzzy system consists of components of a fuzzy system except that computations at each stage are performed by a layer of hidden neurons and the neural network's learning capability is provided to enhance the system knowledge. The controllers are used to automatically and continuously adjust the applied power supplied to the microwave reactor under different perturbations. A Labview based software tool will be presented that is used for measurement and control of the full system, with real time monitoring.
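A minimal Sugeno-style fuzzy rule sketch of the kind of controller discussed above, mapping a temperature error to a microwave power adjustment (the membership ranges, rule base, and output values are illustrative assumptions, not the paper's controller):

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_power_step(error):
    """error = setpoint_temp - measured_temp (degC); returns power change (W)."""
    mu = {
        "negative": tri(error, -20.0, -10.0, 0.0),
        "zero":     tri(error, -5.0, 0.0, 5.0),
        "positive": tri(error, 0.0, 10.0, 20.0),
    }
    # Sugeno-style singleton consequents: cut power / hold / add power.
    out = {"negative": -50.0, "zero": 0.0, "positive": 50.0}
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values())
    return num / den if den else 0.0

print(fuzzy_power_step(3.0), fuzzy_power_step(0.0), fuzzy_power_step(-3.0))
```

The rule base encodes the expert judgment ("if the reactor is too cold, add power") without requiring a mathematical plant model; an ANFIS would additionally tune the membership parameters from data via a neural learning step.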
Haen, C.; Barra, V.; Bonaccorsi, E.; Neufeld, N.
The LHCb online system relies on a large and heterogeneous IT infrastructure made of thousands of servers on which many different applications are running. They run a great variety of tasks: critical ones such as data taking, and secondary ones like web servers. Administering such a system and making sure it is working properly represents a very important workload for the small expert-operator team. Research has been performed on automating (some) system administration tasks, starting in 2001 when IBM defined the so-called "self-*" objectives intended to lead to "autonomic computing". In this context, we present a framework that uses artificial intelligence and machine learning to monitor and diagnose, at a low level and in a non-intrusive way, Linux-based systems and their interaction with software. Moreover, the multi-agent approach we use, coupled with an "object-oriented paradigm" architecture, should considerably increase our learning speed and highlight relations between problems.
In artificial intelligence, abstraction is commonly used to account for the use of various levels of details in a given representation language or the ability to change from one level to another while preserving useful properties. Abstraction has been mainly studied in problem solving, theorem proving, knowledge representation (in particular for spatial and temporal reasoning) and machine learning. In such contexts, abstraction is defined as a mapping between formalisms that reduces the computational complexity of the task at stake. By analysing the notion of abstraction from an information quantity point of view, we pinpoint the differences and the complementary role of reformulation and abstraction in any representation change. We contribute to extending the existing semantic theories of abstraction to be grounded on perception, where the notion of information quantity is easier to characterize formally. In the author's view, abstraction is best represented using abstraction operators, as they provide semantics for classifying different abstractions and support the automation of representation changes. The usefulness of a grounded theory of abstraction in the cartography domain is illustrated. Finally, the importance of explicitly representing abstraction for designing more autonomous and adaptive systems is discussed.
Castellano, Gloria; Lara, Ana; Torrens, Francisco
A set of 66 stilbenoid compounds is classified into a system of periodic properties using a procedure based on artificial intelligence (information entropy theory). Eight characteristics in hierarchical order are used to classify the stilbenoids structurally. The first five features mark the group, or column, while the last three indicate the row, or period, in the table of periodic classification. Stilbenoids in the same group are expected to present similar properties; compounds also in the same period will show maximum resemblance. In this report, the stilbenoids in the table are related to experimental data on bioactivity and antioxidant properties available in the technical literature. Notably, stilbenoids with glycosyl groups esterified with benzoic acid derivatives, in group g11000 at the extreme right of the periodic table, show the greatest antioxidant activity, as confirmed by experiments in the bibliography. Moreover, the second group from the right (g10111) contains E-piceatannol, whose antioxidant activity is recognized in the literature. The experiments confirm our results of the periodic classification.
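The hierarchical classification described above - the first five binary features selecting the group/column, the last three the period/row - can be sketched as simple key grouping. The feature vectors and the "p" period labels below are hypothetical; only the g10111 group label appears in the text:

```python
# Hypothetical stilbenoid feature vectors: 8 hierarchical binary characteristics.
compounds = {
    "resveratrol":   (1, 0, 1, 1, 1, 0, 0, 1),
    "piceatannol":   (1, 0, 1, 1, 1, 0, 1, 0),
    "pterostilbene": (1, 1, 0, 0, 0, 0, 0, 1),
    "astringin":     (1, 0, 1, 1, 1, 0, 1, 0),
}

def cell(features):
    group = "g" + "".join(map(str, features[:5]))   # first 5 bits -> column
    period = "p" + "".join(map(str, features[5:]))  # last 3 bits -> row
    return group, period

# Group compounds into table cells keyed by (group, period).
table = {}
for name, feats in compounds.items():
    table.setdefault(cell(feats), []).append(name)

for (group, period), members in sorted(table.items()):
    print(group, period, members)
```

Compounds sharing a group label are the ones predicted to have similar properties; sharing the full (group, period) key marks maximum resemblance.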
The potential for artificial intelligences and robotics in achieving the capacity of consciousness, sentience and rationality offers the prospect that these agents have minds. If so, then there may be a potential for these minds to become dysfunctional, or for artificial intelligences and robots to suffer from mental illness. The existence of artificially intelligent psychopathology can be interpreted through the philosophical perspectives of mental illness. This offers new insights into what it means to have either robot or human mental disorders, but may also offer a platform on which to examine the mechanisms of biological or artificially intelligent psychiatric disease. The possibility of mental illnesses occurring in artificially intelligent individuals necessitates the consideration that at some level, they may have achieved a mental capability of consciousness, sentience and rationality such that they can subsequently become dysfunctional. The deeper philosophical understanding of these conditions in mankind and artificial intelligences might therefore offer reciprocal insights into mental health and mechanisms that may lead to the prevention of mental dysfunction.
Girela, Jose L; Gil, David; Johnsson, Magnus; Gomez-Torres, María José; De Juan, Joaquín
Fertility rates have dramatically decreased in the last two decades, especially in men. It has been described that environmental factors as well as life habits may affect semen quality. In this paper we use artificial intelligence techniques in order to predict semen characteristics resulting from environmental factors, life habits, and health status, with these techniques constituting a possible decision support system that can help in the study of male fertility potential. A total of 123 young, healthy volunteers provided a semen sample that was analyzed according to the World Health Organization 2010 criteria. They also were asked to complete a validated questionnaire about life habits and health status. Sperm concentration and percentage of motile sperm were related to sociodemographic data, environmental factors, health status, and life habits in order to determine the predictive accuracy of a multilayer perceptron network, a type of artificial neural network. In conclusion, we have developed an artificial neural network that can predict the results of the semen analysis based on the data collected by the questionnaire. The semen parameter that is best predicted using this methodology is the sperm concentration. Although the accuracy for motility is slightly lower than that for concentration, it is possible to predict it with a significant degree of accuracy. This methodology can be a useful tool in early diagnosis of patients with seminal disorders or in the selection of candidates to become semen donors.
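The study above trains a multilayer perceptron on questionnaire data to predict semen parameters. As a minimal, hedged stand-in, a single logistic neuron trained by gradient descent on synthetic questionnaire-like features (the features, labeling rule, and hyperparameters are invented; the actual study used a full MLP and real survey data):

```python
import math
import random

random.seed(2)

def make_sample():
    age = random.uniform(0.4, 1.0)      # age / 40 (normalized, hypothetical)
    smoker = random.randint(0, 1)
    sitting = random.uniform(0.2, 1.0)  # daily sitting hours / 12
    # Invented rule: 1 = concentration in normal range, 0 = below.
    label = 1 if 0.9 - 0.5 * age - 0.6 * smoker - 0.4 * sitting > 0 else 0
    return (age, smoker, sitting), label

data = [make_sample() for _ in range(80)]

w = [0.0, 0.0, 0.0]
b = 0.0

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

lr = 0.5
for epoch in range(400):
    for x, t in data:
        g = predict(x) - t              # gradient of log-loss w.r.t. logit
        for i in range(3):
            w[i] -= lr * g * x[i]
        b -= lr * g

accuracy = sum((predict(x) > 0.5) == (t == 1) for x, t in data) / len(data)
print(accuracy)
```

An MLP generalizes this by stacking hidden layers of such neurons, which lets it capture the nonlinear interactions between life habits that a single neuron cannot.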
Lawson, Denise L.; James, Mark L.
The Spacecraft Health Automated Reasoning Prototype (SHARP) is a system designed to demonstrate automated health and status analysis for multi-mission spacecraft and ground data systems operations. Telecommunications link analysis of the Voyager 2 spacecraft is the initial focus for the SHARP system demonstration which will occur during Voyager's encounter with the planet Neptune in August, 1989, in parallel with real time Voyager operations. The SHARP system combines conventional computer science methodologies with artificial intelligence techniques to produce an effective method for detecting and analyzing potential spacecraft and ground systems problems. The system performs real time analysis of spacecraft and other related telemetry, and is also capable of examining data in historical context. A brief introduction is given to the spacecraft and ground systems monitoring process at the Jet Propulsion Laboratory. The current method of operation for monitoring the Voyager Telecommunications subsystem is described, and the difficulties associated with the existing technology are highlighted. The approach taken in the SHARP system to overcome the current limitations is also described, as well as both the conventional and artificial intelligence solutions developed in SHARP.
Nil Goksel Canbek
In a technology-dominated world, useful and timely information can be accessed quickly via Intelligent Personal Assistants (IPAs). Built into mobile operating systems, these assistants can accomplish a user's daily electronic tasks 24/7. Tasks such as taking dictation, getting turn-by-turn directions, vocalizing email messages, announcing daily appointments, setting reminders, answering factual questions, and invoking apps can be completed by IPAs such as Apple’s Siri, Google Now, and Microsoft Cortana. These assistants, programmed within Artificial Intelligence (AI), create an interaction between human and computer through the natural language used in digital communication. In this regard, the overall purpose of this study is to examine the potential use for learning of IPAs that employ advanced cognitive computing technologies and Natural Language Processing (NLP). To achieve this purpose, the working system of IPAs is reviewed briefly within the scope of AI, which has recently become smarter at predicting, comprehending, and carrying out multi-step and complex user requests.
Parnell, Gregory S.; Rowell, William F.; Valusek, John R.
In recent years there has been increasing interest in applying the computer-based problem-solving techniques of Artificial Intelligence (AI), Operations Research (OR), and Decision Support Systems (DSS) to analyze extremely complex problems. A conceptual framework is developed for successfully integrating these three techniques. First, the fields of AI, OR, and DSS are defined and the relationships among them are explored. Next, a comprehensive adaptive design methodology for AI and OR modeling within the context of a DSS is described. These observations are made: (1) the solution of extremely complex knowledge problems with ill-defined, changing requirements can benefit greatly from the use of the adaptive design process; (2) the field of DSS provides the focus on the decision-making process essential for tailoring solutions to these complex problems; (3) the characteristics of AI, OR, and DSS tools appear to be converging rapidly; and (4) there is a growing need for an interdisciplinary AI/OR/DSS education.
Hedir, Mehdia; Haddad, Boualem
Among the most popular Artificial Intelligence (AI) techniques, the Artificial Neural Network (ANN) and the Support Vector Machine (SVM) were retained to process Ground Echoes (GE) on meteorological radar images taken from Setif (Algeria) and Bordeaux (France), sites with different climates and topographies. To achieve this task, the AI techniques were associated with textural approaches: the Gray Level Co-occurrence Matrix (GLCM) and the Completed Local Binary Pattern (CLBP), both widely used in image analysis. The results show the efficiency of texture in preserving precipitation echoes at both sites, with accuracies of 98% for Bordeaux and 95% for Setif regardless of the AI technique used. SVM suppresses 98% of GE, a rate that outperforms ANN. The CLBP approach associated with SVM eliminates 98% of GE and preserves precipitation echoes better at the Bordeaux site than at Setif's, while exhibiting lower accuracy with ANN. The SVM classifier is well adapted to the proposed application, since the average filtering rate is 95-98% with texture and 92-93% with CLBP. These approaches also remove Anomalous Propagations (APs), with a best accuracy of 97.15% using texture and SVM. Textural features associated with AI techniques are thus an efficient tool for incoherent radars to suppress spurious echoes.
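The GLCM feature-extraction step named above can be sketched in pure NumPy (scikit-image's `graycomatrix` is the usual library tool). This is a minimal sketch of the co-occurrence counting and one Haralick feature (contrast) on a hypothetical 4x4 "radar image" quantized to four gray levels; the SVM/ANN classification stage is omitted.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray Level Co-occurrence Matrix for one pixel displacement (dx, dy)."""
    M = np.zeros((levels, levels), dtype=int)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            i2, j2 = i + dy, j + dx
            if 0 <= i2 < h and 0 <= j2 < w:
                M[img[i, j], img[i2, j2]] += 1  # count the gray-level pair
    return M

def contrast(M):
    """Haralick contrast feature: sum_{i,j} (i - j)^2 * p(i, j)."""
    P = M / M.sum()
    idx = np.arange(M.shape[0])
    return np.sum((idx[:, None] - idx[None, :])**2 * P)

# Toy image: four homogeneous 2x2 blocks with gray levels 0..3.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
M = glcm(img)        # horizontal neighbor pairs: 4 rows x 3 pairs = 12 counts
c = contrast(img and None or M) if False else contrast(M)
```

Texture features such as contrast, computed per pixel neighborhood, would then form the input vector fed to the SVM or ANN classifier.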
A methodology has been conceived for efficient synthesis of dynamical models that simulate common-sense decision-making processes. This methodology is intended to contribute to the design of artificial-intelligence systems that could imitate human common-sense decision making or assist humans in making correct decisions in unanticipated circumstances. This methodology is a product of continuing research on mathematical models of the behaviors of single- and multi-agent systems known in biology, economics, and sociology, ranging from a single-cell organism at one extreme to the whole of human society at the other extreme. Earlier results of this research were reported in several prior NASA Tech Briefs articles, the three most recent and relevant being Characteristics of Dynamics of Intelligent Systems (NPO-21037), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 48; Self-Supervised Dynamical Systems (NPO-30634), NASA Tech Briefs, Vol. 27, No. 3 (March 2003), page 72; and Complexity for Survival of Living Systems (NPO-43302), NASA Tech Briefs, Vol. 33, No. 7 (July 2009), page 62. The methodology involves the concepts reported previously, albeit viewed from a different perspective. One of the main underlying ideas is to extend the application of physical first principles to the behaviors of living systems. Models of motor dynamics are used to simulate the observable behaviors of systems or objects of interest, and models of mental dynamics are used to represent the evolution of the corresponding knowledge bases. For a given system, the knowledge base is modeled in the form of probability distributions and the mental dynamics is represented by models of the evolution of the probability densities or, equivalently, models of flows of information. Autonomy is imparted to the decision-making process by feedback from mental to motor dynamics. This feedback replaces unavailable external information by information stored in the internal knowledge base. Representation
This paper explored the process of investment management, in both theory and practice, in China's mutual fund industry, and reviewed applications of artificial intelligence, including Rule-based Expert Systems, Genetic Algorithms, Artificial Neural Networks, and Support Vector Machines, in financial forecasting, asset allocation, and stock selection. This study proposed the use of an artificial neural network for stock selection which classifies stocks into undervalued stocks (+1), neutral st...
We are currently witnessing an evolution from building and home automation to smart homes, driven by progressing maturity of the Internet of Things and the use of artificial intelligence. However, significant technological challenges such as immature home intelligence, huge network and central...... with autonomous behavior, parallel processing, context awareness, and node communication. In particular, it introduces a novel approach to adapt and distribute the artificial intelligence to match the distributed system architecture in the smart home. The proposed solution addresses important issues such as real......-time learning, temporal detection with a high probability, battery lifetime, network communication, integration with smart objects, and embedded processing power. A multi-agent smart object model is provided to support the artificial intelligence framework with a new distributed architecture. This model focuses...
Chandra Prasetyo Utomo
Breast cancer is the second leading cause of death among women. Early detection followed by appropriate cancer treatment can reduce this deadly risk. Medical professionals can make mistakes while identifying a disease; the help of technologies such as data mining and machine learning can substantially improve diagnosis accuracy. Artificial Neural Networks (ANN) have been widely used in intelligent breast cancer diagnosis. However, the standard Gradient-Based Back Propagation Artificial Neural Network (BP ANN) has some limitations: parameters must be set at the outset, the training process takes a long time, and training may become trapped in local minima. In this research, we implemented an ANN with extreme learning techniques for diagnosing breast cancer based on the Breast Cancer Wisconsin Dataset. Results showed that the Extreme Learning Machine Neural Network (ELM ANN) yields a better-generalizing classifier model than BP ANN. The development of this technique is promising as an intelligent component in medical decision support systems.
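The "extreme learning" idea that avoids the BP ANN limitations listed above — random, untrained hidden weights plus a single least-squares solve for the output layer, with no iterative backpropagation — can be sketched in a few lines. The data here are a synthetic, hypothetical stand-in for the Wisconsin dataset (nine features, binary label from a noisy linear rule), purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 200 samples, 9 features, labels from a noisy linear rule.
X = rng.normal(size=(200, 9))
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(float)

# 1) Random hidden layer -- never trained; this is the "extreme" part of ELM.
n_hidden = 50
W = rng.normal(size=(9, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                     # hidden-layer feature matrix

# 2) Output weights by one least-squares solve instead of iterative backprop.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = (H @ beta > 0.5).astype(float)
train_acc = np.mean(pred == y)
```

Because training reduces to a single linear solve, there are no learning-rate or epoch parameters to tune and no local minima in the output-weight fit, which is exactly the contrast with BP ANN drawn in the abstract.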
Johnston, Karen L; Phillips, Margaret L; Esmen, Nurtan A; Hall, Thomas A
Estimation and Assessment of Substance Exposure (EASE) is an artificial intelligence program developed by the UK's Health and Safety Executive to assess exposure. EASE computes estimated airborne concentrations based on a substance's vapor pressure and the types of controls in the work area. Though EASE is intended only to make broad predictions of exposure in occupational environments, some occupational hygienists might attempt to use EASE for individual exposure characterizations. This study investigated whether EASE would accurately predict actual sampling results from a chemical manufacturing process. Personal breathing zone time-weighted average (TWA) monitoring data for two volatile organic chemicals present in this manufacturing process, a common solvent (toluene) and a specialty monomer (chloroprene), were compared to EASE-generated estimates. EASE-estimated concentrations for specific tasks were weighted by task durations reported in the monitoring record to yield TWA estimates from EASE that could be directly compared to the measured TWA data. Two hundred and six chloroprene and toluene full-shift personal samples were selected from eight areas of this manufacturing process. The Spearman correlation between EASE TWA estimates and measured TWA values was 0.55 for chloroprene and 0.44 for toluene, indicating moderate predictive value for both compounds. For toluene, the interquartile range of EASE estimates at least partially overlapped the interquartile range of the measured data distributions in all process areas. The interquartile range of EASE estimates for chloroprene fell above the interquartile range of the measured data distributions in one process area, partially overlapped the third quartile of the measured data in five process areas, and fell within the interquartile range in two process areas. EASE is not a substitute for actual exposure monitoring. However, EASE can be used in conditions that cannot otherwise be sampled and in preliminary
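The two computations at the heart of the comparison — weighting per-task estimates by task duration to get a TWA, and rank-correlating estimates with measurements — can be sketched directly. All concentrations, durations, and area values below are hypothetical illustrations, not the study's data.

```python
import numpy as np

def twa(concs, durations):
    """Time-weighted average from per-task concentrations and durations."""
    concs, durations = np.asarray(concs, float), np.asarray(durations, float)
    return (concs * durations).sum() / durations.sum()

def spearman(x, y):
    """Spearman rank correlation (no ties): Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean(); ry -= ry.mean()
    return (rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry))

# Hypothetical 8-hour shift: three EASE-style task estimates (ppm) and durations (h).
shift_twa = twa([5.0, 50.0, 10.0], [4.0, 1.0, 3.0])   # (20 + 50 + 30) / 8

# Hypothetical EASE estimates vs. measured TWAs across eight areas.
ease = np.array([3.0, 8.0, 6.0, 12.0, 4.0, 15.0, 9.0, 7.0])
measured = np.array([2.5, 9.0, 5.0, 10.0, 6.0, 14.0, 8.5, 6.5])
rho = spearman(ease, measured)
```

In practice `scipy.stats.spearmanr` handles ties and p-values; the manual rank version above only shows the idea behind the 0.55/0.44 figures quoted in the abstract.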
In this article we evaluated artificial intelligence research from 1990 to 2014 using bibliometric analysis, introducing spatial analysis and social network analysis as geographic information retrieval methods for spatially-explicit bibliometrics. The study is based on data obtained from the Science Citation Index Expanded (SCI-Expanded) and Conference Proceedings Citation Index-Science (CPCI-S) databases. Our results reveal scientific outputs, subject categories and main journals, author productivity and geographic distribution, international productivity and collaboration, and hot issues and research trends. The growth of article outputs in artificial intelligence research has exploded since the 1990s, along with increasing collaboration, references, and citations. Computer science and engineering were the most frequently used subject categories in artificial intelligence studies. The top twenty productive authors are distributed across countries with high investment in research and development. The United States has the highest number of top research institutions in artificial intelligence, producing the most single-country and collaborative articles. Although there is more and more collaboration among institutions, cooperation, especially international cooperation, is not as prevalent in artificial intelligence research as expected. The keyword analysis revealed interesting research preferences and confirmed that methods, models, and applications occupy the central position in artificial intelligence. Further, we found interesting related keywords with high co-occurrence frequencies, which have helped identify new models and application areas in recent years. The bibliometric results from our study will greatly facilitate the understanding of progress and trends in artificial intelligence, particularly for researchers interested in domain-specific AI-driven problem-solving. This will be
Pathak, Lakshmi; Singh, Vineeta; Niwas, Ram; Osama, Khwaja; Khan, Saif; Haque, Shafiul; Tripathi, C K M; Mishra, B N
Cholesterol oxidase (COD) is a bi-functional FAD-containing oxidoreductase which catalyzes the oxidation of cholesterol into 4-cholesten-3-one. The wider biological functions and clinical applications of COD have urged the screening, isolation, and characterization of newer microbes from diverse habitats as sources of COD, and the optimization and over-production of COD for various uses. The practicability of statistical and artificial intelligence techniques, such as response surface methodology (RSM), artificial neural networks (ANN), and genetic algorithms (GA), has been tested to optimize the medium composition for the production of COD from the novel strain Streptomyces sp. NCIM 5500. All experiments were performed according to a five-factor central composite design (CCD) and the generated data were analysed using RSM and ANN. GA was employed to optimize the models generated by RSM and ANN. Based upon the predicted COD concentration, the model developed with ANN was found to be superior to the model developed with RSM. The RSM-GA approach predicted a maximum of 6.283 U/mL COD production, whereas the ANN-GA approach predicted a maximum of 9.93 U/mL. The optimum concentrations of the medium variables predicted through the ANN-GA approach were: 1.431 g/50 mL soybean, 1.389 g/50 mL maltose, 0.029 g/50 mL MgSO4, 0.45 g/50 mL NaCl, and 2.235 mL/50 mL glycerol. The experimental COD concentration was concurrent with the GA-predicted yield and led to 9.75 U/mL COD production, nearly two times higher than the yield (4.2 U/mL) obtained with the un-optimized medium. To our knowledge, this is the first report of statistical versus artificial intelligence based modeling and optimization of COD production by Streptomyces sp. NCIM 5500.
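The GA step — searching a trained surrogate model for the medium composition that maximizes predicted yield — can be sketched with a minimal real-coded genetic algorithm. The quadratic "response surface" below is a hypothetical stand-in for the paper's fitted RSM/ANN models, reduced to two scaled medium variables; the 9.9 U/mL optimum and the (0.6, 0.4) location are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical surrogate for COD yield (U/mL) over two scaled medium variables.
def yield_model(x):
    x1, x2 = x[..., 0], x[..., 1]
    return 9.9 - 3.0 * (x1 - 0.6)**2 - 2.0 * (x2 - 0.4)**2   # optimum at (0.6, 0.4)

# Real-coded GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform(0, 1, size=(40, 2))          # 40 candidate media in [0, 1]^2
for gen in range(60):
    fit = yield_model(pop)
    # Binary tournament selection
    a, b = rng.integers(0, 40, size=(2, 40))
    parents = np.where((fit[a] > fit[b])[:, None], pop[a], pop[b])
    # Blend crossover between consecutive parents
    w = rng.uniform(size=(40, 1))
    children = w * parents + (1 - w) * np.roll(parents, 1, axis=0)
    # Gaussian mutation, clipped back into the feasible box
    children += rng.normal(scale=0.05, size=children.shape)
    pop = np.clip(children, 0, 1)

best = pop[np.argmax(yield_model(pop))]
best_yield = yield_model(best)
```

The same loop applies unchanged whether the fitness function is a quadratic RSM polynomial or a trained ANN's prediction, which is exactly the RSM-GA vs. ANN-GA comparison the paper makes.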
Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.
This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in the model input as well as from non-uniqueness in selecting among AI methods. Using a single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address this issue: it estimates hydraulic conductivity by averaging the outputs of the AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates within-model variances to account for uncertainty propagation from input data to AI model output, and evaluates between-model variances to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), an artificial neural network (ANN), and a neuro-fuzzy (NF) model to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than any individual model. While NF was expected to be the best AI model owing to its combination of TS-FL and ANN, it was nearly discarded by the parsimony principle. The TS-FL and ANN models showed equal importance although their hydraulic conductivity estimates were quite different, resulting in significant between-model variances that would normally be ignored when using a single AI model.
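The BIC-weighted averaging and the within/between variance decomposition described above can be sketched as follows. The three model estimates, variances, and BIC values are hypothetical stand-ins for the TS-FL, ANN, and NF outputs; note how a large BIC (the third model here) is effectively discarded by the parsimony-driven weight, mirroring the NF result in the abstract.

```python
import numpy as np

# Hypothetical hydraulic-conductivity estimates (m/day) at one location from
# three AI models, with their within-model variances and BIC scores.
means = np.array([4.2, 5.1, 4.6])
within_var = np.array([0.30, 0.25, 0.40])
bic = np.array([112.0, 113.5, 140.0])    # lower BIC = more parsimonious fit

# BMA weights from BIC differences: w_k proportional to exp(-dBIC_k / 2).
delta = bic - bic.min()
w = np.exp(-delta / 2.0)
w /= w.sum()

bma_mean = np.sum(w * means)                              # averaged estimate
between_var = np.sum(w * (means - bma_mean)**2)           # model non-uniqueness
total_var = np.sum(w * within_var) + between_var          # full BMA variance
```

The `total_var` line is the point of the method: a single-model analysis would report only its own `within_var` and miss the `between_var` term entirely.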
Bladin, Peter F
With William Lennox's announcement at the 1935 London International Neurology Congress of the use of electroencephalography in the study of epilepsy, it became evident that a powerful new technique for the investigation of seizures had been discovered. William Grey Walter, a young researcher finishing his postgraduate studies at Cambridge, was selected to construct EEG equipment and study its use in clinical neurology at the Maudsley Hospital, London. His hugely productive pioneering career in the use of EEG would eventually lead to groundbreaking work in other fields: the emerging sciences of robotics, cybernetics, and early artificial intelligence. This historical note documents his landmark contributions to clinical neurophysiology, both in epileptology and in tumour detection.
Ekobo Akoa, Brice; Simeu, Emmanuel; Lebowsky, Fritz
This paper proposes two novel approaches to Video Quality Assessment (VQA). Both approaches attempt to develop video evaluation techniques capable of replacing human judgment when rating video quality in subjective experiments. The underlying study consists of selecting fundamental quality metrics based on Human Visual System (HVS) models and using artificial intelligence solutions as well as advanced statistical analysis. This new combination enables suitable video quality ratings while taking multiple quality metrics as input. The first method uses a neural-network-based machine learning process. The second evaluates video quality using a non-linear regression model. The efficiency of the proposed methods is demonstrated by comparing their results with those of existing work on synthetic video artifacts. The results obtained by each method are compared with scores from a database built from subjective experiments.
Pennock, K A
This assessment of artificial intelligence (AI) has been prepared for the US Army's Depot System Command (DESCOM) by Pacific Northwest Laboratory. The report describes several of the more promising AI technologies, focusing primarily on knowledge-based systems because they have been more successful in commercial applications than any other AI technique. The report also identifies potential depot applications in the areas of procedural support, scheduling and planning, automated inspection, training, diagnostics, and robotic systems. One of the principal objectives of the report is to help decision-makers within DESCOM evaluate AI as a possible tool for solving individual depot problems. The report identifies a number of factors that should be considered in such evaluations. 22 refs.
Intelligent transportation systems (ITS) are gaining acceptance around the world, and the connected vehicle component of ITS is recognized as a high-priority research and development area in many technologically advanced countries. Connected vehicles are expected to be capable of safe, efficient, and eco-driving operations, whether under human control or in an adaptive machine-control mode of operation. The race is on to design the capability to operate in a connected traffic environment. The operational requirements can be met with cognitive vehicle design features made possible by advances in artificial-intelligence-supported methodology, improved understanding of human factors, and advances in communication technology. This paper describes cognitive features and their information system requirements. The architecture of an information system is presented that supports the features of the cognitive connected vehicle. For better focus, information processing capabilities are specified and the role of Bayesian artificial intelligence in data fusion is defined. Example applications illustrate the role of information systems in integrating intelligent technology, Bayesian artificial intelligence, and abstracted human factors. Concluding remarks highlight the role of the information system and Bayesian artificial intelligence in the design of a new generation of cognitive connected vehicles.
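The simplest Bayesian data-fusion building block alluded to above is precision-weighted combination of two Gaussian estimates of the same quantity. The scenario and numbers below are hypothetical (a radar range and a V2V-communicated GPS range to the lead vehicle), a minimal sketch rather than the paper's architecture.

```python
def fuse(mu1, var1, mu2, var2):
    """Precision-weighted Bayesian fusion of two Gaussian estimates."""
    w1, w2 = 1.0 / var1, 1.0 / var2      # precisions
    var = 1.0 / (w1 + w2)                # fused variance (always smaller)
    mu = var * (w1 * mu1 + w2 * mu2)     # precision-weighted mean
    return mu, var

# Hypothetical: distance to lead vehicle from radar (24.0 m, sigma 0.5 m)
# and from a V2V-shared GPS position (25.0 m, sigma 1.5 m).
mu, var = fuse(24.0, 0.5**2, 25.0, 1.5**2)
```

The fused estimate leans toward the more precise sensor, and its variance is lower than either input's, which is what makes fusion worthwhile for a control decision.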
The Monte Carlo simulation method for turbomachinery uncertainty analysis often requires performing a huge number of simulations, a computational cost that can be greatly alleviated with metamodeling techniques. An intensive comparative study was performed on the approximation performance of three prospective artificial intelligence metamodels: artificial neural network, radial basis function, and support vector regression. The genetic algorithm was used to optimize the predetermined parameters of each metamodel for the sake of a fair comparison. Through testing on 10 nonlinear functions with different problem scales and sample sizes, the genetic algorithm–support vector regression metamodel was found to be more accurate and robust than the other two. Accordingly, it was selected and combined with the Monte Carlo simulation method for the uncertainty analysis of a wind turbine airfoil under two types of surface roughness uncertainty. The results show that the genetic algorithm–support vector regression metamodel captures well the uncertainty propagation from the surface roughness to the airfoil aerodynamic performance. This work is useful to the application of metamodeling techniques in the robust design optimization of turbomachinery.
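The train-a-surrogate-then-Monte-Carlo workflow can be sketched end to end. Here a polynomial fit stands in for the paper's GA-tuned SVR metamodel, and the "expensive" model, the roughness distribution, and all coefficients are hypothetical; the point is the two-stage structure, not the specific surrogate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for an expensive CFD evaluation: drag-like coefficient as a
# function of surface roughness height r (arbitrary units, invented).
def true_model(r):
    return 0.01 + 0.002 * r + 0.0005 * r**2

# 1) Train a cheap surrogate on a handful of "simulations".
r_train = np.linspace(0.0, 5.0, 8)
y_train = true_model(r_train)
coeffs = np.polyfit(r_train, y_train, deg=2)   # polynomial here; GA-SVR in the paper
surrogate = np.poly1d(coeffs)

# 2) Monte Carlo uncertainty propagation through the surrogate:
# roughness uncertainty as a (truncated-to-positive) normal distribution.
r_samples = np.abs(rng.normal(loc=2.0, scale=0.5, size=100_000))
cd = surrogate(r_samples)
cd_mean, cd_std = cd.mean(), cd.std()
```

Each Monte Carlo sample costs one polynomial evaluation instead of one CFD run, which is the entire economy that metamodeling buys.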
The issues of industrial productivity and economic competitiveness are of major significance in the U.S. at present. By advancing the science of design, and by creating a broad computer-based methodology for automating the design of artifacts and of industrial processes, we can attain dramatic improvements in productivity. It is our thesis that developments in computer science, especially in Artificial Intelligence (AI) and in related areas of advanced computing, provide us with a unique opportunity to push beyond the present level of computer aided automation technology and to attain substantial advances in the understanding and mechanization of design processes. To attain these goals, we need to build on top of the present state of AI, and to accelerate research and development in areas that are especially relevant to design problems of realistic complexity. We propose an approach to the special challenges in this area, which combines 'core work' in AI with the development of systems for handling significant design tasks. We discuss the general nature of design problems, the scientific issues involved in studying them with the help of AI approaches, and the methodological/technical issues that one must face in developing AI systems for handling advanced design tasks. Looking at basic work in AI from the perspective of design automation, we identify a number of research problems that need special attention. These include finding solution methods for handling multiple interacting goals, formation problems, problem decompositions, and redesign problems; choosing representations for design problems with emphasis on the concept of a design record; and developing approaches for the acquisition and structuring of domain knowledge with emphasis on finding useful approximations to domain theories. Progress in handling these research problems will have major impact both on our understanding of design processes and their automation, and also on several fundamental questions
This book presents recently developed intelligent techniques, with applications and theory, in the area of engineering management. The applications of intelligent techniques such as neural networks, fuzzy sets, tabu search, and genetic algorithms will be useful for engineering managers, postgraduate students, researchers, and lecturers. The book has been written with the contents of a classical engineering management book in mind, but intelligent techniques are used for handling the engineering management problem areas. This comprehensive character makes the book an excellent reference for the solution of complex problems of engineering management. The authors of the chapters are well-known researchers with previous works in the area of engineering management.
The potential applications for mobile robots are enormous. Mobile robots must quickly and robustly perform useful tasks in previously unknown, dynamic, and challenging environments. Navigation plays a key role in all mobile robot activities and tasks, such as path planning. Mobile robots are machines that navigate around their environment, gathering sensory information about it and performing actions based on that information. Localization is basic to navigation, and various techniques have been described for estimating the orientation and position of a mobile robot. Navigation may be defined as the process of guiding the movement of intelligent vehicle systems from one location to another, with the support of various types of sensors, across different environments such as indoor, outdoor, and other complex settings, using various navigation methods. This paper reviews the following mobile robot systems used in navigation and localization: (1) odometry, (2) magnetic compass, (3) active beacons, (4) global positioning system, (5) landmark navigation, and (6) pattern matching.
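Odometry, the first localization technique in the list above, can be sketched as the standard dead-reckoning pose update for a differential-drive robot from wheel-encoder displacements. The wheel base and displacements below are hypothetical example values.

```python
import math

def odometry_step(x, y, theta, d_left, d_right, wheel_base):
    """Dead-reckoning pose update from left/right wheel displacements (m)."""
    d = (d_left + d_right) / 2.0                 # distance travelled by center
    d_theta = (d_right - d_left) / wheel_base    # heading change (rad)
    # Integrate along the arc using the midpoint heading.
    x += d * math.cos(theta + d_theta / 2.0)
    y += d * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Straight-line motion: both wheels advance 1.0 m.
pose = (0.0, 0.0, 0.0)
pose = odometry_step(*pose, d_left=1.0, d_right=1.0, wheel_base=0.5)
# Turn in place: wheels move equal distances in opposite directions.
pose = odometry_step(*pose, d_left=-0.1, d_right=0.1, wheel_base=0.5)
```

Because each step integrates noisy encoder increments, odometry error grows without bound over time, which is why the survey pairs it with absolute references such as beacons, GPS, and landmarks.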
Alan Turing pioneered many research areas, including artificial intelligence, computability, heuristics, and pattern formation. In today's information age, it is hard to imagine how the world would be without computers and the Internet. Without Turing's work, especially the core concept of the Turing Machine at the heart of every computer, mobile phone, and microchip today, so many things on which we are so dependent would be impossible. 2012 is the Alan Turing year -- a centenary celebration of the life and work of Alan Turing. To celebrate Turing's legacy and follow in the footsteps of this brilliant mind, we take this golden opportunity to review the latest developments in artificial intelligence, evolutionary computation, and metaheuristics, all areas that can be traced back to Turing's pioneering work. Topics include the Turing test, the Turing machine, artificial intelligence, cryptography, software testing, image processing, neural networks, and nature-inspired algorithms such as the bat algorithm and cuckoo sear...
Place, J F; Truchaud, A; Ozawa, K; Pardue, H; Schnipelsky, P
The incorporation of information-processing technology into analytical systems in the form of standard computing software has recently been advanced by the introduction of artificial intelligence (AI), both as expert systems and as neural networks. This paper considers the role of software in system operation, control, and automation, and attempts to define intelligence. AI is characterized by its ability to deal with incomplete and imprecise information and to accumulate knowledge. Expert systems, building on standard computing techniques, depend heavily on the domain experts and knowledge engineers who have programmed them to represent the real world. Neural networks are intended to emulate the pattern-recognition and parallel-processing capabilities of the human brain and are taught rather than programmed. The future may lie in a combination of the recognition ability of the neural network and the rationalization capability of the expert system. In the second part of the paper, examples are given of applications of AI in stand-alone systems for knowledge engineering and medical diagnosis, and in embedded systems for failure detection, image analysis, user interfacing, natural language processing, robotics, and machine learning, as related to clinical laboratories. It is concluded that AI constitutes a collective form of intellectual property, and that there is a need for better documentation, evaluation, and regulation of the systems already being used in clinical laboratories.
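The expert-system half of the combination described above rests on a forward-chaining inference loop: fire every rule whose conditions are satisfied, add its conclusion to the fact base, and repeat until nothing changes. The rules below are hypothetical toy lab-diagnosis rules for illustration only, not clinical guidance.

```python
# Each rule: (set of required facts, conclusion to assert). Hypothetical rules.
rules = [
    ({"glucose_high", "fasting"}, "suspect_diabetes"),
    ({"suspect_diabetes", "hba1c_high"}, "diabetes_likely"),
    ({"diabetes_likely"}, "recommend_confirmatory_test"),
]

def forward_chain(facts, rules):
    """Fire rules until a fixed point: no rule adds a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = forward_chain({"glucose_high", "fasting", "hba1c_high"}, rules)
```

The fixed-point loop is what lets chained rules build on each other: the second rule only fires after the first has asserted "suspect_diabetes".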
Yaseen, Zaher Mundher; El-shafie, Ahmed; Jaafar, Othman; Afan, Haitham Abdulmohsin; Sayl, Khamis Naba
The use of Artificial Intelligence (AI) has increased since the middle of the 20th century, as seen in its application to a wide range of engineering and science problems. The last two decades, for example, have seen a dramatic increase in the development and application of various types of AI approaches for stream-flow forecasting. Generally speaking, AI has exhibited significant progress in forecasting and modeling non-linear hydrological applications and in capturing the noise complexity in the dataset. This paper explores the state-of-the-art application of AI in stream-flow forecasting, focusing on the data-driven character of AI, the advantages of complementary models, the relevant literature, and possible future applications in modeling and forecasting stream-flow. The review also identifies major challenges and opportunities for prospective research, including a new scheme for modeling inflow, a novel method for preprocessing time-series frequency based on Fast Orthogonal Search (FOS) techniques, and Swarm Intelligence (SI) as an optimization approach.
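The data-driven forecasting idea surveyed above, in its simplest form, is regression on lagged flow values. The sketch below generates a synthetic "stream-flow" series from a known AR(2) process and recovers the lag coefficients by least squares; it is a toy stand-in for the AI forecasters in the review, with all coefficients invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic flow series from a known AR(2) process (hypothetical coefficients).
n = 5000
q = np.zeros(n)
for t in range(2, n):
    q[t] = 0.6 * q[t-1] + 0.3 * q[t-2] + rng.normal(scale=0.1)

# Fit a lag-2 forecaster by least squares on (q_{t-1}, q_{t-2}) -> q_t.
X = np.column_stack([q[1:-1], q[:-2]])   # lags 1 and 2
y = q[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead forecast from the last two observations.
forecast_next = coef @ q[[-1, -2]]
```

Neural and fuzzy forecasters in the surveyed literature replace this linear map with a non-linear one, but the lagged-input structure of the problem is the same.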
The ideas that gave birth to the computer age. Alan Turing, pioneer of computing and WWII codebreaker, was one of the most important and influential thinkers of the twentieth century. In this volume his key writings are made available for the first time to a broad, non-specialist readership. They make fascinating reading both in their own right and for their historic significance: contemporary computational theory, cognitive science, artificial intelligence, and artificial life all spring from this ground-breaking work, which is also rich in philosophical and logical insight. An introduction
Artificial Intelligence is a huge breakthrough technology that is changing our world. It requires some degree of technical skill to be developed and understood, so in this book we are first going to define AI and categorize it in non-technical language. We will explain how we reached this phase and what happened to artificial intelligence historically over the last century. Recent advancements in machine learning, neuroscience, and artificial intelligence technology will be addressed, and new business models introduced for and by artificial intelligence research will be analyzed. Finally, we will describe the investment landscape through a quite comprehensive study of almost 14,000 AI companies, and we will discuss important features and characteristics of both AI investors and investments. This is the “Internet of Thinks” era. AI is revolutionizing the world we live in. It is augmenting human experience, and it aims to amplify human intelligence in a future not so distant from...
Jaime Alberto Díaz Limón
This year on September 19th, Sony CSL, a software developer company, announced to the world the creation of the first musical work whose ownership belongs to Artificial Intelligence. This paper analyzes the legal consequences of such a statement and its conceptual and legal limits within the Copyright universe (with grounding in International Treaties), in order to assess whether we are in the presence of a new legal-authorial figure that invites us to reconsider the subjects of protection in our laws, or whether the applicable normativity may resolve these hypotheses in favor of Artificial Intelligence instead of juridical persons.
Ortiz S, J.J
In this work two techniques of artificial intelligence, neural networks and genetic algorithms, were applied to a practical problem of nuclear fuel management: the determination of the optimal fuel reload for a BWR type reactor. This is an important problem in the design of the operation cycle of the reactor. As a result of the application of these techniques, reload proposals comparable to, or even better than, those given by expert companies in the subject were obtained. Additionally, two other simpler problems in reactor physics were solved: the determination of the axial power profile and the prediction of the value of some variables of interest at the end of the operation cycle of the reactor. Neural networks and genetic algorithms have been applied to solve many problems of engineering because of their versatility, but they have rarely been used in the area of fuel management. The results obtained in this thesis indicate the convenience of undertaking further work in this area and suggest the application of these techniques of artificial intelligence to the solution of other problems in nuclear reactor physics. (Author)
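The abstract gives no implementation details, but the reload-optimization idea can be illustrated with a toy permutation genetic algorithm. Everything below is hypothetical: the fitness function merely stands in for a real core-simulator evaluation of a candidate fuel-assembly ordering, and the population sizes are arbitrary.

```python
import random

def toy_fitness(perm):
    # Hypothetical stand-in for a core-simulator evaluation:
    # rewards orderings close to an arbitrary target arrangement.
    return -sum(abs(p - i) for i, p in enumerate(perm))

def evolve(n_assemblies=12, pop_size=30, generations=200, seed=1):
    rng = random.Random(seed)
    # Each individual is a permutation: a candidate loading order.
    pop = [rng.sample(range(n_assemblies), n_assemblies) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=toy_fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # elitist truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(n_assemblies), 2)
            child[i], child[j] = child[j], child[i]  # swap mutation keeps it a permutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=toy_fitness)

best = evolve()
```

The swap mutation is the key design choice for reload problems: unlike bit-flip operators, it always yields a valid permutation, so every candidate remains a physically meaningful assignment of assemblies to positions.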
Zhao, Nan; Xu, Ziliang
The artificial breeding technology for juveniles of Whitmania pigra was introduced in the paper, including selection of sites and water quality, construction of spawning pools, hatching pools and escape-proof facilities, and key techniques of leech selection, feeding, cocoon hatching, juvenile feeding and management.
This volume introduces new approaches in intelligent control area from both the viewpoints of theory and application. It consists of eleven contributions by prominent authors from all over the world and an introductory chapter. This volume is strongly connected to another volume entitled "New Approaches in Intelligent Image Analysis" (Eds. Roumen Kountchev and Kazumi Nakamatsu). The chapters of this volume are self-contained and include summary, conclusion and future works. Some of the chapters introduce specific case studies of various intelligent control systems and others focus on intelligent theory based control techniques with applications. A remarkable specificity of this volume is that three chapters are dealing with intelligent control based on paraconsistent logics.
Purpose of the article: To examine suitable methods of artificial neural networks and their application in business operations, specifically in supply chain management. The article discusses construction of an artificial neural network model that can be used to facilitate optimization of inventory levels and thus improve the ordering system and inventory management. Data from the area of wholesale trade with connecting material are used for the analysis. Methodology/methods: The methods used in the paper consist especially of artificial neural networks and ANN-based modelling. For data analysis and preprocessing, MS Office Excel is used. As an instrument for neural network forecasting, the MathWorks MATLAB Neural Network Toolbox was used. Deductive quantitative methods are also used in the research. Scientific aim: The effort is directed at finding whether prediction using artificial neural networks is suitable as a tool for enhancing the ordering system of an enterprise. The research also focuses on finding which architecture of the artificial neural network model is the most suitable for subsequent prediction. Findings of the research show that artificial neural network models can be used successfully for inventory management and the lot-sizing problem. A network with the TRAINGDX training function, TANSIG transfer function and a 6-8-1 architecture can be considered the most suitable artificial neural network, as it shows the best results for subsequent prediction. Conclusions resulting from the paper are beneficial for further research. It can be concluded that the created artificial neural network model can be successfully used for predicting order size and therefore for improving the order cycle of an enterprise.
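The 6-8-1 architecture with a tanh ("TANSIG") hidden layer that the abstract reports can be sketched in plain NumPy. This is not the authors' MATLAB model: the training data are synthetic, and plain batch gradient descent stands in for the TRAINGDX variant (gradient descent with momentum and adaptive learning rate).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the (not publicly available) ordering data:
# 6 input features per record, one target order size.
X = rng.normal(size=(200, 6))
y = X[:, :3].sum(axis=1, keepdims=True) + 0.1 * rng.normal(size=(200, 1))

# 6-8-1 network: 6 inputs, 8 tanh hidden units, 1 linear output.
W1 = rng.normal(scale=0.5, size=(6, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)      # TANSIG-style hidden layer
    return h, h @ W2 + b2         # linear output layer

def mse(pred):
    return float(np.mean((pred - y) ** 2))

_, pred0 = forward(X)
initial_error = mse(pred0)

lr = 0.05
for _ in range(500):                        # plain batch gradient descent
    h, pred = forward(X)
    d_out = 2 * (pred - y) / len(X)         # dMSE/dpred
    dW2 = h.T @ d_out; db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)     # backprop through tanh
    dW1 = X.T @ d_h; db1 = d_h.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, pred = forward(X)
final_error = mse(pred)
```

For a real ordering system, the 6 inputs would be historical demand features and the trained network's output would feed the lot-sizing decision; the training loop above only demonstrates that the 6-8-1 shape fits the regression task.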
Potter, Scott S.; Woods, David D.
The beginning of a research effort to collect and integrate existing research findings about how to combine computer power and people is discussed, including problems and pitfalls as well as desirable features. The goal of the research is to develop guidance for the design of human interfaces with intelligent systems. Fault management tasks in NASA domains are the focus of the investigation. Research is being conducted to support the development of guidance that will enable designers to take human interface considerations into account during the creation of intelligent systems.
We are concerned in this multidisciplinary study with learning classification procedures from known cases. More precisely, we provide a diagnostic model that discriminates between cerebellum-pontine angle (CPA) tumors and otorhinolaryngological (ENT) disorders. At present, even an accurate neurological and/or otorhinolaryngological examination cannot establish a diagnosis of CPA tumor without resorting to expensive radiological imagery (CT and MRI). The proposed model was obtained through artificial intelligence methods and presented a good accuracy level (88.4%) when tested against new cases, considering only clinical examination without radiological imagery results.
Sojda, Richard S.
The number of trumpeter swans (Cygnus buccinator) breeding in the Tri-State area where Montana, Idaho, and Wyoming come together has declined to just a few hundred pairs. However, these birds are part of the Rocky Mountain Population which additionally has over 3,500 birds breeding in Alberta, British Columbia, Northwest Territories, and Yukon Territory. To a large degree, these birds seem to have abandoned traditional migratory pathways in the flyway. Waterfowl managers have been interested in decision support tools that would help them explore simulated management scenarios in their quest towards reaching population recovery and the reestablishment of traditional migratory pathways. I have developed a decision support system to assist biologists with such management, especially related to wetland ecology. Decision support systems use a combination of models, analytical techniques, and information retrieval to help develop and evaluate appropriate alternatives. Swan management is a domain that is ecologically complex, and this complexity is compounded by spatial and temporal issues. As such, swan management is an inherently distributed problem. Therefore, the ecological context for modeling swan movements in response to management actions was built as a multiagent system of interacting intelligent agents that implements a queuing model representing swan migration. These agents accessed ecological knowledge about swans, their habitats, and flyway management principles from three independent expert systems. The agents were autonomous, had some sensory capability, and could respond to changing conditions. A key problem when developing ecological decision support systems is empirically determining that the recommendations provided are valid. Because Rocky Mountain trumpeter swans have been surveyed for a long period of time, I was able to compare simulated distributions provided by the system with actual field observations across 20 areas for the period 1988
McCarthy, Tessa; Rosenblum, L. Penny; Johnson, Benny G.; Dittel, Jeffrey; Kearns, Devin M.
Introduction: This study evaluated the usability and effectiveness of an artificial intelligence Braille Tutor designed to supplement the instruction of students with visual impairments as they learned to write braille contractions. Methods: A mixed-methods design was used, which incorporated a single-subject, adapted alternating treatments design…
Stiffler, A. Kent
The general operation of KATE, an artificial intelligence controller, is outlined. A shuttle environmental control system (ECS) demonstration system for KATE is explained. The knowledge base model for this system is derived. An experimental test procedure is given to verify parameters in the model.
Raine, Roxanne B.; Akker, op den Rieks; Cai, Zhiqiang; Graesser, Arthur C.; McNamara, Danielle S.
The domain of artificial intelligence (AI) progresses with extraordinary vicissitude. Whereas prior authors have divided AI into the two categories of analysis and synthesis, Raine and op den Akker distinguish between four types of AI: that of appearance, function, simulation and interpretation. The
Pyayt, A.L.; Mokhov, I.I.; Kozionov, A.; Kusherbaeva, V.; Melnikova, N.B.; Krzhizhanovskaya, V.V.; Meijer, R.J.
We present a hybrid approach to monitoring the stability of flood defence structures equipped with sensors. This approach combines the finite element modelling with the artificial intelligence for real-time signal processing and anomaly detection. This combined method has been developed for the Urba
QIAO, Jianping; ZHU, Axing; CHEN, Yongbo; WANG, Rongxun
Artificial intelligence has been used to obtain background factors (basic environmental factors) from landslide specialists. A 3D visible evaluation map may be charted by fuzzy evaluation, and the traditional plane map may be converted into a 3D map by using factor weights from the specialist system together with RS and GIS technology for quantitative sampling of these factors.
This document contains the full and short papers on artificial intelligence in education from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction) covering the following topics: a computational model for learners' motivation states in individualized tutoring system; a…
This review contains an overview of past and present trends in the application of what is called "artificial intelligence" in traditional face-to-face education and in distance education. The reviewed trends are illustrated with examples of research projects and results throughout the world. The first section of the review discusses intelligence…
McArthur, David; Lewis, Matthew; Bishary, Miriam
This report begins by summarizing current applications of ideas from artificial intelligence (Al) to education. It then uses that summary to project various future applications of Al--and advanced technology in general--to education, as well as highlighting problems that will confront the wide scale implementation of these technologies in the…
Discussion of the possibilities of introducing artificial intelligence (AI) into the undergraduate curriculum highlights the introduction of AI in an introduction to information processing course for business students at George Washington University. Topics discussed include robotics, expert systems prototyping in class, and the interdisciplinary…
Gunning, David; PARC; Yeh, Peter Z.; Nuance Communications
This issue features expanded versions of articles selected from the 2015 AAAI Conference on Innovative Applications of Artificial Intelligence held in Austin, Texas. We present a selection of four articles describing deployed applications plus two more articles that discuss work on emerging applications.
Porter, Bruce; Cheetham, William
We are very pleased to republish here extended versions of a sample of the papers drawn from the Innovative Applications of Artificial Intelligence Conference (IAAI-06), which was held July 17-20, 2006, in Boston, Massachusetts. Three of these articles describe deployed applications and two describe emerging applications.
Atkinson, D. J.
A view of the capabilities and areas of artificial intelligence research which are required for autonomous space telerobotics extending through the year 2000 is given. In the coming years, JPL will be conducting directed research to achieve these capabilities, as well as drawing heavily on collaborative efforts conducted with other research laboratories.
A course was developed to introduce students at a community college to four major areas of emphasis in emerging technologies: FORTH programming language, elementary electronic theory, robotics, and artificial intelligence. After a needs assessment indicated the importance of such a course, a pretest focusing on the four areas was given to students…
Pijls, Fieny; And Others
Discusses grammar and spelling instruction in The Netherlands for students aged 10-15 and describes an intelligent computer-assisted instructional environment consisting of a linguistic expert system, a didactic module, and a student interface. Three prototypes are described: BOUWSTEEN and COGO for analyzing sentences, and TDTDT for conjugating…
Stubbs, Malcolm; Piddock, Peter
Discussion of intelligent computer assisted learning (CAL) systems considers both those that offer natural language communication to the user and those that are adaptive, generative, or self-improving. Current interest in student-built learning environments (exemplified by work with LOGO and PROLOG) is examined, and obstacles to future intelligent…
Cholesterol oxidase (COD) is a bi-functional FAD-containing oxidoreductase which catalyzes the oxidation of cholesterol into 4-cholesten-3-one. The wider biological functions and clinical applications of COD have urged the screening, isolation and characterization of newer microbes from diverse habitats as a source of COD, and the optimization and over-production of COD for various uses. The practicability of statistical/artificial intelligence techniques, such as response surface methodology (RSM), artificial neural networks (ANN) and genetic algorithms (GA), has been tested to optimize the medium composition for the production of COD from the novel strain Streptomyces sp. NCIM 5500. All experiments were performed according to a five-factor central composite design (CCD) and the generated data were analysed using RSM and ANN. GA was employed to optimize the models generated by RSM and ANN. Based upon the predicted COD concentration, the model developed with ANN was found to be superior to the model developed with RSM. The RSM-GA approach predicted a maximum of 6.283 U/mL COD production, whereas the ANN-GA approach predicted a maximum of 9.93 U/mL COD concentration. The optimum concentrations of the medium variables predicted through the ANN-GA approach were: 1.431 g/50 mL soybean, 1.389 g/50 mL maltose, 0.029 g/50 mL MgSO4, 0.45 g/50 mL NaCl and 2.235 mL/50 mL glycerol. The experimental COD concentration was concurrent with the GA-predicted yield and led to 9.75 U/mL COD production, which was nearly two times higher than the yield (4.2 U/mL) obtained with the un-optimized medium. This is the very first time we are reporting the statistical versus artificial intelligence based modeling and optimization of COD production by Streptomyces sp. NCIM 5500.
Jahidin, A H; Megat Ali, M S A; Taib, M N; Tahir, N Md; Yassin, I M; Lias, S
This paper elaborates on the novel intelligence assessment method using the brainwave sub-band power ratio features. The study focuses only on the left hemisphere brainwave in its relaxed state. Distinct intelligence quotient groups have been established earlier from the score of the Raven Progressive Matrices. Sub-band power ratios are calculated from energy spectral density of theta, alpha and beta frequency bands. Synthetic data have been generated to increase dataset from 50 to 120. The features are used as input to the artificial neural network. Subsequently, the brain behaviour model has been developed using an artificial neural network that is trained with optimized learning rate, momentum constant and hidden nodes. Findings indicate that the distinct intelligence quotient groups can be classified from the brainwave sub-band power ratios with 100% training and 88.89% testing accuracies.
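As a rough sketch of the feature-extraction step described above (not the authors' code: the sampling rate, window length and exact band edges are assumed), sub-band powers and their ratios can be computed from an FFT energy spectrum:

```python
import numpy as np

fs = 128  # sampling rate in Hz (assumed)

def band_power(signal, fs, lo, hi):
    """Total spectral power within [lo, hi) Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].sum()

def sub_band_ratios(signal, fs):
    # Conventional EEG band edges (assumed to match the paper's theta/alpha/beta).
    theta = band_power(signal, fs, 4, 8)
    alpha = band_power(signal, fs, 8, 13)
    beta = band_power(signal, fs, 13, 30)
    total = theta + alpha + beta
    return {"theta": theta / total, "alpha": alpha / total, "beta": beta / total}

# Synthetic "relaxed-state" trace: a dominant 10 Hz alpha rhythm plus noise.
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.normal(size=t.size)
ratios = sub_band_ratios(trace, fs)
```

In the paper's pipeline, a vector of such ratios per subject would be the input to the classifying neural network; here the synthetic alpha-dominant trace simply shows the ratios behaving as expected.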
Tao, Lu-Qi; Tian, He; Liu, Ying; Ju, Zhen-Yi; Pang, Yu; Chen, Yuan-Quan; Wang, Dan-Yang; Tian, Xiang-Guang; Yan, Jun-Chao; Deng, Ning-Qin; Yang, Yi; Ren, Tian-Ling
Traditional sound sources and sound detectors are usually independent and discrete in the human hearing range. To minimize the device size and integrate it with wearable electronics, there is an urgent requirement of realizing the functional integration of generating and detecting sound in a single device. Here we show an intelligent laser-induced graphene artificial throat, which can not only generate sound but also detect sound in a single device. More importantly, the intelligent artificial throat will significantly assist for the disabled, because the simple throat vibrations such as hum, cough and scream with different intensity or frequency from a mute person can be detected and converted into controllable sounds. Furthermore, the laser-induced graphene artificial throat has the advantage of one-step fabrication, high efficiency, excellent flexibility and low cost, and it will open practical applications in voice control, wearable electronics and many other areas.
Soteris A. Kalogirou, [Higher Technical Institute, Nicosia (Cyprus). Department of Mechanical Engineering
Artificial intelligence (AI) systems are widely accepted as a technology offering an alternative way to tackle complex and ill-defined problems. They can learn from examples, are fault tolerant in the sense that they are able to handle noisy and incomplete data, are able to deal with non-linear problems, and once trained can perform prediction and generalization at high speed. They have been used in diverse applications in control, robotics, pattern recognition, forecasting, medicine, power systems, manufacturing, optimization, signal processing, and social/psychological sciences. They are particularly useful in system modeling such as in implementing complex mappings and system identification. AI systems comprise areas like expert systems, artificial neural networks, genetic algorithms, fuzzy logic and various hybrid systems, which combine two or more techniques. The major objective of this paper is to illustrate how AI techniques might play an important role in modeling and prediction of the performance and control of combustion processes. The paper outlines an understanding of how AI systems operate by way of presenting a number of problems in the different disciplines of combustion engineering. The various applications of AI are presented in a thematic rather than a chronological or any other order. Problems presented include two main areas: combustion systems and internal combustion (IC) engines. Combustion systems include boilers, furnaces and incinerators modeling and emissions prediction, whereas IC engines include diesel and spark ignition engines and gas engines modeling and control. Results presented in this paper are testimony to the potential of AI as a design tool in many areas of combustion engineering. 109 refs., 31 figs., 11 tabs.
Kalogirou, S.A. [Higher Technical Inst., Nicosia, Cyprus (Greece). Dept. of Mechanical Engineering
Artificial intelligence (AI) systems are widely accepted as a technology offering an alternative way to tackle complex and ill-defined problems. They can learn from examples, are fault tolerant in the sense that they are able to handle noisy and incomplete data, are able to deal with non-linear problems, and once trained can perform prediction and generalization at high speed. They have been used in diverse applications in control, robotics, pattern recognition, forecasting, medicine, power systems, manufacturing, optimization, signal processing, and social/psychological sciences. They are particularly useful in system modeling such as in implementing complex mappings and system identification. AI systems comprise areas like expert systems, artificial neural networks, genetic algorithms, fuzzy logic and various hybrid systems, which combine two or more techniques. The major objective of this paper is to illustrate how AI techniques might play an important role in modeling and prediction of the performance and control of combustion processes. The paper outlines an understanding of how AI systems operate by way of presenting a number of problems in the different disciplines of combustion engineering. The various applications of AI are presented in a thematic rather than a chronological or any other order. Problems presented include two main areas: combustion systems and internal combustion (IC) engines. Combustion systems include boilers, furnaces and incinerators modeling and emissions prediction, whereas IC engines include diesel and spark ignition engines and gas engines modeling and control. Results presented in this paper are testimony to the potential of AI as a design tool in many areas of combustion engineering. (author)
In supervised learning-based classification, ensembles have been successfully employed in different application domains. In the literature, many researchers have proposed different ensembles by considering different combination methods, training datasets, base classifiers, and many other factors. Artificial intelligence (AI)-based techniques play a prominent role in the development of ensembles for intrusion detection (ID) and have many benefits over other techniques. However, there is no comprehensive review of ensembles in general, and of AI-based ensembles for ID in particular, that examines their current research status with respect to the ID problem. Here, an updated review of ensembles and their taxonomies is presented in general. The paper also presents an updated review of various AI-based ensembles for ID, in particular during the last decade. The related studies of AI-based ensembles are compared by a set of evaluation metrics derived from (1) the architecture and approach followed; (2) the different methods utilized in different phases of ensemble learning; (3) other measures used to evaluate the classification performance of the ensembles. The paper also provides future directions for research in this area. The paper will help a better understanding of the different directions in which research on ensembles has been done in general, and specifically in the field of intrusion detection systems (IDSs).
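The combination step the survey reviews can be illustrated with a minimal majority-voting ensemble. The "connection records" below are synthetic and the per-feature threshold stumps are deliberately weak; this is a sketch of the voting idea only, not of any IDS the paper covers.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "connection records": 3 features, label 1 = attack (synthetic).
X = rng.normal(size=(300, 3))
y = (X.sum(axis=1) > 0).astype(int)

# Three deliberately weak base classifiers: one threshold stump per feature.
def stump(i):
    return lambda X: (X[:, i] > 0).astype(int)

base = [stump(0), stump(1), stump(2)]

def majority_vote(X):
    votes = np.stack([clf(X) for clf in base])    # shape (3, n_samples)
    return (votes.sum(axis=0) >= 2).astype(int)   # at least 2 of 3 say "attack"

acc_base = [float(np.mean(clf(X) == y)) for clf in base]
acc_ensemble = float(np.mean(majority_vote(X) == y))
```

Because each stump errs on different records, the vote corrects many individual mistakes, which is the core argument for ensembles that the review formalizes with its evaluation metrics.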
Dregalin, A. F.; Nazyrova, R. R.
The basic problems of 'thermodynamic intelligence' in personal computers are outlined. The concept of the thermodynamic intellect of personal computers is introduced for heat processes occurring in the engines of flying vehicles. In particular, the thermodynamic intellect of computers is determined by the possibility of deriving formal relationships between thermodynamic functions. In chemical thermodynamics, the concept of a characteristic function has been introduced.
Julio R. Gómez Sarduy
Energy management systems can be improved by using artificial intelligence techniques such as neural networks and genetic algorithms for modelling and optimising equipment and system energy consumption. This paper proposes modelling ball mill consumption, as used in the cement industry, from field variables. The regression model was based on artificial neural networks for predicting the electricity consumption of the mill's main drive and evaluating established consumption rate performance. This research showed the influence of the amount of pozzolanic ash, gypsum and clinker on a mill's power consumption; the dose determined according to the model, using a simple genetic algorithm, ensured minimum energy consumption. The estimated savings potential from the proposed dose was 36,600 kWh/year for mill number 1, representing $5,793.78/year and a 33,708 kg CO2/year reduction in the environmental impact of escaping gas.
In the IT industry, precisely estimating the effort, development cost, and schedule of each software project counts for much to a software company, so precise estimation of man power is becoming more important. In the past, IT companies estimated the work effort of man power through human experts using statistical methods; however, the outcomes have often been unsatisfying to the management level. Recently it has become an interesting topic whether computational intelligence techniques can do better in this field. This research uses computational intelligence techniques, such as the Pearson product-moment correlation coefficient method and one-way ANOVA to select key factors, and the K-Means clustering algorithm for project clustering, to estimate software project effort. The experimental results show that using computational intelligence techniques to estimate software project effort yields more precise and more effective estimates than traditional human experts did.
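A minimal sketch of the two steps named above, Pearson-based key-factor selection followed by K-Means project clustering: the project records, effort relationship and 0.3 correlation threshold are all invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy project records: 4 candidate factors, target effort (person-months).
# Only factors 0 and 1 actually drive effort; 2 and 3 are noise.
n = 120
factors = rng.normal(size=(n, 4))
effort = 3.0 * factors[:, 0] + 2.0 * factors[:, 1] + 0.1 * rng.normal(size=n)

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

# Key-factor selection: keep factors strongly correlated with effort.
corrs = [abs(pearson(factors[:, j], effort)) for j in range(4)]
key = [j for j, c in enumerate(corrs) if c > 0.3]

# Simple K-Means (k=2) on the selected key factors.
def kmeans(X, k=2, iters=20, seed=0):
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        centers = np.stack([X[labels == c].mean(axis=0) if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return labels, centers

labels, centers = kmeans(factors[:, key])
```

In the study's setting, each cluster would then get its own effort-estimation model, so a new project is first assigned to the nearest cluster and estimated against similar past projects.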
This book presents an Introduction and 11 independent chapters, which are devoted to various new approaches of intelligent image processing and analysis. The book also presents new methods, algorithms and applied systems for intelligent image processing, on the following basic topics: Methods for Hierarchical Image Decomposition; Intelligent Digital Signal Processing and Feature Extraction; Data Clustering and Visualization via Echo State Networks; Clustering of Natural Images in Automatic Image Annotation Systems; Control System for Remote Sensing Image Processing; Tissue Segmentation of MR Brain Images Sequence; Kidney Cysts Segmentation in CT Images; Audio Visual Attention Models in Mobile Robots Navigation; Local Adaptive Image Processing; Learning Techniques for Intelligent Access Control; Resolution Improvement in Acoustic Maps. Each chapter is self-contained with its own references. Some of the chapters are devoted to the theoretical aspects while the others are presenting the practical aspects and the...
This paper aims to analyze the different forms of intelligence within organizations in a systemic and inclusive vision, in order to conceptualize an integrated environment based on Distributed Artificial Intelligence (DAI) and Collective Intelligence (CI). In this way we effectively shift from the classical approaches of connecting people with people using collaboration tools (which allow people to work together, such as videoconferencing or email, groupware in virtual space, forums, and workflow) and of connecting people with content management knowledge (taxonomies and document classification, ontologies or thesauri, search engines, portals) to the current approaches of connecting people with the (automatic) use of operational knowledge to solve problems and make decisions based on intellectual cooperation. The best way to use collective intelligence is based on knowledge mobilization and semantic technologies. We must not let computers imitate people, but have them support people in thinking and developing their ideas within a group. CI helps people to think together, while DAI tries to support people so as to limit human error. Within an organization, to manage CI is to combine instruments like Semantic Technologies (STs), knowledge mobilization methods for developing Knowledge Management (KM) strategies, and the processes that promote connection and collaboration between individual minds in order to achieve collective objectives, perform a task, or solve increasingly complex economic problems.
Banerjee, Amit Kumar; Ravi, Vadlamani; Murty, U S N; Sengupta, Neelava; Karuna, Batepatti
Standard molecular experimental methodologies and mathematical procedures often fail to answer many phylogeny- and classification-related issues. Modern artificial intelligence-based techniques, such as radial basis functions, genetic algorithms, artificial neural networks, and support vector machines, have ample potential in this regard. Reliance on a large number of essential parameters aids robustness, reliability, and accuracy, as opposed to a single molecular parameter. This study was conducted with a dataset of computed protein physicochemical properties belonging to 20 different bacterial genera. A total of 57 sequential and structural parameters derived from protein sequences were considered for the initial classification. Feature-selection-based techniques were employed to find the most important features influencing the dataset. Various amino acids, hydrophobicity, relative sulfur percentage, and codon number were selected as important parameters during the study. Comparative analyses were performed on the RapidMiner data mining platform. The support vector machine proved to be the best method, with a maximum accuracy of more than 91%.
Haque, Shafiul; Khan, Saif; Wahid, Mohd; Dar, Sajad A; Soni, Nipunjot; Mandal, Raju K; Singh, Vineeta; Tiwari, Dileep; Lohani, Mohtashim; Areeshi, Mohammed Y; Govender, Thavendran; Kruger, Hendrik G; Jawed, Arshad
For a commercially viable recombinant intracellular protein production process, efficient cell lysis and protein release is a major bottleneck. The recovery of a recombinant protein, cholesterol oxidase (COD), was studied in a continuous bead milling process. A full factorial response surface methodology (RSM) design was employed and compared to artificial neural networks coupled with a genetic algorithm (ANN-GA). Significant process variables, cell slurry feed rate (A), bead load (B), cell load (C), and run time (D), were investigated and optimized for maximizing COD recovery. RSM predicted an optimum feed rate of 310.73 mL/h, bead loading of 79.9% (v/v), cell loading OD600nm of 74, and run time of 29.9 min, with a recovery of ~3.2 g/L. ANN-GA predicted a maximum COD recovery of ~3.5 g/L at an optimum feed rate (mL/h) of 258.08, bead loading (%, v/v) of 80, cell loading (OD600nm) of 73.99, and run time of 32 min. An overall 3.7-fold increase in productivity is obtained when compared to a batch process. Optimization and comparison of statistical vs. artificial intelligence techniques in a continuous bead milling process has been attempted for the very first time in our study. We were able to successfully represent the complex non-linear multivariable dependence of enzyme recovery on bead milling parameters. Quadratic second-order response functions are not flexible enough to represent such complex non-linear dependence. An ANN, being a summation function over multiple layers, is capable of representing the complex non-linear dependence of the variables, in this case enzyme recovery as a function of bead milling parameters. Since a GA can optimize even discontinuous functions, the present study is a clear example of using machine learning (ANN) in combination with evolutionary optimization (GA) to represent undefined biological functions, as is the case for common industrial processes involving biological moieties.
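The ANN-GA idea can be illustrated with a bare-bones genetic algorithm searching the four bead milling variables for maximum recovery. The ANN surrogate is replaced by a made-up smooth function with a known optimum near the paper's reported ANN-GA operating point; the function, bounds, and GA settings are all assumptions for illustration only.

```python
import random

random.seed(0)

# Bounds for feed rate (mL/h), bead load (%), cell load (OD600), run time (min)
BOUNDS = [(100, 400), (50, 90), (40, 90), (10, 40)]

def surrogate_recovery(p):
    # Hypothetical stand-in for the trained ANN: peak near (260, 80, 74, 32)
    target = (260, 80, 74, 32)
    return -sum((v - t) ** 2 for v, t in zip(p, target))

def clip(v, lo, hi):
    return max(lo, min(hi, v))

def evolve(pop_size=40, gens=60):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=surrogate_recovery, reverse=True)
        parents = pop[: pop_size // 2]                    # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            i = random.randrange(len(BOUNDS))             # mutate one gene
            lo, hi = BOUNDS[i]
            child[i] = clip(child[i] + random.gauss(0, 0.05 * (hi - lo)), lo, hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=surrogate_recovery)

best = evolve()
print([round(v, 1) for v in best])
```

In the actual workflow, `surrogate_recovery` would be the fitted ANN, so the GA optimizes the learned model rather than running new bead milling experiments.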
de Callataÿ, A
How does the mind work? How is data stored in the brain? How does the mental world connect with the physical world? The hybrid system developed in this book shows a radically new view on the brain. Briefly, in this model memory remains permanent by changing the homeostasis rebuilding the neuronal organelles. These transformations are approximately abstracted as all-or-none operations. Thus the computer-like neural systems become plausible biological models. This illustrated book shows how artificial animals with such brains learn invariant methods of behavior control from their repeated action
Samui, Pijush; Kim, Dookie
This paper proposes to use least square support vector machine (LSSVM) and relevance vector machine (RVM) for prediction of the magnitude (M) of induced earthquakes based on reservoir parameters. Comprehensive parameter (E) and maximum reservoir depth (H) are used as input variables of the LSSVM and RVM. The output of the LSSVM and RVM is M. Equations have been presented based on the developed LSSVM and RVM. The developed RVM also gives variance of the predicted M. A comparative study has been carried out between the developed LSSVM, RVM, artificial neural network (ANN), and linear regression models. Finally, the results demonstrate the effectiveness and efficiency of the LSSVM and RVM models.
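A minimal sketch of LSSVM regression may clarify why it is attractive here: in Suykens' formulation, training reduces to solving one linear system. The (E, H)-style inputs, RBF kernel width, and regularization value below are synthetic illustration values, not the paper's reservoir data or tuned settings.

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LSSVM system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(X)
    K = np.array([[rbf(X[i], X[j], sigma) for j in range(n)] for i in range(n)])
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(X, alpha, b, x, sigma=1.0):
    return sum(a * rbf(xi, x, sigma) for a, xi in zip(alpha, X)) + b

# Toy (E, H)-like inputs with target M = x1 + x2
X = np.array([[0.1, 0.2], [0.4, 0.5], [0.6, 0.3], [0.9, 0.8], [0.2, 0.9]])
y = np.array([v[0] + v[1] for v in X])
alpha, b = lssvm_train(X, y)
pred = lssvm_predict(X, alpha, b, np.array([0.5, 0.5]))
print(round(float(pred), 2))
```

Unlike a standard SVM, no quadratic program is needed; the trade-off is that every training point becomes a support vector.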
Gonzalez, Luis F; Montes, Glen A; Puig, Eduard; Johnson, Sandra; Mengersen, Kerrie; Gaston, Kevin J
Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAV), artificial intelligence and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline to perform object detection, classification and tracking of wildlife in forest or open areas. The system is tested on thermal video data from ground based and test flight footage, and is found to be able to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification.
Kerr, D.R.; Thompson, L.G.; Shenoi, S.
The primary goal of the project is to develop a user-friendly computer program to integrate geological and engineering information using Artificial Intelligence (AI) methodology. The project is restricted to fluvially dominated deltaic environments. The static information used in constructing the reservoir description includes well core and log data. Using the well core and log data, the program identifies the marker beds and the types of sand facies and, in turn, develops correlations between wells. Using the correlations and sand facies, the program is able to generate multiple realizations of sand facies and petrophysical properties at interwell locations using geostatistical techniques. The generated petrophysical properties are used as input in the next step, where the production data are honored. By adjusting the petrophysical properties, a match between the simulated and the observed production rates is obtained. Although all the components within the overall system are functioning, the integration of dynamic data may not be practical due to single-phase flow limitations and computationally intensive algorithms. Future work needs to concentrate on making dynamic data integration computationally efficient.
Kapoor, Vinita; Bakhshi, A. K.
Using the ab initio Hartree-Fock crystal orbital results of three donor-acceptor polymers, PFUCO ([A]x), PSIFCO ([B]x) and PSIFCH ([C]x), the electronic properties of their novel quasi-one-dimensional copolymers (AmBn)x and (AmCn)x were investigated using an artificial intelligence technique, the genetic algorithm, in combination with the negative factor counting and inverse iteration methods. The repeat units in PFUCO consist of bifuran bridged by electron-accepting groups Y (>C=O); in PSIFCO and PSIFCH, the repeat units consist of bicyclopentadifluorosilole bridged by electron-accepting groups Y (Y is >C=O in PSIFCO, and >C=CH2 in PSIFCH). The trends in the electronic properties of the copolymers (AmBn)x and (AmCn)x as a function of the block sizes m and n, and of the arrangement of units (periodic and random) in the copolymer chain, are also discussed. The results obtained are important guidelines for the molecular design of copolymers with tailor-made conduction properties.
Meng, Hui; Sheng, J.; Yang, W.; Pu, Y.
Holographic PIV (HPIV) is a promising 3D velocity field measurement technique providing the high spatial-temporal resolution needed for understanding complex and turbulent flows. An HPIV system, combining in-line recording and off-axis viewing (IROV) holography with the Heuristic Morphology Particle Pairing (HMPP) method, is being developed in this work. Unlike 2D PIV, HPIV instantaneously records a volume of particle images through holographic imaging. Its data processing involves special difficulties such as speckle noise, sparse pairs and large data sets. The HMPP algorithm is an adaptive parallel processing scheme applying artificial intelligence searching theory. Based on the similar morphology of a particle group at successive instants separated by a small interval, HMPP matches a group of particle images between double exposures and provides velocity vectors for individual particle pairs, yielding much higher spatial resolution than conventional correlation algorithms and lower measurement error caused by large velocity gradients. Taking advantage of IROV and HMPP, the system being developed appears highly promising as a practical HPIV configuration.
The aim of this contribution is to outline the possibilities of applying artificial neural networks to the prediction of mechanical steel properties after heat treatment and to judge their prospective use in this field. The models achieved enable the prediction of final mechanical material properties on the basis of the decisive parameters influencing these properties. By applying artificial intelligence methods in combination with mathematical-physical analysis methods, it will be possible to create the basis for designing a system for the continuous rationalization of existing and newly developing industrial technologies.
This research involved developing a surgical robot assistant using an articulated PUMA robot running on a linear or nonlinear axis. The research concentrated on studying an artificial-intelligence-based switching computed torque controller for localization of an endoscopic tool. Results show that the switching artificial nonlinear control algorithm can yield a stable controller. For this system, error was used as the performance metric. Positioning of the endoscopic manipulator relative to the world coordinate frame was possible to within 0.05 inch. Error in maintaining a constant point in space is evident during repositioning; however, this was caused by limitations in the robot arm.
Traitement des diagraphies acoustiques. Première partie : application de techniques issues de l'intelligence artificielle au pointé des diagraphies acoustiques / Full Waveform Acoustic Data Processing. Part One: An Artificial Intelligence Approach for the Picking of Waves on Full-Waveform Acoustic Data
Mari J. L.
The different waves and the effects of interferences, lithological variations and reflections of waves are illustrated for common-offset trace collections (Figs. 4 to 8). Full waveform acoustic data processing mainly involves wave arrival-time picking and wavefield separation. The first part of this paper is devoted to arrival-time picking. The second part will be devoted to wave separation. The third part will present a case history on the use of acoustic logs. The great amount of full-waveform sonic data leads geophysicists and log analysts to implement automatic algorithms for picking the wave arrival times. After a review of the main conventional picking methods (Figs. 9 to 12), an automatic routine based on an Artificial Intelligence approach is described. In this routine, which is a stand-alone multichannel algorithm, the reasoning of the geophysicist picking a particular wave on common-offset trace collections of full-waveform data is expressed in the form of rules. Identification of arrivals is based on criteria of similarity of shape and lateral continuity of the waves in contiguous traces. The main parameters used are amplitude, frequency and lateral correlation. A positive or negative peak is picked, depending on the polarity of the wave. The geophysicist indicates on the first record the particular wave to be picked by giving its approximate arrival time. Then the algorithm automatically picks the wave arrival times on all the other records. When picking is performed on one common-offset trace collection, the A* algorithm is used to search for the optimal path in the graph where the nodes are extrema on the traces and the arcs are the possible links between the extrema on contiguous traces. The principle of the A* algorithm is shown in Fig. 13 for a simple example.
Since picking for one common-offset trace collection must be coherent with picking for the others, rules expressing the coherency conditions are added to the algorithm for a more robust selection of
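A generic sketch of the search step may help. In the picking routine, the graph nodes would be waveform extrema and the arcs the admissible links between contiguous traces; below, a small grid with a Manhattan-distance heuristic stands in for that graph, purely to show the mechanics of an A*-style optimal-path search.

```python
import heapq

# Generic A* search on a grid. '#' cells are blocked; moves cost 1.
# The Manhattan distance is an admissible heuristic here, so the first
# time the goal is popped from the heap, its cost is optimal.

def astar(grid, start, goal):
    """grid: list of strings. Returns shortest path length, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = ["....",
        ".##.",
        "....",
        ".#.."]
print(astar(grid, (0, 0), (3, 3)))
```

In the picking application, the step cost would instead encode amplitude and shape similarity between linked extrema, and coherency rules across trace collections would prune the candidate arcs.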
This paper presents an application of an Artificial Neural Network (ANN) and a Genetic Algorithm (GA) for system identification and controller tuning in a pH process. The ANN-based approach is applied to estimate the system parameters. Once the variations in parameters are identified, the GA optimally tunes the controller. The simulation results show that the proposed intelligent technique is effective in identifying the parameters and results in minimum values of the Integral Square Error, peak overshoot and settling time as compared to conventional methods. The experimental results show that the performance is superior and matches favorably with the simulation results.
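The GA tuning step can be sketched as follows: a genetic algorithm minimizes the Integral Square Error of a discrete PID loop. The first-order plant below is a made-up stand-in for the identified pH process, and the GA settings are arbitrary illustration values, not the paper's.

```python
import random

random.seed(1)
DT = 0.1   # sample time (s)

def ise(gains, steps=200):
    """Integral Square Error of a discrete PID loop on a toy first-order plant."""
    kp, ki, kd = gains
    # prev_err starts at the initial error to avoid a derivative kick at t=0
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - y                              # unit step setpoint
        integ += err * DT
        u = kp * err + ki * integ + kd * (err - prev_err) / DT
        prev_err = err
        y += DT * (-y + u)                         # plant: dy/dt = -y + u
        cost += err * err * DT
        if abs(y) > 1e6:                           # unstable gains: huge penalty
            return 1e9
    return cost

def tune(pop_size=30, gens=40):
    pop = [[random.uniform(0.0, 5.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=ise)
        elite = pop[: pop_size // 2]               # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            children.append([max(0.0, (ga + gb) / 2 + random.gauss(0, 0.2))
                             for ga, gb in zip(a, b)])
        pop = elite + children
    return min(pop, key=ise)

best = tune()
print(round(ise(best), 3))
```

In the paper's scheme, the plant model would be the ANN-identified process, re-estimated online, with the GA re-tuning the gains whenever the identified parameters drift.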
Liao, Pei-Hung; Hsu, Pei-Ti; Chu, William; Chu, Woei-Chyn
This study applied artificial intelligence to help nurses address problems and receive instructions through information technology. Nurses make diagnoses according to professional knowledge, clinical experience, and even instinct. Without comprehensive knowledge and thinking, diagnostic accuracy can be compromised and decisions may be delayed. We used a back-propagation neural network and other tools for data mining and statistical analysis. We further compared the prediction accuracy of the previous methods with an adaptive-network-based fuzzy inference system and the back-propagation neural network, identifying differences in the questions and in nurse satisfaction levels before and after using the nursing information system. This study investigated the use of artificial intelligence to generate nursing diagnoses. The percentage of agreement between diagnoses suggested by the information system and those made by nurses was as much as 87 percent. When patients are hospitalized, we can calculate the probability of various nursing diagnoses based on certain characteristics.
Moravčík, Matej; Schmid, Martin; Burch, Neil; Lisý, Viliam; Morrill, Dustin; Bard, Nolan; Davis, Trevor; Waugh, Kevin; Johanson, Michael; Bowling, Michael
Artificial intelligence has seen several breakthroughs in recent years, with games often serving as milestones. A common feature of these games is that players have perfect information. Poker is the quintessential game of imperfect information, and a longstanding challenge problem in artificial intelligence. We introduce DeepStack, an algorithm for imperfect information settings. It combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition that is automatically learned from self-play using deep learning. In a study involving 44,000 hands of poker, DeepStack defeated, with statistical significance, professional poker players in heads-up no-limit Texas hold'em. The approach is theoretically sound and is shown to produce strategies that are more difficult to exploit than prior approaches.
Wallace, Scott A.; McCartney, Robert; Russell, Ingrid
Project MLeXAI (Machine Learning eXperiences in Artificial Intelligence (AI)) seeks to build a set of reusable course curricula and hands-on laboratory projects for the artificial intelligence classroom. In this article, we describe two game-based projects from the second phase of Project MLeXAI: Robot Defense, a simple real-time strategy game, and Checkers, a classic turn-based board game. From the instructors' perspective, we examine aspects of design and implementation as well as the challenges and rewards of using the curricula. We explore students' responses to the projects via the results of a common survey. Finally, we compare the student perceptions from the game-based projects to non-game-based projects from the first phase of Project MLeXAI.
Motta Cabrera, David Francisco
This thesis focuses on computer-related and lighting energy consumption in post-secondary educational institutions. In this respect, artificial intelligence and data association mining are proposed as tools to identify and reduce energy waste. First, an artificial intelligence-based method for forecasting computer usage is proposed. Based on the models' forecast, workstations can be turned on and off, in order to strike a balance between energy savings and user comfort. The models are evaluated on different datasets and their results compared to commercially available alternatives. Second, a data association mining-based approach is proposed to uncover possible relationships between occupancy patterns and lighting-related energy waste in classrooms. A wireless data collection system is used to log data from both lighting consumption and occupancy states during a year. Next, energy savings results of using the proposed approach are compared to those of an occupancy-activated lighting control system for classrooms.
Artificial Intelligence and Tutoring Systems: Computational and Cognitive Approaches to the Communication of Knowledge focuses on the cognitive approaches, methodologies, principles, and concepts involved in the communication of knowledge. The publication first elaborates on knowledge communication systems, basic issues, and tutorial dialogues. Concerns cover natural reasoning and tutorial dialogues, shift from local strategies to multiple mental models, domain knowledge, pedagogical knowledge, implicit versus explicit encoding of knowledge, knowledge communication, and practical and theoretic
Uganda, Tanzania, the Sudan, South Sudan, Rwanda, Kenya, Ethiopia, Egypt, DR Congo, and Burundi all make entitlement claims to the ecological system of the Nile Basin. This region is rich in resources, yet prone to interstate conflict, drought, and other vulnerabilities. Water resource conservation systems, alternative purification systems, and rainfall stimulation systems programmed by artificial intelligence can facilitate the establishment of transboundary partnerships that red...
The Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI-15) was held in January 2015 in Austin, Texas (USA). The conference program was cochaired by Sven Koenig and Blai Bonet. This report contains reflective summaries of the main conference, the robotics program, the AI and robotics workshop, the virtual agent exhibition, the what's hot track, the competition panel, the senior member track, student and outreach activities, the student abstract and poster program, the doctoral conso...
In recent years broad community of researchers has emerged, focusing on the original ambitious goals of the AI field - the creation and study of software or hardware systems with general intelligence comparable to, and ultimately perhaps greater than, that of human beings. This paper surveys this diverse community and its progress. Approaches to defining the concept of Artificial General Intelligence (AGI) are reviewed including mathematical formalisms, engineering, and biology inspired perspectives. The spectrum of designs for AGI systems includes systems with symbolic, emergentist, hybrid and universalist characteristics. Metrics for general intelligence are evaluated, with a conclusion that, although metrics for assessing the achievement of human-level AGI may be relatively straightforward (e.g. the Turing Test, or a robot that can graduate from elementary school or university), metrics for assessing partial progress remain more controversial and problematic.
Klašnja-Milićević, Aleksandra; Ivanović, Mirjana; Budimac, Zoran; Jain, Lakhmi C
This monograph provides a comprehensive research review of intelligent techniques for the personalisation of e-learning systems. Special emphasis is given to intelligent tutoring systems as a particular class of e-learning systems, which support and improve the learning and teaching of domain-specific knowledge. A new approach to performing effective personalisation based on Semantic Web technologies, achieved in a tutoring system, is presented. This approach incorporates a recommender system based on collaborative tagging techniques that adapts to the interests and knowledge level of students. These innovations are important contributions of this monograph. Theoretical models and techniques are illustrated on a real personalised tutoring system for teaching the Java programming language. The monograph is directed to students and researchers interested in e-learning and personalisation techniques.
Metabolic syndrome is a worldwide public health problem and a serious threat to people's health and lives. Understanding the relationship between metabolic syndrome and physical symptoms is a difficult and challenging task, and few studies have been performed in this field. It is important to classify adults who are at high risk of metabolic syndrome without having to use a biochemical index and, likewise, to develop technology with a high economic rate of return that simplifies the complexity of this detection. In this paper, an artificial intelligence model was developed to identify adults at risk of metabolic syndrome based on physical signs; this model achieved more powerful classification capacity than the PCLR (principal component logistic regression) model. A case study was performed based on physical signs data, without using a biochemical index, collected from the staff of Lanzhou Grid Company in Gansu province of China. The results show that the developed artificial intelligence model is an effective classification system for identifying individuals at high risk of metabolic syndrome.
Moosavi, Vahid; Malekinezhad, Hossein; Shirmohammadi, Bagher
This study was carried out to evaluate wavelet-artificial intelligence hybrid models for producing fractional snow cover maps. First, cloud cover was removed from MODIS data and cloud-free images were produced. SVM-based binary-classified ETM+ images were then used as reference maps in order to obtain training and test data for the sub-pixel classification models. ANN- and ANFIS-based modeling was performed using raw data (without wavelet-based preprocessing). In the next step, several mother wavelets and levels were used to decompose the original data and obtain wavelet coefficients. The decomposed data were then used for further modeling. ANN, ANFIS, wavelet-ANN and wavelet-ANFIS models were compared to evaluate the effect of wavelet transformation on the ability of the artificial intelligence models. It was demonstrated that wavelet transformation as a preprocessing approach can significantly enhance the performance of ANN and ANFIS models. This study indicated an overall accuracy of 92.45% for the wavelet-ANFIS model, 86.13% for wavelet-ANN, 72.23% for the ANFIS model and 66.78% for the ANN model. In fact, hybrid wavelet-artificial intelligence models can extract the characteristics of the original signals (i.e. model inputs) accurately by decomposing non-stationary, complex signals into several stationary, simpler signals. The positive effect of fuzzification as well as wavelet transformation in the wavelet-ANFIS model was also confirmed.
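The wavelet preprocessing step can be illustrated with one level of a Haar discrete wavelet transform, which splits a signal into a smoother approximation and a detail part before a model such as ANN or ANFIS is fitted. The study used MODIS imagery and several mother wavelets; the 1-D toy signal below only shows the decomposition itself.

```python
import math

# Single-level Haar DWT: pairs of samples are combined into an
# approximation (scaled average) and a detail (scaled difference).

def haar_dwt(signal):
    """One Haar level; len(signal) must be even."""
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: interleave reconstructed sample pairs."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])
    return out

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
A, D = haar_dwt(x)
# The transform is invertible, so no information is lost by the split
assert all(abs(u - v) < 1e-9 for u, v in zip(haar_idwt(A, D), x))
print(A, D)
```

A hybrid model would feed the approximation and detail series (possibly over several levels) to the ANN/ANFIS as separate, more stationary inputs.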
Switchgear and protection are two vital terms in electrical power systems. The components of any switchgear need well-designed protection schemes within a composite power system. Many researchers have worked on artificially intelligent breakers, but fuzzy theory has so far been largely absent from the Buchholz relay. This paper argues for an Artificial Intelligent Buchholz (AIB) relay whose inputs are the level of transformer oil and the rate of oil rise due to overcurrent. To fit the transformer tank, the level of transformer oil and the rate at which its volume increases must be measured. The constructional features of a rate-of-rise pressure relay are taken into account in this work, along with the working principle of a Buchholz relay. A change in the inputs produces a crisp output that changes the contact state from normally closed to normally open by tripping via an alarm circuit, just as the basic Buchholz relay does. The entire concept has been developed in the MATLAB environment using a Mamdani-based Fuzzy Inference System. Experimental output data validate the implementation of transformer protection using the fuzzy-logic-based Artificial Intelligent Buchholz relay.
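A minimal Mamdani-style inference sketch for the AIB relay idea: two inputs (oil level and rate of oil rise) drive one output (trip degree) through triangular membership functions, min-AND rules, and centroid defuzzification. The membership ranges and rules below are invented for illustration and are not the paper's MATLAB design.

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(oil_level, rise_rate):
    # Fuzzify inputs (universes normalized to [0, 1]; ranges are assumptions)
    level_low = tri(oil_level, -0.1, 0.0, 0.5)
    level_ok  = tri(oil_level, 0.3, 1.0, 1.1)
    rate_high = tri(rise_rate, 0.4, 1.0, 1.1)
    rate_low  = tri(rise_rate, -0.1, 0.0, 0.6)

    # Mamdani rules: min for AND; each rule clips its output fuzzy set
    trip_strength = min(level_low, rate_high)   # low oil AND fast rise -> trip
    hold_strength = min(level_ok, rate_low)     # normal oil AND slow rise -> hold

    # Aggregate the clipped output sets and defuzzify by centroid over [0, 1]
    num = den = 0.0
    for i in range(101):
        z = i / 100
        mu = max(min(trip_strength, tri(z, 0.5, 1.0, 1.5)),   # "trip" set
                 min(hold_strength, tri(z, -0.5, 0.0, 0.5)))  # "hold" set
        num += z * mu
        den += mu
    return num / den if den else 0.0

print(round(infer(0.1, 0.9), 2))   # low oil level, fast rise
print(round(infer(0.95, 0.1), 2))  # normal conditions
```

In the relay, the defuzzified trip degree would be thresholded to drive the alarm circuit and change the contact state.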
Brézillon, P J; Zaraté, P; Saci, F
We present an approach for designing a knowledge-based system, called Sequence Acquisition In Context (SAIC), that will be able to cooperate with a biologist in the analysis of DNA sequences. The main task of the system is the acquisition of the expert knowledge that the biologist uses for resolving ambiguities in gel autoradiograms, with the aim of re-using it later to resolve similar ambiguities. The various types of expert knowledge constitute what we call the contextual knowledge of the sequence analysis. Contextual knowledge deals with the unavoidable problems that are common in the study of living material (e.g., noise in data, difficulties of observation). Indeed, the analysis of DNA sequences from autoradiograms belongs to an emerging and promising area of investigation, namely reasoning with images. The SAIC project is developed in a theoretical framework that is shared with other applications. Not all tasks have the same importance in each application. We use this observation to design an intelligent assistant system with three applications. In the SAIC project, we focus on knowledge acquisition, human-computer interaction and explanation. The project will benefit research in the two other applications. We also discuss the SAIC project in the context of large international projects that aim to re-use and share knowledge in a repository.
Bannwart, Lisiane Cristina; Goiato, Marcelo Coelho; dos Santos, Daniela Micheline; Moreno, Amália; Pesqueira, Aldiéris Alves; Haddad, Marcela Filié; Andreotti, Agda Marobo; de Medeiros, Rodrigo Antonio
Ocular prostheses are important determinants of their users' aesthetic recovery and self-esteem. With use, the longevity of ocular prostheses is strongly affected by instability of the iris color due to polymerization. The goal of this study is to examine how the color of the artificial iris button is affected by different wear techniques and by the application of varnish following polymerization of the colorless acrylic resin that covers the colored paint. We produced 60 samples (n=10 per group) according to the wear technique applied: conventional technique without varnish (PE); conventional technique with varnish (PEV); technique involving a prefabricated cap without varnish (CA); technique involving a prefabricated cap with varnish (CAV); technique involving inverted painting without varnish (PI); and technique involving inverted painting with varnish (PIV). Color readings using a spectrophotometer were taken before and after polymerization. We submitted the data obtained to analysis of variance and Tukey's test (P<0.05). The color test shows significant changes after polymerization in all groups. The PE and PI techniques have clinically acceptable values of ΔE, independent of whether varnish is applied to protect the paint. The PI technique produces the least color change, whereas the PE and CA techniques significantly improve color stability.
Najmaei, Nima; Kermani, Mehrdad R
The integration of industrial robots into the human workspace presents a set of unique challenges. This paper introduces a new sensory system for modeling, tracking, and predicting human motions within a robot workspace. A reactive control scheme to modify a robot's operations for accommodating the presence of the human within the robot workspace is also presented. To this end, a special class of artificial neural networks, namely, self-organizing maps (SOMs), is employed for obtaining a superquadric-based model of the human. The SOM network receives information of the human's footprints from the sensory system and infers necessary data for rendering the human model. The model is then used in order to assess the danger of the robot operations based on the measured as well as predicted human motions. This is followed by the introduction of a new reactive control scheme that results in the least interferences between the human and robot operations. The approach enables the robot to foresee an upcoming danger and take preventive actions before the danger becomes imminent. Simulation and experimental results are presented in order to validate the effectiveness of the proposed method.
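The SOM step can be sketched in a few lines: a small one-dimensional lattice of units learns to cover 2-D input data via the best-matching-unit update rule. The paper feeds real footprint sensor data to its SOM for superquadric human modeling; the random clusters below are stand-ins that show the update rule only.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=8, epochs=50, lr0=0.5, radius0=3.0):
    """1-D SOM: units live on a line; weights live in the input space."""
    W = rng.uniform(data.min(), data.max(), size=(n_units, data.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)               # decaying learning rate
        radius = 1.0 + radius0 * (1 - epoch / epochs) # shrinking neighborhood
        for x in data[rng.permutation(len(data))]:
            bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # best matching unit
            lattice_dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-(lattice_dist ** 2) / (2 * radius ** 2)) # neighborhood weight
            W += lr * h[:, None] * (x - W)            # pull BMU and neighbors toward x
    return W

# Two synthetic 2-D clusters standing in for footprint measurements
data = np.vstack([rng.normal(0.0, 0.1, (30, 2)), rng.normal(2.0, 0.1, (30, 2))])
W = train_som(data)
qe = float(np.mean([np.min(np.linalg.norm(W - x, axis=1)) for x in data]))
print(round(qe, 3))   # mean quantization error
```

Because neighboring lattice units receive correlated updates, the trained map preserves topology, which is what makes SOMs useful for tracking smoothly moving inputs such as footprints.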
Hockaday, Stephen; Kuhlenschmidt, Sharon (Editor)
The objective of the workshop was to explore the role of human factors in facilitating the introduction of artificial intelligence (AI) to advanced air traffic control (ATC) automation concepts. AI is an umbrella term which is continually expanding to cover a variety of techniques where machines are performing actions taken based upon dynamic, external stimuli. AI methods can be implemented using more traditional programming languages such as LISP or PROLOG, or they can be implemented using state-of-the-art techniques such as object-oriented programming, neural nets (hardware or software), and knowledge based expert systems. As this technology advances and as increasingly powerful computing platforms become available, the use of AI to enhance ATC systems can be realized. Substantial efforts along these lines are already being undertaken at the FAA Technical Center, NASA Ames Research Center, academic institutions, industry, and elsewhere. Although it is clear that the technology is ripe for bringing computer automation to ATC systems, the proper scope and role of automation are not at all apparent. The major concern is how to combine human controllers with computer technology. A wide spectrum of options exists, ranging from using automation only to provide extra tools to augment decision making by human controllers to turning over moment-by-moment control to automated systems and using humans as supervisors and system managers. Across this spectrum, it is now obvious that the difficulties that occur when tying human and automated systems together must be resolved so that automation can be introduced safely and effectively. The focus of the workshop was to further explore the role of injecting AI into ATC systems and to identify the human factors that need to be considered for successful application of the technology to present and future ATC systems.
Mishra, Dhirendra; Goyal, P.; Upadhyay, Abhishek
Delhi has been listed as the worst performer worldwide with respect to alarmingly high levels of haze episodes, exposing its residents to a host of diseases including respiratory disease, chronic obstructive pulmonary disorder and lung cancer. This study aimed to analyze the haze episodes in a year and to develop forecasting methodologies for them. The air pollutants, e.g., CO, O3, NO2, SO2 and PM2.5, as well as meteorological parameters (pressure, temperature, wind speed, wind direction index, relative humidity, visibility, dew point temperature, etc.) have been used in the present study to analyze the haze episodes in the Delhi urban area. The nature of these episodes, their possible causes, and their major features are discussed in terms of fine particulate matter (PM2.5) and relative humidity. The correlation matrix shows that temperature, pressure, wind speed, O3, and dew point temperature are the dominating variables for PM2.5 concentrations in Delhi. The hour-by-hour analysis of past data patterns at different monitoring stations suggests that haze hours occurred during approximately 48% of the total observed hours of the year 2012 over the Delhi urban area. The haze hour forecasting models in terms of PM2.5 concentrations (more than 50 μg/m3) and relative humidity (less than 90%) have been developed through artificial-intelligence-based Neuro-Fuzzy (NF) techniques and compared with other modeling techniques, e.g., multiple linear regression (MLR) and artificial neural network (ANN). The haze hour data for nine months, i.e., January to September, were chosen for training, and the remaining three months, i.e., October to December of the year 2012, were chosen for validation of the developed models. The forecasted results are compared with the observed values using different statistical measures, e.g., correlation coefficient (R), normalized mean square error (NMSE), fractional bias (FB) and index of agreement (IOA). The performed
Khashayar Danesh Narooei
Today, in most metal machining processes, Computer Numerical Control (CNC) machine tools have become very popular due to their efficiency and repeatability in achieving high-accuracy positioning. One of the factors that govern productivity is the tool path traveled while cutting a workpiece. It has been proved that determination of optimal cutting parameters can enhance machining results to reach high efficiency and minimize the machining cost. In various publications and articles, scientists and researchers have adapted several Artificial Intelligence (AI) methods or hybrid methods for tool path optimization, such as Genetic Algorithms (GA), Artificial Neural Networks (ANN), Artificial Immune Systems (AIS), Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO). This study presents a review of research on tool path optimization with different types of AI methods, showing the capability of using different types of optimization methods in the CNC machining process.
2015. AFRL-RH-WP-TR-2015-0037, Topological Entropy Measure of Artificial Grammar Complexity for Use in Designing Experiments on Human Performance in Intelligence, Surveillance, and Reconnaissance (ISR) Tasks. Richard Warren, Ph.D., Human Analyst Augmentation Branch, 711 Human Performance Wing. List of acronyms: AG, Artificial Grammar; AGL, Artificial Grammar Learning; DRE, Dominant Real Eigenvalues; ISR, Intelligence, Surveillance, and Reconnaissance.
Assaleh, Khaled; Shanableh, Tamer; Yehia, Sherif
The Ground Penetrating Radar (GPR) is recognized as an effective nondestructive evaluation technique to improve the inspection process. However, data interpretation and the complexity of the results impose some limitations on the practicality of using this technique, mainly due to the need for a trained, experienced person to interpret images obtained by the GPR system. In this paper, an algorithm to classify and assess the condition of infrastructure utilizing image processing and pattern recognition techniques is discussed. Features extracted from a dataset of images of defected and healthy slabs are used to train a computer vision based system, while another dataset is used to evaluate the proposed algorithm. Initial results show that the proposed algorithm is able to detect the existence of defects with about a 77% success rate.
Akın, Serhat; Kok, Mustafa V.; Uraz, Irtek
This research proposes a framework for determining the optimum location of an injection well using an inference method, artificial neural networks and a search algorithm to create a search space and locate the global maximum. The production history of a complex carbonate geothermal reservoir (Kizildere Geothermal field, Turkey) is used to evaluate the proposed framework. Neural networks are used as a tool to replicate the behavior of commercial simulators, by capturing the response of the field given a limited number of parameters such as temperature, pressure, injection location, and injection flow rate. A study of different network designs indicates that a combination of a neural network and an optimization algorithm (explicit search with variable stepping) to capture local maxima can be used to locate a region or a location for optimum well placement. Results also indicate shortcomings and possible pitfalls associated with the approach. With the flexibility of the proposed workflow, it is possible to incorporate various parameters including injection flow rate, temperature, and location. For the field of study, the optimum injection well location is found to be in the southeastern part of the field. Specific locations resulting from the workflow indicated a consistent search space, having higher values in that particular region. When studied with fixed flow rates (2500 and 4911 m3/day), a search run through the whole field located two locations in the very same region, resulting in consistent predictions. A further study incorporating the effect of different flow rates indicates that the algorithm can be run in a particular region of interest and that different flow rates may yield different locations. This analysis resulted in a new location in the same region and an optimum injection rate of 4000 m3/day. It is observed that the use of a neural network as a proxy for a numerical simulator is viable for narrowing down or locating the area of interest for
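An "explicit search with variable stepping" over a trained proxy can be sketched as a coarse-to-fine grid search. In the sketch below a smooth analytic function stands in for the neural proxy, and the peak location, ranges and rate term are invented for illustration, not field values:

```python
def proxy_response(x, y, rate):
    # Stand-in for the trained neural proxy: a response surface whose
    # peak location (3.2, 1.8) and rate penalty are invented placeholders.
    return -(x - 3.2) ** 2 - (y - 1.8) ** 2 - 1e-6 * (rate - 4000.0) ** 2

def explicit_search(f, xlo, xhi, ylo, yhi, rate, step, levels=3):
    # Coarse-to-fine grid search ("explicit search with variable stepping"):
    # scan the window, then shrink it around the best cell and refine the step.
    best_x, best_y, best_v = xlo, ylo, f(xlo, ylo, rate)
    for _ in range(levels):
        x = xlo
        while x <= xhi:
            y = ylo
            while y <= yhi:
                v = f(x, y, rate)
                if v > best_v:
                    best_x, best_y, best_v = x, y, v
                y += step
            x += step
        xlo, xhi = best_x - step, best_x + step
        ylo, yhi = best_y - step, best_y + step
        step /= 4.0
    return best_x, best_y, best_v

bx, by, bv = explicit_search(proxy_response, 0.0, 10.0, 0.0, 10.0, 4000.0, 1.0)
print(round(bx, 3), round(by, 3))
```

Three refinement levels narrow a unit-step grid down to a step of 1/16, which is usually enough to locate the region of interest cheaply when each proxy evaluation is fast.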
Denton, Richard V.; Froeberg, Peter L.
This paper addresses the problem of route planning for ground vehicles. The problem is decomposed into two principal sub-problems: manipulation of a multi-dimensional knowledge base to result in a "composite map" consistent with the current mission goals, and a subsequent search procedure applied to this composite map to result in high performance routes. The relevance of expert systems and other techniques for route planning is discussed. A particularly efficient search procedure is applied to several example composite maps to demonstrate the power of the approach.
Paradigms of AI Programming is the first text to teach advanced Common Lisp techniques in the context of building major AI systems. By reconstructing authentic, complex AI programs using state-of-the-art Common Lisp, the book teaches students and professionals how to build and debug robust practical programs, while demonstrating superior programming style and important AI concepts. The author strongly emphasizes the practical performance issues involved in writing real working programs of significant size. Chapters on troubleshooting and efficiency are included, along with a discussion of th
The properties of a formulation are determined not only by the ratios in which the ingredients are combined but also by the processing conditions. Although the relationships between ingredient levels, processing conditions, and product performance may be known anecdotally, they can rarely be quantified. In the past, formulators tended to use statistical techniques to model their formulations, relying on response surfaces to provide a mechanism for optimization. However, optimization by such a method can be misleading, especially if the formulation is complex. More recently, advances in mathematics and computer science have led to the development of alternative modeling and data mining techniques which work with a wider range of data sources: neural networks (an attempt to mimic the processing of the human brain), genetic algorithms (an attempt to mimic the evolutionary process by which biological systems self-organize and adapt), and fuzzy logic (an attempt to mimic the ability of the human brain to draw conclusions and generate responses based on incomplete or imprecise information). In this review the current technology will be examined, as well as its application in pharmaceutical formulation and processing. The challenges, benefits and future possibilities of neural computing will be discussed.
Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan
In the present research, three artificial intelligence methods, namely Gene Expression Programming (GEP), Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), as well as 48 empirical equations (10 temperature-based, 12 sunshine-based and 26 based on other meteorological parameters) were used to estimate daily solar radiation in Kerman, Iran over the period 1992-2009. To develop the GEP, ANN and ANFIS models, depending on the empirical equations used, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity and precipitation were considered as inputs to the mentioned intelligent methods. To compare the accuracy of the empirical equations and the intelligent models, root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE) and determination coefficient (R2) indices were used. The results showed that, in general, the sunshine-based and meteorological parameters-based scenarios in the ANN and ANFIS models presented higher accuracy than the mentioned empirical equations. Moreover, the most accurate method in the studied region was the ANN11 scenario with five inputs. The values of the RMSE, MAE, MARE and R2 indices for the mentioned model were 1.850 MJ m-2 day-1, 1.184 MJ m-2 day-1, 9.58% and 0.935, respectively.
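A representative sunshine-based empirical equation of the kind benchmarked here is the classic Angstrom-Prescott relation. The sketch below uses the common default coefficients a = 0.25, b = 0.50; the study's own 48 equations and any locally fitted coefficients are not reproduced:

```python
def angstrom_prescott(ra, n_actual, n_max, a=0.25, b=0.50):
    # Sunshine-based estimate: Rs = Ra * (a + b * n/N), in MJ m-2 day-1.
    # a = 0.25, b = 0.50 are the widely used default (FAO-56) coefficients.
    #   ra       : extraterrestrial radiation (MJ m-2 day-1)
    #   n_actual : actual sunshine duration (hours)
    #   n_max    : maximum possible sunshine duration (hours)
    return ra * (a + b * n_actual / n_max)

# invented inputs: Ra = 30 MJ m-2 day-1, 8 h of sunshine out of a possible 12 h
print(round(angstrom_prescott(30.0, 8.0, 12.0), 3))
```

Models like ANN and ANFIS take the same inputs (and more) but replace the fixed linear form with a fitted nonlinear mapping, which is why they can outperform such equations.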
Ponta, L.; Raberto, M.; Cincotti, S.
In this paper, a multi-asset artificial financial market populated by zero-intelligence traders with finite financial resources is presented. The market is characterized by different types of stocks representing firms operating in different sectors of the economy. Zero-intelligence traders follow a random allocation strategy which is constrained by finite resources, past market volatility and the allocation universe. Within this framework, stock price processes exhibit volatility clustering, fat-tailed distributions of returns and reversion to the mean. Moreover, the cross-correlations between returns of different stocks are studied using methods of random matrix theory. The probability distribution of the eigenvalues of the cross-correlation matrix shows the presence of outliers, similar to those recently observed on real data for business sectors. It is worth noting that business sectors have been recovered in our framework without dividends, as the only consequence of random restrictions on the allocation universe of the zero-intelligence traders. Furthermore, in the presence of dividend-paying stocks and in the case of cash inflow added to the market, the artificial stock market exhibits the same structural results obtained in the simulation without dividends. These results suggest a significant structural influence on the statistical properties of multi-asset stock markets.
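For the two-stock case the random-matrix analysis has a closed form: the correlation matrix [[1, rho], [rho, 1]] has eigenvalues 1 + rho and 1 - rho. The sketch below estimates rho from two simulated return series, using plain Gaussian noise as a stand-in for the full zero-intelligence trader model:

```python
import random

def corr(a, b):
    # Pearson correlation of two equally long return series
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5

rng1, rng2 = random.Random(1), random.Random(2)
r1 = [rng1.gauss(0.0, 1.0) for _ in range(5000)]  # returns of stock 1
r2 = [rng2.gauss(0.0, 1.0) for _ in range(5000)]  # returns of stock 2
rho = corr(r1, r2)
# eigenvalues of the 2x2 correlation matrix [[1, rho], [rho, 1]]
eigs = (1.0 + rho, 1.0 - rho)
print(round(rho, 4), [round(e, 4) for e in eigs])
```

With independent series both eigenvalues stay close to 1; correlated sector structure pushes one eigenvalue out as an outlier, which is the signature the paper observes.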
Kruse, F. A.
This project was a three year study at the Center for the Study of Earth from Space (CSES) within the Cooperative Institute for Research in Environmental Science (CIRES) at the University of Colorado, Boulder. The goal of this research was to develop an expert system to allow automated identification of geologic materials based on their spectral characteristics in imaging spectrometer data such as the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). This requirement was dictated by the volume of data produced by imaging spectrometers, which prohibits manual analysis. The research described is based on the development of automated techniques for analysis of imaging spectrometer data that emulate the analytical processes used by a human observer. The research tested the feasibility of such an approach, implemented an operational system, and tested the validity of the results for selected imaging spectrometer data sets.
An important problem in demand planning for energy consumption is developing an accurate energy forecasting model. In fact, it is not possible to allocate energy resources in an optimal manner without an accurate demand value. A new energy forecasting model is proposed based on a back-propagation (BP) neural network and the imperialist competitive algorithm. The proposed method combines the local search ability of the BP technique with the global search ability of the imperialist competitive algorithm. Two types of empirical data, regarding energy demand (gross domestic product (GDP), population, import, export and energy demand) in Turkey from 1979 to 2005 and electricity demand (population, GDP, total revenue from exporting industrial products and electricity consumption) in Thailand from 1986 to 2010, were investigated to demonstrate the applicability and merits of the present method. The performance of the proposed model is found to be better than that of a conventional back-propagation neural network, with a lower mean absolute error.
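The hybrid idea, a global explorer seeding a local refiner, can be sketched with simple stand-ins: random sampling in place of the imperialist competitive algorithm and coordinate descent in place of back-propagation, fitting a toy linear demand model. All data points and ranges below are fabricated for illustration:

```python
import random

# toy demand data: (GDP index, population index, demand); values fabricated
DATA = [(1.0, 1.0, 5.0), (2.0, 1.5, 8.0), (3.0, 2.0, 11.0), (4.0, 3.0, 15.0)]

def mse_loss(w, data):
    # linear demand model: demand ~ w0 + w1*gdp + w2*pop
    return sum((w[0] + w[1] * g + w[2] * p - d) ** 2 for g, p, d in data) / len(data)

def global_then_local(data, seed=0):
    rng = random.Random(seed)
    # Global phase (stand-in for the imperialist competitive algorithm):
    # sample many random weight vectors and keep the best one.
    best = min(([rng.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(2000)),
               key=lambda w: mse_loss(w, data))
    # Local phase (stand-in for back-propagation): coordinate descent
    # with a shrinking step size, started from the global winner.
    step = 0.5
    while step > 1e-4:
        improved = False
        for i in range(3):
            for delta in (step, -step):
                trial = list(best)
                trial[i] += delta
                if mse_loss(trial, data) < mse_loss(best, data):
                    best, improved = trial, True
        if not improved:
            step /= 2.0
    return best

w = global_then_local(DATA)
print([round(wi, 3) for wi in w], round(mse_loss(w, DATA), 6))
```

The division of labor mirrors the paper's design: the global phase avoids poor basins, and the local phase drives the residual error down from wherever the global phase lands.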
AHMED A. MAHFOUZ
This paper describes an intelligent direct torque control (DTC) technique for a Permanent Magnet Synchronous Motor (PMSM) drive based on Adaptive Neuro-Fuzzy Inference Systems (ANFIS). The proposed system has proven successful in controlling the instantaneous torque so as not to depend only on the estimated flux, torque and position, but also on the estimation of the lookup table and the generation of the driver switching table. Experimental results confirm the MATLAB simulation results for the torque, speed and flux estimations.
Barceló, J. A.; Moitinho de Almeida, V.
Why are archaeological artefacts the way they are? In this paper we try to answer this question by investigating the relationship between form and function. We propose new ways of studying how behaviour in the past can be inferred from the examination of archaeological observables in the present. In any case, we take into account that there are also non-visual features characterizing ancient objects and materials (i.e., compositional information based on mass spectrometry data, chronological information based on radioactive decay measurements, etc.). Information that should make us aware of many functional properties of objects is multidimensional in nature: size, which makes reference to height, length, depth, weight and mass; shape and form, which make reference to the geometry of contours and volumes; texture, which refers to the microtopography (roughness, waviness, and lay) and visual appearance (colour variations, brightness, reflectivity and transparency) of surfaces; and finally material, meaning the combining of distinct compositional elements and properties to form a whole. With the exception of material data, the other aspects relevant to functional reasoning have traditionally been described in rather ambiguous terms, without taking into account the advantages of quantitative measurements of shape/form and texture. Reasoning about the functionality of archaeological objects recovered at an archaeological site requires a cross-disciplinary investigation, which may range from recognition techniques used in computer vision and robotics to reasoning, representation, and learning methods in artificial intelligence. The approach we adopt here is to follow current computational theories of object perception to improve the way archaeology can deal with the explanation of human behaviour in the past (function) from the analysis of visual and non-visual data, taking into account that visual appearances and even compositional characteristics only
Kerr, D.; Thompson, L.; Shenoi, S.
The basis of this research is to apply novel techniques from Artificial Intelligence and Expert Systems in capturing, integrating and articulating key knowledge from geology, geostatistics, and petroleum engineering to develop accurate descriptions of petroleum reservoirs. The ultimate goal is to design and implement a single powerful expert system for use by small producers and independents to efficiently exploit reservoirs. The main challenge of the proposed research is to automate the generation of detailed reservoir descriptions honoring all the available soft and hard data, which range from qualitative and semi-quantitative geological interpretations to numeric data obtained from cores, well tests, well logs and production statistics. Additional challenges are the verification and validation of the expert system, since much of the experts' interpretation is based on extended experience in reservoir characterization. The overall project plan to design the system to create integrated reservoir descriptions begins by initially developing an AI-based methodology for producing large-scale reservoir descriptions generated interactively from geology and well test data. Parallel to this task is a second task that develops an AI-based methodology that uses facies-biased information to generate small-scale descriptions of reservoir properties such as permeability and porosity. The third task involves consolidation and integration of the large-scale and small-scale methodologies to produce reservoir descriptions honoring all the available data. The final task will be technology transfer. With this plan, the authors have carefully allocated and sequenced the activities involved in each of the tasks to promote concurrent progress towards the research objectives. Moreover, the project duties are divided among the participating faculty members. Graduate students will work in teams with faculty members.
Tiwari, Arvind Kumar; Srivastava, Rajeev
In the recent past, there has been massive growth in knowledge of previously unknown proteins with the advancement of high-throughput microarray technologies, and protein function prediction has become one of the most challenging problems in bioinformatics. Traditionally, homology-based approaches were used to predict protein function, but they fail when a new protein is dissimilar to previously characterized ones. Therefore, to alleviate the problems associated with traditional homology-based approaches, numerous computational intelligence techniques have been proposed in recent years. This paper presents a state-of-the-art comprehensive review of various computational intelligence techniques for protein function prediction using sequence, structure, protein-protein interaction network, and gene expression data, applied in wide-ranging areas such as prediction of DNA and RNA binding sites, subcellular localization, enzyme functions, signal peptides, catalytic residues, nuclear/G-protein coupled receptors, membrane proteins, and pathway analysis from gene expression datasets. This paper also summarizes the results obtained by many researchers who have solved these problems using computational intelligence techniques with appropriate datasets to improve prediction performance. The summary shows that ensemble classifiers and the integration of multiple heterogeneous data are useful for protein function prediction.
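The conclusion that ensemble classifiers help can be illustrated with the simplest ensemble device, majority voting over heterogeneous base classifiers. The labels and classifier outputs below are invented for illustration only:

```python
from collections import Counter

def majority_vote(labels):
    # consensus label across the base classifiers for one protein
    return Counter(labels).most_common(1)[0][0]

# invented outputs of three base classifiers (e.g., trained on sequence,
# structure, and PPI-network features respectively) for two proteins
votes = [
    ["DNA-binding", "DNA-binding", "enzyme"],
    ["enzyme", "enzyme", "enzyme"],
]
consensus = [majority_vote(v) for v in votes]
print(consensus)
```

Because the base classifiers see different data views (sequence vs. structure vs. network), their errors are partly independent, which is what makes the vote more reliable than any single model.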
Bayindir, R.; Colak, I. [Department of Electrical Education, Faculty of Technical Education, Gazi University, Besevler, 06500 Ankara (Turkey); Sagiroglu, S. [Department of Computer Engineering, Faculty of Engineering and Architecture, Celal Bayar Bulvari, Gazi University, Maltepe, 06570 Ankara (Turkey)
An intelligent power factor correction approach based on artificial neural networks (ANN) is introduced. Four learning algorithms, backpropagation (BP), delta-bar-delta (DBD), extended delta-bar-delta (EDBD) and directed random search (DRS), were used to train the ANNs. The best test results were first obtained from the ANN compensators trained with the four learning algorithms. The parameters belonging to each neural compensator, obtained from off-line training, were then inserted into a microcontroller for on-line usage. The results have shown that the selected intelligent compensators developed in this work can overcome the problems reported in the literature, providing an accurate, simple and low-cost solution for compensation. (author)
Reiterer, Alexander; Egly, Uwe; Vicovac, Tanja; Mai, Enrico; Moafipoor, Shahram; Grejner-Brzezinska, Dorota A.; Toth, Charles K.
Artificial Intelligence (AI) is one of the key technologies in many of today's novel applications. It is used to add knowledge and reasoning to systems. This paper presents a review of AI methods, including examples of their practical application in Geodesy such as data analysis, deformation analysis, navigation, network adjustment, and optimization of complex measurement procedures. We focus on three examples, namely, a geo-risk assessment system supported by a knowledge base, an intelligent dead-reckoning personal navigator, and evolutionary strategies for the determination of Earth gravity field parameters. Some of the authors are members of IAG Sub-Commission 4.2 - Working Group 4.2.3, whose main goal is to study and report on the application of AI in Engineering Geodesy.
National Aeronautics and Space Administration — Kennedy Space Center (KSC) has the most complex, enormous, difficult, diverse, distributed, and unique set of integrated scheduling problems in the world and it is...
National Aeronautics and Space Administration — The ultimate goal is the automation of a large amount of KSC's planning, scheduling, and execution decision making. Phase II will result in a complete full-scale...
Ali, Moonis; Gupta, U. K.
An expert system is being developed which can detect anomalies in Space Shuttle Main Engine (SSME) sensor data significantly earlier than the redline algorithm currently in use. The training of such an expert system focuses on two approaches, based on low-frequency and high-frequency analyses of sensor data. Both approaches are being tested on data from SSME tests and their results compared with the findings of NASA and Rocketdyne experts. Prototype implementations have detected the presence of anomalies earlier than the redline algorithms currently in use. It therefore appears that these approaches have the potential of detecting anomalies early enough to shut down the engine or take other corrective action before severe damage to the engine occurs.
Tanner, Steve; Graves, Sara J.
The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering user access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.
...all other nodes communicating with that node. This would require longer than nearest-neighbor communication, or one-node hops... Dover Publications Inc, 1981. ART 3.0 Reference Manual. Inference Corporation, Los Angeles, CA, January 1987. Baer, Jean-Loup. Computer Systems Architecture. ...consultant for Witco Chemical Corporation, Petrolia, Pennsylvania, until called to active duty in October 1982. He served as a computer software...
In Computer Graphics, the use of intelligent techniques started more recently than in other research areas. However, during the last two decades, the use of intelligent Computer Graphics techniques has grown year after year and more and more interesting techniques have been presented in this area. The purpose of this volume is to present current work of the Intelligent Computer Graphics community, a community growing year after year. This volume is a kind of continuation of the previously published Springer volumes “Artificial Intelligence Techniques for Computer Graphics” (2008), “Intelligent Computer Graphics 2009” (2009), “Intelligent Computer Graphics 2010” (2010) and “Intelligent Computer Graphics 2011” (2011). Usually, this kind of volume contains, every year, selected extended papers from the corresponding 3IA Conference of the year. However, the current volume is made from directly reviewed and selected papers, submitted for publication in the volume “Intelligent Computer Gr...
Ryoo, Young; Jang, Moon-soo; Bae, Young-Chul
Intelligent systems were initiated with the attempt to imitate the human brain. People wish to let machines perform intelligent work. Many techniques of intelligent systems are based on artificial intelligence. According to changing and novel requirements, advanced intelligent systems cover a wide spectrum: big data processing, intelligent control, advanced robotics, artificial intelligence and machine learning. This book focuses on coordinating intelligent systems with highly integrated and foundationally functional components. The book consists of 19 contributions that feature social network-based recommender systems, application of fuzzy enforcement, energy visualization, ultrasonic muscular thickness measurement, regional analysis and predictive modeling, analysis of 3D polygon data, a blood pressure estimation system, a fuzzy human model, a fuzzy ultrasonic imaging method, ultrasonic mobile smart technology, pseudo-normal image synthesis, a subspace classifier, mobile object tracking, standing-up moti...
Cordova-Fraga, T.; Martinez-Espinosa, J. C.; Bernal, J.; Huerta-Franco, R.; Sosa-Aquino, M.; Vargas-Luna, M.
A simple system using an artificial vision technique for measuring the volume of solid objects is described. The system is based on the acquisition of an image sequence of the object while it rotates on an automated mechanism controlled by a PC. Volumes of different objects such as a sphere, a cylinder and also a carrot were measured. The proposed algorithm was developed in the LabVIEW 6.1 environment. This technique can be very useful when applied to measuring the human body for evaluating body composition.
Awret, Uziel; Chalmers, David
This volume represents the combination of two special issues of the Journal of Consciousness Studies on the topic of the technological singularity. Could artificial intelligence really out-think us, and what would be the likely repercussions if it could? Leading authors contribute to the debate, which takes the form of a target chapter by philosopher David Chalmers, plus commentaries from the likes of Daniel Dennett, Nick Bostrom, Ray Kurzweil, Ben Goertzel and Frank Tipler, among many others. Chalmers then responds to the commentators to round off the discussion.
This paper proposes a comparison of two common predictors, statistical methods and artificial intelligence (AI), for rainfall prediction using empirical data series. The statistical methods used are Auto-Regressive Integrated Moving Average (ARIMA) and Adaptive Splines Threshold Autoregressive (ASTAR), two of the most favored statistical tools, while for the AI approach, a combination of Genetic Algorithm and Neural Network (GA-NN) is chosen. The results show that ASTAR gives the best prediction compared to the others, in terms of root mean square error (RMSE) and how closely the predicted trend follows the actual one.
Huber, Justin; Straub, Jeremy
An artificial intelligence-controlled robot (AICR) operating in close proximity to humans poses risk to those humans. Validating the performance of an AICR is an ill-posed problem, due to the complexity introduced by the erratic (non-computer) actors. In order to prove the AICR's usefulness, test cases must be generated to simulate the actions of these actors. This paper discusses AICR performance validation in the context of a common human activity, moving through a crowded corridor, using test cases created by an AI use-case producer. This test is a two-dimensional simplification relevant to autonomous UAV navigation in the national airspace.
Pagliarini, Luigi; Lund, Henrik Hautop
...of physical and functional modules, we created an artistic instantiation of such a concept with the Parallel Relational Universes, allowing arts alumni to remix artistic expressions. Here, we report the data that emerged from a first pre-test, run with gymnasium alumni. We then report both the artistic and the psychological findings. We describe the modern artificial intelligence implementation of this instrument. Between an art piece and a psychological test, at a first cognitive analysis, it seems to be a promising research tool. In the discussion we speculate about potential industrial applications as well.
Kumar, Rajnish; Sharma, Anju; Siddiqui, Mohammed Haris; Tiwari, Rajesh Kumar
Information about drug metabolism is an essential component of drug development. Modeling drug metabolism requires identification of the enzymes involved, the rate and extent of metabolism, the sites of metabolism, etc. There have been continuous attempts at predicting the metabolism of drugs using artificial intelligence in an effort to reduce the attrition rate of drug candidates entering preclinical and clinical trials. Currently, there are a number of predictive models available for metabolism using support vector machines, artificial neural networks, Bayesian classifiers, etc. There is an urgent need to review their progress so far and address the existing challenges in the prediction of metabolism. In this attempt, we present the models currently available in the literature and some of the critical issues regarding prediction of drug metabolism.
In laser cutting, cut quality is of great importance. Multiple non-linear effects of process parameters and their interactions make it very difficult to predict cut quality. In this paper, an artificial intelligence (AI) approach was applied to predict the surface roughness in CO2 laser cutting. To this aim, an artificial neural network (ANN) model of surface roughness was developed in terms of cutting speed, laser power and assist gas pressure. The experimental results obtained from Taguchi’s L25 orthogonal array were used to develop the ANN model. The ANN mathematical model of surface roughness was expressed as an explicit nonlinear function of the selected input parameters. Statistical results indicate that the ANN model can predict the surface roughness with good accuracy. It was shown that ANNs may be used as a good alternative in analyzing the effects of cutting parameters on the surface roughness.
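An ANN model that expresses roughness as an explicit nonlinear function of the three inputs has exactly this shape: a small feedforward pass with tanh hidden units. The network size, normalization and all weights below are invented placeholders, not the trained values from the paper:

```python
import math

def tanh_layer(x, weights, biases):
    # one dense layer with tanh activation
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def roughness(speed, power, pressure):
    # Illustrative 3-2-1 feedforward network: inputs are cutting speed,
    # laser power and assist gas pressure; output is surface roughness.
    # The crude normalization and every weight here are made up.
    x = [speed / 10.0, power / 2.0, pressure / 10.0]
    hidden = tanh_layer(x, [[0.8, -0.5, 0.3], [-0.4, 0.9, 0.2]], [0.1, -0.2])
    w_out, b_out = [1.2, -0.7], 2.0
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out  # Ra, in um

print(round(roughness(8.0, 1.5, 6.0), 3))
```

Once trained, such a closed-form pass makes it cheap to sweep the cutting parameters and study their individual effects on roughness, which is how the paper uses the model.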
Sousa, V; Matos, J P; Almeida, N; Saldanha Matos, J
Operation, maintenance and rehabilitation comprise the main concerns of wastewater infrastructure asset management. Given the nature of the service provided by a wastewater system and the characteristics of the supporting infrastructure, technical issues are relevant to support asset management decisions. In particular, in densely urbanized areas served by large, complex and aging sewer networks, the sustainability of the infrastructures largely depends on the implementation of an efficient asset management system. The efficiency of such a system may be enhanced with technical decision support tools. This paper describes the role of artificial intelligence tools such as artificial neural networks and support vector machines for assisting the planning of operation and maintenance activities of wastewater infrastructures. A case study of the application of this type of tool to the wastewater infrastructures of Sistema de Saneamento da Costa do Estoril is presented.
Pugliesi, R; Andrade, M L G; Menezes, M O; Pereira, M A S; Maizato, M J S
The neutron radiography technique was employed to inspect an artificial heart prototype which is being developed to provide blood circulation for patients awaiting heart transplant surgery. The radiographs were obtained by the direct method with a gadolinium converter screen along with double-coated Kodak-AA emulsion film. The artificial heart consists of a flexible plastic membrane located inside a welded metallic cavity, which is employed for blood pumping purposes. The main objective of the present inspection was to identify possible damage to this plastic membrane produced during the welding process of the metallic cavity. The radiographs obtained were digitized and analysed on a PC, and the enhanced images clearly reveal several defects in the plastic membrane, suggesting changes in the welding process.
AI based Tutoring and Learning Path Adaptation are well-known concepts in e-learning scenarios today and are increasingly applied in modern learning environments. In order to gain more flexibility and to enhance existing e-learning platforms, the OPUS One LMS Extension package will enable a generic Intelligent Tutored Adaptive Learning Environment, based on a holistic Multidimensional Instructional Design Model (PENTHA ID Model), adding AI based tutoring and adaptation functionality to existing Web-based e-learning systems. Relying on profiles adapted in real time, it allows content and course authors to apply a dynamic course design, supporting tutored, collaborative sessions and activities, as suggested by modern pedagogy. The concept presented combines a personalized level of surveillance with learning activity and learning path adaptation suggestions to ensure the student's learning motivation and learning success. The OPUS One concept makes it possible to implement an advanced tutoring approach combining "expert based" e-tutoring with the more "personal" human tutoring function. It supplies the "Human Tutor" with precise, extended course activity data and "adaptation" suggestions based on predefined subject matter rules. The concept architecture is modular, allowing a personalized platform configuration.
Hopgood, Adrian A
The third edition of this bestseller examines the principles of artificial intelligence and their application to engineering and science, as well as techniques for developing intelligent systems to solve practical problems. Covering the full spectrum of intelligent systems techniques, it incorporates knowledge-based systems, computational intelligence, and their hybrids. Using clear and concise language, Intelligent Systems for Engineers and Scientists, Third Edition features updates and improvements throughout all chapters. It includes expanded and separated chapters on genetic algorithms and
de Croon, G C H E; Remes, B D W; Ruijsink, R; De Wagter, C
This book introduces the topics most relevant to autonomously flying flapping-wing robots: flapping-wing design, aerodynamics, and artificial intelligence. Readers can explore these topics in the context of the DelFly, a flapping-wing robot designed at Delft University in the Netherlands. How are tiny fruit flies able to lift their weight, avoid obstacles and predators, and find food or shelter? The first step in emulating this is the creation of a micro flapping-wing robot that flies by itself. The challenges are considerable: the design and aerodynamics of flapping wings are still active areas of scientific research, whilst the artificial intelligence is subject to extreme limitations deriving from the few sensors and minimal processing onboard. This book conveys the essential insights that lie behind successes such as the DelFly Micro and the DelFly Explorer. The DelFly Micro, at 3.07 grams and with a 10 cm wing span, is still the smallest flapping-wing MAV in the world carrying a camera, whilst the DelFly Expl...
Maleki, Shahoo; Moradzadeh, Ali; Riabi, Reza Ghavami; Gholami, Raoof; Sadeghzadeh, Farhad
A good understanding of the mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from petrophysical logs, with compression and shear sonic data being the main input to the correlations. In many cases, however, shear sonic data are not acquired during well logging, often for cost-saving reasons. In such cases, shear wave velocity is estimated using the empirical correlations or artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs corresponding to a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and the Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem reliable, the estimated values are not very precise; considering the importance of shear sonic data as input to different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for estimating the mechanical properties of rock formations will compensate for the possible additional costs of acquiring a shear log.
Al-Kayiem Hussain H.
Steam boilers are considered the core of any steam power plant. Boilers are subject to various types of trips leading to shutdown of the entire plant. Tube leakage is the worst of the common boiler faults, with a shutdown period lasting around four to five days. This paper describes the rules of an artificial intelligent system for diagnosing boiler variables prior to tube-leakage occurrence. An intelligent system based on an artificial neural network was designed and coded in the MATLAB environment. The ANN was trained and validated using real site data acquired from a coal-fired power plant in Malaysia. Ninety-three boiler operational variables were identified for the present investigation based on plant-operator experience. Various neural network topology combinations were investigated. The results showed that a network with two hidden layers performed better than one with a single hidden layer using the Levenberg-Marquardt training algorithm. Moreover, the hyperbolic tangent function for input and output nodes performed better than other activation function types.
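The topology described above can be sketched in a few lines. The following is a minimal illustration, not the paper's MATLAB system: a feedforward network with two hidden layers and tanh activations on every node, trained here by plain gradient descent rather than Levenberg-Marquardt, on a toy stand-in for the 93 plant variables.

```python
import numpy as np

rng = np.random.default_rng(0)

def init(sizes):
    # One (weights, biases) pair per layer transition
    return [(rng.normal(0, 0.5, (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    acts = [x]
    for W, b in params:
        x = np.tanh(x @ W + b)          # tanh on hidden and output nodes
        acts.append(x)
    return acts

def mse(params, X, y):
    return float(((forward(params, X)[-1] - y) ** 2).mean())

def train(params, X, y, lr=0.1, epochs=3000):
    for _ in range(epochs):
        acts = forward(params, X)
        delta = (acts[-1] - y) * (1 - acts[-1] ** 2)   # tanh derivative
        for i in reversed(range(len(params))):
            W, b = params[i]
            grad_W = acts[i].T @ delta / len(X)
            grad_b = delta.mean(axis=0)
            delta = (delta @ W.T) * (1 - acts[i] ** 2)  # backpropagate
            params[i] = (W - lr * grad_W, b - lr * grad_b)
    return params

# Toy "leak" signal: fault (+1) when two redundant sensor readings disagree
X = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
y = np.array([[-1.], [1.], [1.], [-1.]])
params = init([2, 6, 6, 1])             # two hidden layers, as in the study
mse_before = mse(params, X, y)
params = train(params, X, y)
mse_after = mse(params, X, y)
print(f"MSE before: {mse_before:.3f}, after: {mse_after:.3f}")
```

The network sizes, learning rate, and toy data are all illustrative assumptions; the point is only the two-hidden-layer tanh structure the abstract reports as best-performing.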
We are currently witnessing an evolution from building and home automation to smart homes, driven by the progressing maturity of the Internet of Things and the use of artificial intelligence. However, significant technological challenges, such as immature home intelligence, huge network and central-server processing load, and embedded resource usage, still need to be addressed. Until now, most of the research in this area has focused on centralized architectures for smart homes. This work contribu...
The current status of artificial intelligence research in the U.S.A., carried out by researchers at companies and universities, is described. Traditional artificial intelligence researchers believe in the usefulness of inference, learning, and symbol processing, and that better machines can be built as more sophisticated algorithms and faster hardware become available. Younger researchers, on the other hand, try to design complete mechanical organisms that move by the same principles as the reflexive motions of animals, thereby avoiding the enormous labor of explicitly programming apparently rational behavior. This paper additionally introduces the results of research performed by university researchers on the problems in conventional artificial intelligence research. A large number of research efforts are under way aiming to produce a machine with an artificial intelligence capable of expressing human inference and self-consciousness. However, because the approach addresses only the aspect of intelligence, not even insect-level capability has so far been realized. 4 refs., 6 figs.
between technically and artistically minded students is, however, increased once the students reach the sixth semester. The complex algorithms of the artificial intelligence course seemed to demotivate the artistically minded students even before the course began. This paper will present the extensive changes made to the sixth-semester artificial intelligence programming course in order to provide highly motivating, direct visual feedback and thereby remove the steep initial learning curve for artistically minded students. The framework was developed in close dialogue with both the game industry and experienced master...
Advanced research and its applicability were surveyed with a view to applying advanced functional cells in industry. The basic target was to develop, produce, control, and utilize functional cells such as intelligent materials and self-regulating bioreactors. The regulatory factors governing apoptosis (the process of programmed cell suicide in multicellular organisms), the cell cycle, and aging/agelessness were investigated, and the function of these regulatory factors was examined at the protein level. The injection of factors regulating cellular function, and the tissue engineering required for the regulation of cell proliferation, were investigated. Tissue engineering is considered here as intracellular regulation by gene transduction and extracellular regulation by culture methods such as coculture. Analysis methods for the proliferation and function of living cells were investigated using probes that recognize molecular structure. Novel biomaterials, artificial organ systems, cellular therapy, and useful materials were investigated for utilizing these techniques of regulating cell proliferation. 425 refs., 85 figs., 9 tabs.
There is a problem associated with contemporary studies in the philosophy of mind, which focus on the identification and convergence of human and machine intelligence: the problem of machine emulation of sense. In the present study, analysis of this problem is carried out based on concepts from structural and post-structural approaches that have been almost entirely overlooked by contemporary philosophy of mind. If we refer to the basic definitions of "sign" and "meaning" found in structuralism and post-structuralism, we see a fundamental difference between the capabilities of a machine and of the human brain engaged in the processing of a sign. This research exemplifies and provides additional evidence to support distinctions between the syntactic and semantic aspects of intelligence, an issue widely discussed by adepts of contemporary philosophy of mind. The research demonstrates that some aspects of a number of ideas proposed in relation to semantics and semiosis in structuralism and post-structuralism are similar to those found in contemporary analytical studies related to the theory and philosophy of artificial intelligence. The concluding part of the paper offers an interpretation of the problem of the formalization of sense, connected to its metaphysical (transcendental) properties.
The development of smart sensors involves the design of reconfigurable systems capable of working with different input sensors. Reconfigurable systems should ideally spend the least possible amount of time on their calibration. An autocalibration algorithm for intelligent sensors should be able to correct major problems such as offset, variation of gain, and lack of linearity as accurately as possible. This paper describes a new autocalibration methodology for nonlinear intelligent sensors based on artificial neural networks (ANN). The methodology involves analysis of several network topologies and training algorithms. The proposed method was compared against the piecewise and polynomial linearization methods, using different numbers of calibration points and several nonlinearity levels of the input signal. The proposed method turned out to have better overall accuracy than the other two methods. Besides the experimental results and the analysis of the complete study, the paper describes the implementation of the ANN in a microcontroller unit (MCU). To illustrate the method's capability for building autocalibrated, reconfigurable systems, a temperature measurement system was designed and tested. The proposed method is an improvement over classic autocalibration methodologies because it impacts the design process of intelligent sensors, autocalibration methodologies, and their associated factors, such as time and cost.
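The calibration problem being solved (offset, gain error, and nonlinearity) can be made concrete with the polynomial baseline the ANN method is compared against. The sensor model below is invented for demonstration, and the paper's ANN/MCU implementation is not reproduced; the sketch only shows how a fitted inverse polynomial corrects all three error sources at once.

```python
import numpy as np

def raw_sensor(true_value):
    # Hypothetical faulty sensor: offset + gain error + quadratic nonlinearity
    return 0.3 + 0.8 * true_value + 0.05 * true_value ** 2

# Calibration: sample a handful of known reference stimuli...
ref = np.linspace(0.0, 10.0, 9)       # known true values
readings = raw_sensor(ref)            # what the sensor actually reports

# ...and fit an inverse polynomial mapping reading -> true value
coeffs = np.polyfit(readings, ref, deg=3)

# Apply the correction to fresh, uncalibrated readings
test_true = np.array([2.5, 6.1, 9.0])
corrected = np.polyval(coeffs, raw_sensor(test_true))
print(np.round(corrected, 3))         # close to test_true
```

An ANN-based calibrator would replace the `polyfit`/`polyval` pair with a small trained network, which is what the paper reports as more accurate across calibration-point counts and nonlinearity levels.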
Shiraishi, Y; Yambe, T; Yoshizawa, M; Hashimoto, H; Yamada, A; Miura, H; Hashem, M; Kitano, T; Shiga, T; Homma, D
Annuloplasty for functional mitral or tricuspid regurgitation is performed for the surgical restoration of valvular disease. However, these major techniques may sometimes be ineffective because of chamber dilation and valve tethering. We have been developing a sophisticated intelligent artificial papillary muscle (PM), based on an anisotropic shape memory alloy fiber, as an alternative surgical reconstruction of the continuity between the mitral structural apparatus and the left ventricular myocardium. This study quantitatively examined mitral regurgitation in relation to the reduction of PM tension, using an originally developed ventricular simulator with isolated goat hearts, in support of the sophisticated artificial PM. Aortic and mitral valves with left ventricular free-wall portions of isolated goat hearts (n=9) were secured on an elastic plastic membrane and statically pressurized, which produced a leaflet-papillary muscle positional change and central mitral regurgitation. The PMs were connected to a load cell, and the relationship between regurgitation and PM tension was measured. We then connected the left ventricular specimen model to our hydraulic ventricular simulator and performed hemodynamic simulation with controlled PM tension.
This project seeks to increase return on investment by presenting models based on artificial intelligence. Investment in financial markets can be considered on a short-term (daily) and medium-term (monthly) basis; hence, daily data from the Tehran Stock Exchange and the rates of foreign exchange and gold coins were extracted for the period March 2010 to September 2012 and fed into the neural network and genetic programming models. The monthly rate of return and risk of 20 active companies on the stock exchange, the monthly risk values of foreign exchange and gold coins, and bank deposits were also used in a genetic algorithm in order to provide optimal investment portfolios for investors. The results of executing the models indicate that both artificial neural networks and genetic programming are effective for short-term financial market prediction, with artificial neural networks showing better efficiency. The efficiency of the genetic algorithm in improving the rate of return and risk, via identification of optimal investment portfolios, was also confirmed.
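The genetic-algorithm side of such a study can be sketched as evolving portfolio weights that trade off mean return against variance (risk). The asset statistics, population size, and operators below are synthetic stand-ins, not the Tehran Stock Exchange data or the paper's calibrated GA.

```python
import numpy as np

rng = np.random.default_rng(1)
n_assets = 5
mu = rng.uniform(0.005, 0.03, n_assets)        # hypothetical monthly mean returns
A = rng.normal(0, 0.02, (n_assets, n_assets))
cov = A @ A.T + 0.0004 * np.eye(n_assets)      # positive-definite risk matrix

def fitness(w, risk_aversion=3.0):
    # Mean-variance objective: return minus a risk penalty
    return w @ mu - risk_aversion * (w @ cov @ w)

def normalize(pop):
    pop = np.abs(pop)                          # no short positions
    return pop / pop.sum(axis=1, keepdims=True)

pop = normalize(rng.random((40, n_assets)))
pop[0] = np.full(n_assets, 1 / n_assets)       # seed with equal weights
for _ in range(200):
    f = np.array([fitness(w) for w in pop])
    order = np.argsort(f)[::-1]
    parents = pop[order[:20]]                  # truncation selection (elitist)
    cut = rng.integers(1, n_assets, 20)
    kids = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 20][c:]])
                     for i, c in enumerate(cut)])  # one-point crossover
    kids += rng.normal(0, 0.02, kids.shape)        # Gaussian mutation
    pop = np.vstack([parents, normalize(kids)])

best = max(pop, key=fitness)
print(np.round(best, 3), round(float(fitness(best)), 4))
```

Because the parents survive unchanged each generation, the best portfolio found is never worse than the equal-weight seed under this objective.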
DU Jun-ping; TU Xu-yan
This paper proposes a concept and design strategy for the humanoid intelligent management system (HIMS) based on artificial life. Various topics are discussed including the design method and implementation techniques for the dual management scheme (DMS), humanoid intelligent management model (HIMM), central-decentralized management pattern, and multi-grade coordination function.
This book proposes new algorithms to ensure secured communications and prevent unauthorized data exchange in secured multimedia systems. Focusing on numerous applications’ algorithms and scenarios, it offers an in-depth analysis of data hiding technologies including watermarking, cryptography, encryption, copy control, and authentication. The authors present a framework for visual data hiding technologies that resolves emerging problems of modern multimedia applications in several contexts including the medical, healthcare, education, and wireless communication networking domains. Further, it introduces several intelligent security techniques with real-time implementation. As part of its comprehensive coverage, the book discusses contemporary multimedia authentication and fingerprinting techniques, while also proposing personal authentication/recognition systems based on hand images, surveillance system security using gait recognition, face recognition under restricted constraints such as dry/wet face condi...
Beasley, Robert; Bryant, Nathan L.; Dodson, Phillip T.; Entwistle, Kevin C.
The purpose of this study was to investigate the effects of textisms (i.e., abbreviated spellings, acronyms, and other shorthand notations) on learning, study time, and instructional perceptions in an online artificial intelligence instructional module. The independent variable in this investigation was experimental condition. For the control…
Sunal, Cynthia Szymanski; Karr, Charles L.; Sunal, Dennis W.
Students' conceptions of three major artificial intelligence concepts used in the modeling of systems in science (fuzzy logic, neural networks, and genetic algorithms) were investigated before and after a higher education science course. Students initially explored their prior ideas related to the three concepts through active tasks. Then,…
Dries, Monique Henriëtte van den
Artificial intelligence is an integrated part of our daily life and of many fields in research. In archaeology, however, it does not (yet) play an important role. In the past twenty years archaeologists have discussed the potentials of, in particular, expert systems. They have developed some valuabl
Rusu, Teodora; Gogan, Oana Marilena
This paper describes the use of an artificial intelligence method in copolymer network design. In the present study, we pursue a hybrid algorithm combining two research themes within the genetic design framework: a Kohonen neural network (KNN) path (forward problem) combined with a genetic algorithm path (backward problem). The Tabu Search method is used to improve the performance of the genetic algorithm path.
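The forward-path component named above, a Kohonen self-organizing map, can be sketched briefly. The data here are hypothetical stand-ins for normalized copolymer property pairs; the genetic-algorithm backward path and the Tabu Search refinement are not shown.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.random((200, 2))                  # e.g. normalized property pairs
nodes = rng.random((10, 2))                  # 1-D Kohonen map of 10 units

for t in range(1000):
    lr = 0.5 * (1 - t / 1000)                # decaying learning rate
    radius = 3.0 * (1 - t / 1000) + 0.5      # decaying neighborhood width
    x = data[rng.integers(len(data))]        # random training sample
    bmu = np.argmin(((nodes - x) ** 2).sum(axis=1))   # best matching unit
    dist = np.abs(np.arange(10) - bmu)                # distance along the map
    h = np.exp(-(dist ** 2) / (2 * radius ** 2))      # neighborhood kernel
    nodes += lr * h[:, None] * (x - nodes)            # pull units toward sample

print(np.round(nodes, 2))                    # trained codebook vectors
```

After training, each unit's codebook vector summarizes a region of the input space, so similar (hypothetical) copolymer property vectors map to nearby units, which is the clustering the forward problem relies on.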
In 1983 Gardner put forward the theory of "multiple intelligences", in which he argued that people vary in terms of eight types of intelligence. In 2001, the scholar Jeanette Littlemore proposed adding a ninth type to the theory. Standing on the shoulders of these giants, I argue that metaphoric intelligence is an important aspect of intelligence and that it can contribute to language learning success. It is thought to play a role in communicative competence and communication strategy usage. A number of activities designed to exploit and promote metaphoric intelligence in the language classroom are suggested.
El Ouahed, Abdelkader Kouider; Mazouzi, Amine [Sonatrach, Rue Djenane Malik, Hydra, Algiers (Algeria); Tiab, Djebbar [Mewbourne School of Petroleum and Geological Engineering, The University of Oklahoma, 100 East Boyd Street, SEC T310, Norman, OK, 73019 (United States)
In highly heterogeneous reservoirs, classical characterization methods often fail to detect the location and orientation of fractures. Recent applications of artificial intelligence to reservoir characterization have made this challenge tractable. Such a practice consists of seeking the complex relationship between a fracture index and geological and geomechanical drivers (facies, porosity, permeability, bed thickness, proximity to faults, and the slopes and curvatures of the structure) in order to obtain a fracture intensity map using fuzzy logic and neural networks. This paper shows the successful application of artificial intelligence tools, namely artificial neural networks and fuzzy logic, to characterize naturally fractured reservoirs. A 2D fracture intensity map and a fracture network map of a large block of the Hassi Messaoud field were developed using an artificial neural network and fuzzy logic. This was achieved by first building the geological model of permeability, porosity, and shale volume using stochastic conditional simulation. Then, applying geomechanical concepts, the first and second structural directional derivatives, the distance to the nearest fault, and the bed thickness were calculated throughout the entire area of interest. Two methods were then used to select the appropriate fracture intensity index. In the first method, well performance was used as the fracture index. In the second, a Fuzzy Inference System (FIS) was built, in which static and dynamic data were coupled to reduce uncertainty, resulting in a more reliable fracture index. The geological and geomechanical drivers were ranked against the corresponding fracture index for both methods using a fuzzy ranking algorithm. Only important and measurable data were selected to be mapped to the appropriate fracture index using a feed-forward Back-Propagation Neural Network (BPNN). The neural network was then used to obtain a fracture intensity
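The FIS step described above can be illustrated with a toy Mamdani-style system that combines one static driver (normalized distance to the nearest fault) and one dynamic indicator (a normalized well-performance index) into a fracture index. The membership shapes, rule base, and variable names are assumptions for illustration, not the paper's calibrated system.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b over the support [a, c]."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def fracture_index(fault_dist, perf_index):
    # Fuzzify the two (hypothetical) inputs, both normalized to [0, 1]
    near_fault = tri(fault_dist, -0.01, 0.0, 0.6)   # "close to a fault"
    high_perf = tri(perf_index, 0.4, 1.0, 1.01)     # "strong well response"
    # Rule 1: near fault AND high performance -> high fracture index (0.9)
    # Rule 2: far from fault AND low performance -> low fracture index (0.1)
    w1 = min(near_fault, high_perf)                 # min as fuzzy AND
    w2 = min(1 - near_fault, 1 - high_perf)
    # Weighted-average defuzzification of the two rule consequents
    return (0.9 * w1 + 0.1 * w2) / (w1 + w2 + 1e-9)

print(round(fracture_index(0.1, 0.9), 3))   # near a fault, productive well
print(round(fracture_index(0.9, 0.2), 3))   # far from faults, unproductive
```

In the workflow the abstract describes, an index like this, fused from static and dynamic data, would then serve as the training target for the BPNN that maps the geological and geomechanical drivers to fracture intensity.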