Thursday, February 28, 2019

Artificial Intelligence and Learning Computers

Presented by S. DEEPAKKUMAR

Abstract

The term artificial intelligence is used to describe a property of machines or programs: the intelligence that the system demonstrates. Among the traits that researchers hope machines will exhibit are reasoning, knowledge, planning, learning, communication, perception, and the ability to move and manipulate objects. Constructing robots that perform dexterous tasks has always been a strongly motivating factor for the science and technology of information processing. Unlike philosophy and psychology, which are also concerned with intelligence, AI strives to build intelligent entities such as robots as well as to understand them. Although no one can predict the future in detail, it is clear that computers with human-level intelligence (or better) would have a huge impact on our everyday lives and on the future course of civilization.

Neural networks have been proposed as an alternative to symbolic artificial intelligence for constructing intelligent systems. They are motivated by computation in the brain: small threshold computing elements, when put together, produce powerful information-processing machines. In this paper, we put forth the foundational ideas of artificial intelligence and important concepts in search techniques, knowledge representation, language understanding, machine learning, neural computing and other such disciplines.

Artificial Intelligence

Starting from a modest but over-ambitious effort in the late 50s, AI has grown through its share of joys, disappointments and self-realizations. AI is a science that deals with the creation of machines which can think like humans and act rationally, and its goal is to automate such behaviour in machines. AI is a very vast field, which spans many application domains like language processing, image processing, resource scheduling, prediction, diagnosis etc.
It also spans many types of technologies like heuristic search, neural networks and fuzzy logic; perspectives like solving complex problems and understanding human cognitive processes; and disciplines like computer science, statistics, psychology, etc.

DEFINITION OF INTELLIGENCE & THE TURING TEST

The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory definition of intelligence. Turing defined intelligent behavior as the ability to achieve human-level performance in all cognitive tasks, sufficient to fool an interrogator. Roughly speaking, the test he proposed is that the computer should be interrogated by a human via a teletype, and it passes the test if the interrogator cannot tell whether there is a computer or a human at the other end.

The related Church-Turing thesis states that any effective procedure (or algorithm) can be implemented by a Turing machine. Turing machines are abstract mathematical entities composed of a tape, a read-write head, and a finite-state machine. The head can either read or write symbols onto the tape, making the tape basically an input-output device. The head can change its position by moving left or right. The finite-state machine is a memory/central processor that keeps track of which of finitely many states it is currently in. Knowing its current state, the finite-state machine can determine which state to change to next, what symbol to write onto the tape, and which direction the head should move.

Requirement of an Artificial Intelligence system

No AI system can be called intelligent unless it learns and reasons like a human. Reasoning derives new information from given information.

Areas of Artificial Intelligence

Knowledge Representation

The importance of knowledge representation was realized during the machine translation efforts of the early 1950s. Dictionary look-up and word replacement proved a tedious job, and there was an ambiguity and ellipsis problem, i.e.
many words have different meanings, so having a vocabulary for translation was not enough. One of the major challenges in this field is that a word can have more than one meaning, which can result in ambiguity. E.g., consider the following sentence: "The spirit is strong, but the flesh is weak." When an AI system was made to translate this sentence into Russian and then back into English, the following output was observed: "The wine is strong, but the meat is rotten."

Thus we come across two main obstacles. First, it is not easy to take informal knowledge and state it in the formal terms required by logical notation, particularly when the knowledge is less than 100% certain. Secondly, there is a big difference between being able to solve a problem in principle and doing so in practice. Even problems with just a few dozen facts can exhaust the computational resources of any computer unless it has some guidance as to which reasoning steps to try first. A problem may or may not have a solution; this is why debugging is one of the most challenging jobs faced by programmers today. As the halting problem shows, it is impossible to create a program which can predict whether an arbitrary given program is going to terminate eventually or not.

Progress in this area came from algorithms built on the foundational development of vocabularies and dictionary entries, whose limitations were then identified. Later, formal systems were developed which contained axioms, rules and theorems, and an orderly form of representation emerged. For example, chess is a formal system. We use rules in our daily lives, and these rules accompany facts. Rules are used to construct an efficient expert system having artificial intelligence. Important components of such a system are Backward Chaining, i.e. trying to figure out the subject by reading the sentence backward and linking each word to another; Explanation Generation, i.e. generating an explanation of whatever the system has understood; and an Inference Engine, i.e.
drawing an inference or replying to queries about the problem.

Reasoning

Reasoning is using the stored information to answer questions and to draw new conclusions; it means drawing conclusions from observations. Reasoning in AI systems works on three principles, namely:

DEDUCTION: Given two events P and Q such that P implies Q, if P is true then Q is also true. E.g.: if it rains, we can't go for a picnic; it is raining, therefore we can't go for a picnic.

INDUCTION: Induction is a process wherein, after studying certain facts, we generalize to a conclusion. E.g.: Socrates is a man and Socrates is mortal; the same holds for every other man we observe; therefore all men are mortal.

ABDUCTION: P implies Q, but Q does not always imply P. E.g.: if it rains, we can't go for a picnic; but the fact that we cannot go for a picnic does not mean that it is raining. There can be other reasons as well.

Learning

The most important requirement for an AI system is that it should learn from its mistakes. The best way of teaching an AI system is by training and testing. Training involves teaching the basic principles involved in doing a job. The testing process is the real test of the knowledge acquired by the system, wherein we give it certain examples and test its intelligence. Examples can be positive or negative; negative examples are those which are near misses of the positive examples.

Natural Language Processing (NLP)

NLP can be defined as the processing of data in the form of natural language on the computer, i.e. making the computer understand the language a normal human being speaks. It deals with unstructured and semi-structured data formats, converting them into a clear, understandable form. The reasons to process natural language are largely that it is exciting and interesting, commercially important because of the sheer volume of data available online, and technically valuable because it eases computer-human interaction. NLP assists us in searching for information in a vast NL (natural language) database, and in analysis, i.e. extracting structural data from natural language.
It also assists in the generation of structured data and in the translation of text from one natural language to another, for example English to Hindi.

Application Spectrum of NLP

NLP provides writing and translation aids, helping humans produce natural language with proper spelling, grammar, style etc. It enables text mining, i.e. information retrieval, search engines, text categorization and information extraction. It also enables NL interfaces to databases and web software, and question answering with explanation in an expert system.

There are four processing levels in NLP:
1. Lexical: at the word level; it involves spelling errors.
2. Syntactic: at the structure level; acquiring knowledge about the grammar and structure of words and sentences. Effective representation and implementation of this allows effective handling of language with respect to grammar. This is usually implemented through a parser.
3. Semantic: at the meaning level.
4. Pragmatic: at the context level.

Hurdles

There are various hurdles in the field of NLP, especially in speech processing, which increase the complexity of the system. We know that no two people on earth have identical accents and pronunciations, and this difference in style of communication results in ambiguity. Another major problem in speech processing is the understanding of speech across word boundaries. This can be clearly understood from the following example: "I got a plate." / "I got up late."

Universal Networking Language (UNL)

This is a part of natural language processing. The key feature of a machine having artificial intelligence is its ability to communicate and interact with a human, and the only means for such communication and interaction is language. The language used by the machine should be understood by all humans; an example of such a language is English. UNL is an artificially developed language consisting of a universal word library, universal concepts, universal rules and universal attributes. The necessity of UNL arises because a computer needs the capability to process knowledge and recognize content.
Thus UNL becomes a platform for the computer to communicate and interact.

Vision (Visibility-Based Robot Path Planning)

Consider a moving robot. There are two things a robot has to think about and perform while moving from one place to another: 1. avoid collision with stationary and moving objects, and 2. find the shortest distance from source to destination.

One of the major problems is to find a collision-free path amidst obstacles for a robot from its starting position to its destination. To avoid collision, two things can be done, viz.: 1) reduce the object to be moved to a point, and 2) give the obstacles some extra space. This is called the Minkowski method of path planning. Recognizing an object and matching it against the contents of an image library is another method; it includes correspondence matching and depth understanding, edge detection using the idea of zero crossings, and stereo matching for distance estimation. For analysis, it also treats the robot as a point body.

The second major problem of path planning is to find the shortest path. The robot has to compute the Euclidean distance between the starting and ending points, and then form algorithms for computing visibility graphs. These algorithms have certain rules associated with them:
- Join a smaller number of vertices to reduce complexity.
- Divide each object into triangles.
- Put a node in each triangle and join all of them.
- Remove unnecessary areas, because they might not contribute to the shortest path.
- Compute the minimal link path and proceed.

Still, the problem of deciding the shortest path remains. The robot might be a bulky, huge object, so it cannot always be treated as a point. Secondly, a robot is a mechanical body which cannot turn instantly, so it has to follow the procedure wait-walk-wait-turn-wait-walk, which is very long and so not feasible.
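As a sketch of the shortest-path step over a visibility graph (not from the original paper), Dijkstra's algorithm can be run on the graph of mutually visible vertices. The three-node graph below, its vertex names and its edge lengths are invented purely for illustration; in practice the edges would be the collision-free straight segments between obstacle corners.

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path over a weighted graph given as {node: [(neighbor, cost), ...]}."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # walk the predecessor chain back from target to source
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), dist[target]

# Hypothetical visibility graph: S = start, G = goal, A and B = obstacle
# corner vertices; an edge exists only where the straight segment is
# collision-free, weighted by its Euclidean length.
visibility = {
    "S": [("A", 3.0), ("B", 4.0)],
    "A": [("S", 3.0), ("G", 4.0)],
    "B": [("S", 4.0), ("G", 2.0)],
}
path, length = dijkstra(visibility, "S", "G")
print(path, length)  # prints ['S', 'B', 'G'] 6.0
```

The detour via B wins because the total edge length (4 + 2) is smaller than via A (3 + 4); minimizing the number of turns, as discussed next, is a separate criterion layered on top of this.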
Therefore the shortest distance should have the minimum number of turns associated with it. For path planning the robot has to take a snapshot of the area it is going to cover. This snapshot is processed in the ways mentioned above, and then the robot moves. But the view changes with every step taken, so the robot would have to redo the calculation at every step, which is very time-consuming and tedious. Experts therefore decided to make the robot take a snapshot of the viewable distance and decide the path. But this again becomes a problem, because the device used for viewing has a certain distance limitation. These experts then came to the conclusion that the robot should be given a fixed parameter, i.e. made to take a snapshot of a fixed distance, say 10 meters, analyze it and decide the shortest path.

Neural Networks

Neural networks are computational structures consisting of simple nodes, called units or processing elements, which are linked by weighted connections. A neural network maps input to output data in terms of its own internal connectivity. The term neural network derives from the obvious analogy with the nervous system of the human brain, with processing elements serving as nerve cells and connection weights equivalent to the variable synaptic strengths. Synapses are connections between neurons; they are not material connections, but minuscule gaps that allow electrical signals to jump across from neuron to neuron. Axons carry the signals out to the various synapses, dendrites carry them into the next neuron, and the cycle repeats.

Let us take an example of a neuron. It uses a simple computational technique which can be defined as follows:

y = 1 if Σ Wi·Xi ≥ θ, and y = 0 otherwise,

where θ is the threshold value, Wi are the weights and Xi are the inputs. Now this neuron can be trained to perform a particular logical operation like AND.
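Such a threshold unit and its training can be sketched in a few lines of Python (an illustrative implementation, not from the original paper); the zero initial weights and the learning rate of 1 are assumptions for the example, and the update used is the standard perceptron error-correction rule.

```python
# Sketch of a threshold unit, y = 1 if sum(Wi * Xi) >= theta else 0, trained
# with the perceptron error-correction rule to learn AND. Zero initial
# weights and a learning rate of 1 are illustrative assumptions.

AND_SAMPLES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def fire(w, theta, x):
    """Threshold unit: output 1 iff the weighted input sum reaches theta."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def train_perceptron(samples, epochs=100):
    w, theta = [0, 0], 0
    for _ in range(epochs):
        mistakes = 0
        for x, target in samples:
            err = target - fire(w, theta, x)   # -1, 0 or +1
            if err:
                mistakes += 1
                w = [wi + err * xi for wi, xi in zip(w, x)]
                theta -= err   # threshold treated as a weight on input -1
        if mistakes == 0:      # converged: guaranteed for linearly
            break              # separable functions such as AND
    return w, theta

w, theta = train_perceptron(AND_SAMPLES)
for x, target in AND_SAMPLES:
    print(x, fire(w, theta, x))   # reproduces the AND truth table
```

Run on a function that is not linearly separable, such as XOR, the same loop never reaches a mistake-free pass, which is exactly what the convergence theorem below excludes.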
The equivalent neural network simulation for the AND function is a single threshold unit with weights W1 and W2 and threshold θ.

Perceptron training convergence theorem

Whatever the initial choice of weights, the perceptron training algorithm (PTA) will eventually converge to correct weight values, provided the function being trained is linearly separable. The PTA treats the threshold as a weight on a constant input of -1, so the firing condition becomes:

Σ Wi·Xi + (-1)·θ ≥ 0

For the AND function (inputs A and B, output Y), the truth table and the constraints it places on the weights are:

A B Y
0 0 0   0·W1 + 0·W2 < θ
0 1 0   0·W1 + 1·W2 < θ
1 0 0   1·W1 + 0·W2 < θ
1 1 1   1·W1 + 1·W2 ≥ θ

These inequalities are simultaneously satisfiable (for example W1 = W2 = 1, θ = 1.5), so the PTA converges for AND. By contrast, for the function whose outputs read 0, 1, 1, 0 (XOR), the constraints

0·W1 + 0·W2 < θ
0·W1 + 1·W2 ≥ θ
1·W1 + 0·W2 ≥ θ
1·W1 + 1·W2 < θ

are contradictory (the middle two give W1 + W2 ≥ 2θ, while the first and fourth give θ > 0 and W1 + W2 < θ), so no single threshold unit can realize XOR: it is not linearly separable.

Conclusion

AI combined with various techniques in neural networks, fuzzy logic and natural language processing will be able to revolutionize the future of machines, transforming the mechanical devices that help humans into intelligent, rational robots having emotions. Expert systems like MYCIN can help doctors in diagnosing patients. AI systems can also help us make airline enquiries and bookings using speech rather than menus. Unmanned cars moving about the city would become reality with further advancements in AI systems. Also, with the advent of VLSI techniques, FPGA chips are being used in neural networks. The future of AI in making intelligent machines looks incredible, but some kind of spiritual understanding will have to be inculcated into the machines so that their decision making is governed by principles and boundaries.
