Wednesday, December 28, 2016

What is Artificial Intelligence?

Artificial intelligence is technology that implements human intellectual abilities, such as thinking and learning, on computers. Artificial intelligence is conceptually classified into strong AI and weak AI. Strong AI refers to artificial intelligence with self-awareness, capable of thinking and acting as freely as a person. It is also referred to as Artificial General Intelligence (AGI) because, like a human being, it can handle a wide range of tasks. Strong AI can be divided into humanoid artificial intelligence, which thinks and acts in the same way as humans, and non-humanoid artificial intelligence, which perceives and thinks in ways different from humans.
Weak AI refers to artificial intelligence without self-consciousness. It is mainly developed in forms specialized for a specific field and is used to supplement human limitations and increase productivity. AlphaGo, the Go-playing program, and Watson, which is used in the medical field, are representative examples. All of the artificial intelligence developed so far is weak AI; strong AI with a genuine self has not yet appeared.

Artificial Intelligence (AI)
The AI field has made a great deal of progress. In particular, great efforts have been devoted to artificial intelligence research in Japan and the United States in the fields of VLSI (Very Large Scale Integration) and programming. Many researchers believe that high-density integrated circuit technology can provide the hardware foundation needed to create truly intelligent machines.

Currently, intelligent computers are built with an internal structure capable of parallel processing. Parallel processing refers to the simultaneous execution of several independent operations, such as memory, logic, and control, using several integrated circuits that place millions of elements of central processing units (CPUs), storage, and I/O devices on one small silicon chip.

The conventional digital computer performs these operations in series, that is, in sequence: a separate input circuit stores data in each storage device, one piece of information at a time is transferred to the central processing unit and processed, and the result is sent to an external output device. The general assessment is that even the fastest computer yet developed, capable of roughly 10 billion operations per second, is still far too slow to imitate the human mind's almost instantaneous association and generalization.
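
To make the contrast concrete, here is a minimal Python sketch that runs the same three independent operations once in series and once in parallel; the task names and the half-second workload are invented for illustration.

    import time
    from concurrent.futures import ThreadPoolExecutor

    # Three independent operations, standing in for the memory, logic,
    # and control tasks described above (hypothetical workloads).
    def task(name):
        time.sleep(0.5)          # simulate half a second of work
        return f"{name} done"

    tasks = ["memory", "logic", "control"]

    # Serial execution: one operation at a time, as in a conventional
    # digital computer.
    start = time.time()
    serial_results = [task(t) for t in tasks]
    print(f"serial:   {time.time() - start:.2f}s", serial_results)

    # Parallel execution: the same independent operations run
    # simultaneously on separate workers.
    start = time.time()
    with ThreadPoolExecutor(max_workers=3) as pool:
        parallel_results = list(pool.map(task, tasks))
    print(f"parallel: {time.time() - start:.2f}s", parallel_results)

With three half-second tasks, the serial run takes about 1.5 seconds while the parallel run finishes in about 0.5 seconds.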

Artificial intelligence research

Artificial intelligence research began shortly after the development of the modern digital computer in the 1940s. Early researchers quickly spotted the potential of computing devices as a means of automating thought processes. Over the years, it was demonstrated that computer programs could effectively perform logically complex tasks such as theorem proving and playing chess.
However, success in these areas was due to the computer's ability to repeat coded operations at extremely high speed rather than to any capacity for higher mental functions. By the end of the 1980s, computers that could imitate human intellectual activity had still not been developed. Nevertheless, artificial intelligence research achieved several useful results in fields related to decision making, language understanding, and shape recognition.

Expert System

Computers can use knowledge-based software systems to make decisions that solve complex problems, not just arithmetic ones. These knowledge-based software systems are called expert systems. An expert system consists of hundreds or thousands of logical rules of the 'If-Then' form. These rules are built from knowledge obtained from experts in a particular field; in other words, the system imitates the knowledge and reasoning of those experts so that the computer can take over specialized human work.

MYCIN, an interactive program, is a representative example of a heuristic program built on an expert system. MYCIN determines from blood tests what type of bacterium has caused an infection and decides how to treat it, helping doctors reach a diagnosis. A computer running MYCIN first estimates a possible diagnosis of the patient's condition based on the known symptoms. It then reaches a conclusion by checking whether this interim diagnosis agrees with all known facts about the behavior of the microorganisms involved in those symptoms. Once the computer identifies the cause of the infection, it surveys the types of antibiotics available and offers a prescription, narrowing it down to as few alternatives as possible.
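
The following is a minimal Python sketch of how such If-Then rules can be chained; the bacterial facts, rules, and antibiotic recommendations are invented for illustration and are not MYCIN's actual knowledge base.

    # A toy If-Then rule engine in the spirit of an expert system.
    RULES = [
        # (conditions that must all hold, conclusion to add)
        ({"gram_negative", "rod_shaped"}, "likely_e_coli"),
        ({"gram_positive", "clustered"}, "likely_staphylococcus"),
        ({"likely_e_coli"}, "recommend_gentamicin"),
        ({"likely_staphylococcus"}, "recommend_penicillin"),
    ]

    def forward_chain(facts):
        """Repeatedly fire rules whose conditions are satisfied
        until no new conclusions can be drawn."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    # Facts gleaned from a (hypothetical) blood test.
    print(forward_chain({"gram_negative", "rod_shaped"}))
    # -> includes 'likely_e_coli' and 'recommend_gentamicin'

MYCIN itself reasoned backward from candidate diagnoses and attached certainty factors to its rules, but the core idea of matching rule conditions against known facts is the same.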

Natural language processing

Natural language processing is an artificial intelligence technology that allows a computer to understand spoken commands given in a human language such as English. The development of natural language processing programs is another area in which progress has continued.

Most of the natural language processing software developed so far has been built for querying databases in specific fields. Such a software system contains grammatical rules and information about common misuses of grammar, as well as a vast amount of information about the meaning of terms within its defined field.
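
As a rough illustration of this kind of question answering, the sketch below maps a single grammatical pattern onto a lookup in a tiny database; the pattern, the database contents, and the fallback answers are all invented for this example.

    import re

    # A tiny stand-in for a domain database (hypothetical contents).
    CAPITALS = {"france": "Paris", "japan": "Tokyo", "kenya": "Nairobi"}

    def answer(question):
        # One grammar rule: "What is the capital of <country>?"
        match = re.search(r"capital of (\w+)", question.lower())
        if match:
            country = match.group(1)
            return CAPITALS.get(country, f"I have no entry for {country}.")
        return "I only understand questions about capitals."

    print(answer("What is the capital of France?"))   # Paris
    print(answer("What is the capital of Brazil?"))   # no entry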

Image recognition

The ability to distinguish graphic patterns and images is also related to artificial intelligence. Image recognition by computer programs involves both perception and abstraction. When a remote device connected to the computer reads an image and converts it into a digital pulse pattern, that pattern is compared in turn with the pulse patterns stored in the computer's memory.

The stored patterns are the geometric shapes and forms the computer has been programmed to recognize. The computer processes the incoming digital pulse pattern continuously and rapidly, automatically isolating the relevant features. In this process, unnecessary signals are removed, and if a shape deviates from a predetermined threshold, it is regarded as a new entity and added to memory.
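
A minimal Python sketch of this matching-with-threshold idea follows; the stored 3x3 binary shapes and the pixel-difference threshold are invented for illustration, standing in for the digitized pulse patterns described above.

    # Stored shapes the program has been "taught" to recognize.
    STORED = {
        "square":   [1,1,1, 1,0,1, 1,1,1],
        "vertical": [0,1,0, 0,1,0, 0,1,0],
    }
    THRESHOLD = 2  # max number of differing pixels to count as a match

    def distance(a, b):
        """Count the pixels where two 3x3 binary shapes differ."""
        return sum(x != y for x, y in zip(a, b))

    def recognize(shape):
        # Compare the input against each stored shape in turn.
        best = min(STORED, key=lambda name: distance(shape, STORED[name]))
        if distance(shape, STORED[best]) <= THRESHOLD:
            return best
        # Deviates from everything known: treat it as a new entity.
        STORED[f"entity_{len(STORED)}"] = shape
        return "new entity (added to memory)"

    print(recognize([1,1,1, 1,0,1, 1,1,0]))   # close to "square"
    print(recognize([1,0,0, 0,1,0, 0,0,1]))   # diagonal: new entity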

Computer-assisted image recognition technology has been applied in various scientific fields. In astronomy, it is used to increase the resolution of images of distant planets and other astronomical objects photographed by unmanned probes. There are also robotic devices with shape recognition capability; developed for industrial use, they are mainly employed to inspect and sort finished products. In recent years, programs that let computers recognize and categorize images or pictures using machine learning and deep learning have also been introduced.
