Artificial intelligence

Artificial intelligence (AI), in computer science, is both a discipline and a set of cognitive and intellectual capabilities expressed by computer systems or combinations of algorithms, whose purpose is the creation of machines that imitate human intelligence to perform tasks and that can improve as they gather information

The field emerged shortly after the Second World War with the development of the Turing test, while the phrase itself was coined in 1956 by computer scientist John McCarthy at the Dartmouth Conference

Today, artificial intelligence covers a wide variety of subfields

These range from general-purpose areas, such as learning and perception, to more specific ones such as speech recognition, playing chess, proving mathematical theorems, writing poetry and diagnosing diseases

Artificial intelligence synthesizes and automates tasks that are in principle intellectual and is therefore potentially relevant to any area of human intellectual activity

In this sense, it is a genuinely universal field

The architecture of an artificial intelligence and the processes by which it learns, improves and is deployed in an area of interest vary depending on the intended application

In general, however, these range from the execution of simple algorithms to the interconnection of complex artificial neural networks that attempt to replicate the neural circuits of the human brain and that learn through different paradigms such as machine learning, reinforcement learning, deep learning and supervised learning

On the other hand, the development and application of artificial intelligence in many aspects of daily life has also led to the creation of new fields of study such as roboethics and machine ethics that address aspects related to ethics in artificial intelligence

They are responsible for analyzing how advances in this type of technology would impact various areas of life, the responsible and ethical management that should be given to them, and the correct way for machines to proceed and the rules they should follow

Regarding its classification, artificial intelligence is traditionally divided into weak artificial intelligence, the only kind that currently exists, which carries out specific tasks, and general artificial intelligence, which would be an AI that exceeds human capabilities

Some experts believe that if this level is ever reached, it could lead to the emergence of a technological singularity, that is, a superior technological entity that would constantly improve itself, becoming uncontrollable by humans, giving rise to theories such as Roko's basilisk

Some of the best-known artificial intelligences currently in use around the world include AI in the field of health, virtual assistants such as Alexa, Google Assistant or Siri, automatic translators such as Google Translate and DeepL, recommendation systems such as YouTube's, chess and game engines such as Stockfish and AlphaZero, chatbots such as ChatGPT, AI art generators such as Midjourney, DALL-E, Leonardo and Stable Diffusion, and even autonomous driving systems such as Tesla Autopilot

Likewise, artificial intelligence is increasingly being developed on digital platforms, evolving and creating new tools, such as SIVIUM, a labor platform launched in 2023 through which a person can apply automatically to job offers across job portals, without having to review each offer and send a CV one by one

Description

In 2019, the UNESCO World Commission on Ethics of Scientific Knowledge and Technology (COMEST) defined artificial intelligence as:

Field that involves machines capable of imitating certain functionalities of human intelligence, including characteristics such as perception, learning, reasoning, problem solving, linguistic interaction, and even the production of creative works

Colloquially, the locution artificial intelligence is applied when a machine imitates the cognitive functions that humans associate with human competencies, for example: perceiving, reasoning, learning and solving problems

Andreas Kaplan and Michael Haenlein define artificial intelligence as:

The ability of a system to correctly interpret external data, to learn from such data, and to use that knowledge to achieve specific tasks and goals through flexible adaptation

As machines become increasingly capable, technology once thought to require intelligence is removed from the definition

Marvin Minsky, one of the creators of AI, spoke of the term artificial intelligence as a suitcase word because a variety of elements can be put in it

For example, optical character recognition is no longer perceived as an example of artificial intelligence, having become a common technology

Technological advances still classified as artificial intelligence include autonomous driving systems and programs capable of playing chess or Go

Artificial intelligence is a new way of solving problems that includes expert systems and the management and control of robots and processors, and that attempts to integrate knowledge into such systems; in other words, an intelligent system capable of writing its own program

An expert system is defined as a programming structure capable of storing and using knowledge about a specific area, which translates into its learning capacity

In the same way, AI can be considered as the ability of machines to use algorithms, learn from data and use what they learn in making decisions just as a human being would do

According to Takeyas (2007):

AI is a branch of computer science responsible for studying computing models capable of carrying out activities typical of human beings based on two of their primary characteristics: reasoning and behavior

In 1956, John McCarthy coined the term artificial intelligence, and defined it as:

The science and ingenuity of making intelligent machines, especially intelligent computer programs

There are also different types of perceptions and actions, which can be obtained and produced, respectively, by physical and mechanical sensors in machines, by electrical or optical pulses in computers, and by bit inputs and outputs of software and its environment

Several examples are in the area of system control, automatic planning, the ability to respond to diagnostic and consumer inquiries, handwriting recognition, speech recognition and pattern recognition

AI systems are currently part of the routine in fields such as economics, medicine, engineering, transportation, communications and the military, and have been used in a wide variety of computer programs, strategy games, such as computer chess, and other video games

Types

Stuart J. Russell and Peter Norvig differentiate several types of artificial intelligence:

  • Systems that think like humans: these systems try to emulate human thinking, for example artificial neural networks; this covers the automation of activities that we link with human thought processes, such as decision making, problem solving and learning
  • Systems that act like humans: these systems try to act like humans, that is, they imitate human behavior, for example robotics (the study of how to get computers to perform tasks that, at the moment, humans do better)
  • Systems that think rationally: these systems try to imitate, with logic (ideally), the rational thinking of human beings, for example expert systems (the study of the computations that make it possible to perceive, reason and act)
  • Systems that act rationally: these systems try to emulate human behavior rationally, for example intelligent agents (the study of intelligent behavior in artifacts)

Generative artificial intelligence

Generative artificial intelligence is a type of artificial intelligence system capable of generating text, images or other media in response to commands

Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics

Notable generative AI systems include ChatGPT (and its Microsoft Copilot variant), a chatbot built by OpenAI using its foundational large language models GPT-3 and GPT-4; and Bard, a chatbot built by Google using Gemini

Other generative AI models include AI art systems like Stable Diffusion, Midjourney, and DALL-E

Strong artificial intelligence

Strong Artificial Intelligence (SAI) is a hypothetical type of artificial intelligence that equals or exceeds average human intelligence

If it became a reality, an SAI could learn to perform any intellectual task that humans or animals can perform

Alternatively, SAI has been defined as an autonomous system that exceeds human capabilities in most economically valuable tasks

Some argue that it could be possible in years or decades; others, that it could take a century or more; and a minority believes that it may never be achieved

There is debate about the exact definition of SAI and whether modern large language models (LLMs), such as GPT-4, are early but incomplete forms of SAI

Explainable artificial intelligence

Explainable artificial intelligence refers to methods and techniques in the application of artificial intelligence technology by which humans are able to understand the decisions and predictions made by artificial intelligence

Friendly artificial intelligence

Friendly artificial intelligence is a hypothetical strong AI that can have a positive rather than a negative effect on humanity

Friendly is used in this context as technical terminology designating agents that are safe and useful, not necessarily those that are friendly in the colloquial sense

The concept is invoked primarily in discussions of recursively self-improving artificial agents that rapidly explode in intelligence, with the argument that this hypothetical technology could have a large, rapid and difficult-to-control impact on human society

Multimodal artificial intelligence

Multimodal artificial intelligence is a type of artificial intelligence that can process and integrate data from different modalities, such as text, images, audio and video, to obtain a more complete and contextualized understanding of a situation

Multimodal artificial intelligence is inspired by the way humans use multiple senses to perceive and interact with the world, offering a more natural and intuitive way to communicate with technology

Quantum artificial intelligence

Quantum artificial intelligence is an interdisciplinary field that focuses on building quantum algorithms to improve computational tasks within AI, including subfields such as machine learning

There is evidence that shows a possible quantum quadratic advantage in fundamental AI operations

The qubit is the basic unit used to represent quantum information
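
The qubit just mentioned can be sketched directly in code. A minimal illustration, assuming the standard textbook representation rather than any particular quantum library: a qubit as a pair of complex amplitudes, with measurement probabilities given by the Born rule

```python
import math

# A qubit can be represented by two complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1; measuring it yields 0 with probability |alpha|^2
# and 1 with probability |beta|^2 (the Born rule).
def measurement_probabilities(alpha: complex, beta: complex) -> tuple:
    norm = abs(alpha) ** 2 + abs(beta) ** 2  # renormalize defensively
    return abs(alpha) ** 2 / norm, abs(beta) ** 2 / norm

# An equal superposition (the result of a Hadamard gate on |0>):
# both outcomes are equally likely
h = 1 / math.sqrt(2)
p0, p1 = measurement_probabilities(h, h)  # each is approximately 0.5
```

Because amplitudes are complex numbers rather than plain probabilities, they can interfere, which is the property quantum algorithms exploit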

Schools of thought

AI is divided into two schools of thought:

Conventional artificial intelligence

It is also known as symbolic-deductive AI

It is based on the formal and statistical analysis of human behavior when faced with different problems:

  • Case-based reasoning: helps to make decisions while solving certain specific problems; besides being very important, these systems must function correctly
  • Expert systems: infer a solution from prior knowledge of the context in which certain rules or relationships are applied
  • Bayesian networks: propose solutions through probabilistic inference
  • Behavior-based artificial intelligence: this intelligence is autonomous, that is, it can self-regulate and control itself in order to improve
  • Smart process management: facilitates complex decision making, proposing a solution to a given problem just as a specialist in that activity would
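
The probabilistic inference performed by Bayesian networks can be sketched in a few lines. The network below (rain influences both the sprinkler and wet grass) and all its numbers are hypothetical, chosen only to illustrate inference by brute-force enumeration of the joint distribution

```python
# Toy Bayesian network: Rain -> Sprinkler, and both -> WetGrass.
# All probabilities are illustrative, not taken from any real dataset.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},   # keyed by rain value
               False: {True: 0.4, False: 0.6}}
P_wet = {  # keyed by (sprinkler, rain)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def p_rain_given_wet() -> float:
    """P(Rain=True | WetGrass=True), by summing over the joint distribution."""
    joint = {}
    for rain in (True, False):
        total = 0.0
        for sprinkler in (True, False):
            total += (P_rain[rain] * P_sprinkler[rain][sprinkler]
                      * P_wet[(sprinkler, rain)])
        joint[rain] = total
    return joint[True] / (joint[True] + joint[False])
```

Enumeration is exponential in the number of variables; practical systems use smarter algorithms (variable elimination, sampling), but the principle is the same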

Computational artificial intelligence

Computational artificial intelligence (also known as subsymbolic-inductive AI) involves interactive development or learning (for example, interactive modifications of parameters in connection systems)

Knowledge is acquired on the basis of empirical facts

Computational artificial intelligence has a dual purpose

On the one hand, its scientific objective is to understand the principles that enable intelligent behavior (whether in natural or artificial systems) and, on the other, its technological objective is to specify the methods for designing intelligent systems

History

The expression artificial intelligence was formally coined in 1956 during the Dartmouth Conference, but by then the field had already been worked on for five years, during which many different definitions had been proposed, none of which had been fully accepted by the research community

AI is one of the most recent disciplines along with modern genetics

The most basic ideas date back to the ancient Greeks

Aristotle (384-322 BC) was the first to describe a set of rules capturing part of the functioning of the mind in reaching rational conclusions, and Ctesibius of Alexandria (c. 250 BC) built the first self-controlled machine, a water-flow regulator (rational, but without reasoning)

In 1315 Ramon Llull in his book Ars magna had the idea that reasoning could be carried out artificially

In 1840 Ada Lovelace foresaw the ability of machines to go beyond simple calculations and provided the first idea of what software would be

Leonardo Torres Quevedo (1852-1936) is considered one of the fathers of artificial intelligence and automation

In 1936 Alan Turing formally designed a Universal Machine that demonstrated the feasibility of a physical device to implement any formally defined computation

In 1943 Warren McCulloch and Walter Pitts presented their model of artificial neurons, which is considered the first work in the field, even though the term did not yet exist. The first important advances began in the early 1950s with the work of Alan Turing, from which science has gone through various situations

In 1955, Herbert Simon, Allen Newell and Joseph Carl Shaw developed the first problem-solving programming language, IPL-11

A year later they developed the Logic Theorist, which was capable of proving mathematical theorems

In 1956 the expression artificial intelligence was devised by John McCarthy, Marvin Minsky and Claude Shannon at the Dartmouth Conference, a meeting at which triumphalist ten-year predictions were made that were never fulfilled, leading to the almost total abandonment of research for fifteen years

In 1957 Newell and Simon continued their work with the development of the General Problem Solver (GPS)

GPS was a problem solving system

In 1958 John McCarthy developed LISP at the Massachusetts Institute of Technology (MIT)

Its name is derived from LISt Processor

LISP was the first language for symbolic processing

In 1959 Rosenblatt introduced the perceptron

In the late 1950s and early 1960s Robert K. Lindsay developed Sad Sam, a program for reading sentences in English and inferring conclusions from their interpretation

In 1963 Quillian developed semantic networks as a knowledge representation model

In 1964 Bertrand Raphael built the SIR (Semantic Information Retrieval) system which was capable of inferring knowledge based on information provided to it

Also in 1964, Daniel G. Bobrow developed STUDENT as his doctoral thesis

STUDENT was a program written in Lisp that read and solved algebra problems

In the mid-1960s, expert systems appeared, which predict the probability of a solution under a set of conditions

For example:

  • DENDRAL: initiated in 1965 by Buchanan, Feigenbaum and Lederberg, the first expert system, which assisted chemists in elucidating complex chemical structures
  • MACSYMA: which assisted engineers and scientists in solving complex mathematical equations

Later, between 1968 and 1970, Terry Winograd developed the SHRDLU system, which allowed interrogating and giving orders to a robot that moved within a world of blocks

In 1968 Marvin Minsky published Semantic Information Processing

Also in 1968 Seymour Papert, Danny Bobrow and Wally Feurzeig developed the LOGO programming language

In 1969 Alan Kay developed the Smalltalk language at Xerox PARC and it was published in 1980

In 1973 Alain Colmerauer and his research team at the University of Aix-Marseille created PROLOG (from the French PROgrammation en LOGique), a programming language widely used in AI

In 1973 Schank and Abelson developed scripts, pillars of many current techniques in artificial intelligence and computing in general

In 1974 Edward Shortliffe wrote his thesis with MYCIN, one of the best-known Expert Systems, which assisted doctors in the diagnosis and treatment of blood infections

In the 1970s and 1980s, the use of expert systems similar to MYCIN grew, such as R1/XCON, ABRL, PIP, PUFF, CASNET, INTERNIST/CADUCEUS, etc

Some shells remain in use to this day, such as EMYCIN, EXPERT and OPS5

In 1981 Kazuhiro Fuchi announced the Japanese project for the fifth generation of computers

In 1986 McClelland and Rumelhart published Parallel Distributed Processing (Neural Networks)

In 1988, object-oriented languages became established

In 1997 Garry Kasparov, world chess champion, lost to IBM's Deep Blue computer

In 2006 the field's 50th anniversary was celebrated with the Spanish-language congress 50 Years of Artificial Intelligence (Campus Multidisciplinar en Percepción e Inteligencia 2006)

In 2009, there were already intelligent therapeutic systems in development that allow detecting emotions to be able to interact with autistic children

In 2011, IBM developed a supercomputer called Watson, which beat the show's two top champions at Jeopardy! and won a $1 million prize that IBM then donated to charity

In 2016, a computer program beat the three-time European Go champion five to zero

Also in 2016, then-President Obama spoke about the future of artificial intelligence and technology

In chat services, chatbots have appeared that dialogue with people who do not realize that they are talking to a program

This suggests that the Turing test can be passed as it was originally formulated:

Artificial intelligence will exist when we are not able to distinguish between a human being and a computer program in a blind conversation

In 2016, AlphaGo, developed by DeepMind, defeated world champion Lee Sedol 4-1 in a Go competition

This event received a lot of media attention and marked a milestone in the history of this game

At the end of 2017, Stockfish, the chess engine then considered the best in the world with about 3,400 Elo points, was overwhelmingly defeated by AlphaZero, which knew only the rules of the game and had trained for just four hours by playing against itself

Anecdotally, many AI researchers argue that:

Intelligence is a program capable of being executed independently of the machine that executes it, computer or brain

In 2018, the first television with artificial intelligence was launched by LG Electronics, with a platform called ThinQ

In 2019, Google presented a Doodle that, with the help of artificial intelligence, pays tribute to Johann Sebastian Bach: the user adds a simple two-bar melody and the AI creates the rest

In 2020, the OECD (Organization for Economic Cooperation and Development) publishes the working document entitled Hello world: Artificial intelligence and its use in the public sector, aimed at government officials with the aim of highlighting the importance of AI and its practical applications in the government sphere

At the end of 2022, ChatGPT was launched, a generative artificial intelligence capable of writing texts and answering questions in many languages

Because the quality of its answers initially seemed close to human level, it generated global enthusiasm for AI, and ChatGPT reached more than 100 million users two months after its launch

Later, experts noted that ChatGPT provides misinformation in areas where it has no knowledge (“hallucinations”), which at first glance seems credible due to its polished wording

In 2023, AI-generated photos reached a level of realism that made them look like real photos

As a result, there was a wave of AI-generated “photos” that many viewers believed were real

An image generated by Midjourney stood out, showing Pope Francis in a stylish white winter coat

Social, ethical and philosophical implications

Faced with the possibility of creating machines endowed with intelligence, it became important to address the ethical question of machines, in order to ensure that no harm is caused to human beings, to other living beings, or even, according to some schools of thought, to the machines themselves

Thus emerged a broad and relatively recent field of study known as the ethics of artificial intelligence, generally divided into two branches: roboethics, which studies the actions of human beings towards robots, and machine ethics, which studies the behavior of robots towards human beings

The accelerated technological and scientific development of artificial intelligence that has occurred in the 21st century also has a significant impact on other fields

In the world economy, during the Second Industrial Revolution, a phenomenon known as technological unemployment was experienced, in which large-scale industrial automation of production processes replaces human labor

A similar phenomenon could occur with artificial intelligence, especially in processes involving human intelligence, as illustrated in Isaac Asimov's story The Fun They Had

In it, the author glimpses some of the effects that intelligent machines specialized in children's pedagogy, in place of human teachers, would have on schoolchildren

The same writer devised what are known today as the Three Laws of Robotics, which appeared for the first time in his 1942 story Runaround, where he established the following:

  • First Law: A robot will not harm a human being nor allow a human being to come to harm
  • Second Law: A robot must follow orders given by humans, except those that conflict with the First Law
  • Third Law: A robot must protect its own existence to the extent that this protection does not conflict with the First or Second Law

Other works of science fiction, particularly in cinema, have also explored these implications

Goals

Reasoning and problem solving

Early researchers developed algorithms that mimicked the step-by-step reasoning that humans use when solving puzzles or making logical deductions

By the late 1980s and 1990s, artificial intelligence research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics

These early algorithms proved insufficient for large reasoning problems because they ran into a “combinatorial explosion”: they became exponentially slower as the problems grew

In this way, it was concluded that humans rarely use the step-by-step deduction that early artificial intelligence research followed; instead, they solve most of their problems using quick, intuitive judgments

Knowledge representation

Knowledge representation and knowledge engineering are fundamental to classical artificial intelligence research

Some “expert systems” attempt to compile the knowledge that experts have in a specific field

Additionally, other projects attempt to assemble the “common sense knowledge” known to the average person into a database that contains extensive knowledge about the world

Among the topics that a common sense knowledge base would contain are: objects, properties, categories and relationships between objects; situations, events, states and time; causes and effects; and knowledge about knowledge (what we know about what other people know), among others

Planning

Another objective of artificial intelligence is for systems to be able to set goals and ultimately achieve them

To do this, they need a way to visualize the future, a representation of the state of the world, and be able to make predictions about how their actions will change it, in order to make decisions that maximize the utility (or “value”) of the available options

In classical planning problems, the agent can assume that it is the only system that acts in the world, which allows it to be sure of the consequences of its actions
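
A classical planner under exactly this assumption (a deterministic world with a single acting agent) can be sketched as a breadth-first search over states. The toy domain below, a robot that must pick up a key at position 2 and reach position 4, is hypothetical and chosen only for illustration

```python
from collections import deque

# States are (position, has_key) tuples; actions deterministically map one
# state to the next, as classical planning assumes.
def successors(state):
    pos, has_key = state
    moves = []
    if pos > 0:
        moves.append(("left", (pos - 1, has_key)))
    if pos < 4:
        moves.append(("right", (pos + 1, has_key)))
    if pos == 2 and not has_key:
        moves.append(("pick_up_key", (pos, True)))
    return moves

def plan(start, goal):
    """Return a shortest list of actions reaching `goal`, or None."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

steps = plan((0, False), (4, True))
```

Because every action has a single guaranteed outcome, the planner can simply search; the uncertainty discussed next is what breaks this simple picture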

However, if the agent is not the only actor, then it is required that the agent be able to reason under uncertainty.

This requires an agent that can not only evaluate its environment and make predictions, but also evaluate its predictions and adapt based on its evaluation

Multi-agent planning uses the cooperation and competition of many systems to achieve a given goal

Emergent behavior like this is used by evolutionary algorithms and swarm intelligence

Learning

Machine learning has been a fundamental concept of artificial intelligence research since the beginning of the field; it is the study of computer algorithms that improve automatically through experience

Unsupervised learning is the ability to find patterns in an input stream, without requiring a human to label the inputs first

Supervised learning includes classification and numerical regression, which requires a human to first label the input data

Classification is used to determine which category something belongs to and occurs after a program looks at several examples of input from various categories

Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change
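
A minimal sketch of regression as just described, assuming the simplest case: one input variable, fit by ordinary least squares using its closed-form solution

```python
# Fit y ≈ a*x + b to data by ordinary least squares: a is the covariance of
# x and y divided by the variance of x, and b follows from the means.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Toy data generated from y = 2x + 1, so the fit should recover a=2, b=1
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

The learned pair (a, b) is exactly a "function that describes the relationship between inputs and outputs" and can predict the output for unseen inputs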

Both classifiers and regression learners try to learn an unknown function

For example, a spam classifier can be seen as learning a function that assigns the text of an email to one of two categories, "spam" or "non-spam"
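
The spam/non-spam function described above can be sketched as a naive Bayes classifier. The tiny training corpus below is hypothetical, and real systems use far more data and richer features; this only illustrates learning a text-to-category function from labeled examples

```python
import math
from collections import Counter

# Hypothetical labeled training examples
train = [
    ("win money now", "spam"),
    ("cheap money offer", "spam"),
    ("meeting agenda attached", "non-spam"),
    ("lunch tomorrow with the team", "non-spam"),
]

counts = {"spam": Counter(), "non-spam": Counter()}
label_docs = Counter()
for text, lab in train:
    label_docs[lab] += 1
    counts[lab].update(text.split())

vocab = {w for c in counts.values() for w in c}

def classify(text: str) -> str:
    """Pick the label maximizing log P(label) + sum of log P(word|label)."""
    scores = {}
    for lab in counts:
        total = sum(counts[lab].values())
        score = math.log(label_docs[lab] / sum(label_docs.values()))
        for word in text.split():
            # add-one (Laplace) smoothing so unseen words get nonzero mass
            score += math.log((counts[lab][word] + 1) / (total + len(vocab)))
        scores[lab] = score
    return max(scores, key=scores.get)

label = classify("cheap money now")
```

The learned function maps any email text to one of the two categories, exactly as described in the preceding paragraph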

Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization

The field is constantly evolving, and tools like ChatGPT are at the center of this transformation

While many people see ChatGPT as an opportunity to improve their business or personal experience, others are skeptical about its implementation

Natural language processing

Natural language processing allows machines to read and understand human language

A sufficiently effective natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as news texts

Some simple applications of natural language processing include information retrieval, text mining, question answering, and machine translation

Many approaches use word frequencies to construct syntactic representations of text

"Keyword detection" search strategies are popular and scalable but suboptimal: a search query for "dog" can only match documents containing the literal word "dog", and misses a document with the word "poodle"
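
A minimal sketch of this limitation, using a hypothetical two-document corpus; the tiny synonym table stands in for the statistical methods that real systems use

```python
# Two toy documents: a literal keyword search for "dog" finds only the first
docs = {
    1: "my dog chases the ball",
    2: "the poodle won the show",
}

def keyword_search(query: str):
    """Return ids of documents containing the literal query word."""
    return [i for i, text in docs.items() if query in text.split()]

# Hypothetical synonym table, a stand-in for learned semantic similarity
SYNONYMS = {"dog": {"dog", "poodle", "terrier"}}

def expanded_search(query: str):
    """Match any word semantically related to the query."""
    terms = SYNONYMS.get(query, {query})
    return [i for i, text in docs.items() if terms & set(text.split())]

literal = keyword_search("dog")    # misses the poodle document
expanded = expanded_search("dog")  # finds both documents
```

Hand-built synonym tables do not scale, which is why statistical approaches, discussed next, learn such relationships from data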

Statistical language processing approaches can combine all of these strategies, as well as others, and often achieve acceptable accuracy at the page or paragraph level

Beyond semantic processing, the ultimate goal of natural language processing is to incorporate a full understanding of common-sense reasoning

In 2019, transformer-based deep learning architectures could generate coherent text

Perception

Machine perception is the ability to use input from sensors (such as visible or infrared cameras, microphones, wireless signals, and lidar, sonar, radar, and touch sensors) to understand aspects of the world

Applications include voice recognition, facial recognition and object recognition

Computer vision is the ability to analyze visual information, which is often ambiguous

For example, a fifty-meter-tall giant pedestrian very far away can produce the same pixels as a normal-sized pedestrian nearby

This requires artificial intelligence to judge the relative probability and plausibility of different interpretations

For example, by defining and using an “object model” to conclude that fifty-meter pedestrians do not exist
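
The ambiguity can be made concrete with a pinhole camera model: apparent image height is proportional to real height divided by distance, so very different scenes can project to identical pixels. The focal length in pixels below is an arbitrary illustrative value

```python
# Pinhole camera model: apparent height (px) = focal length (px) * real
# height / distance. The focal length here is an arbitrary example value.
FOCAL_PX = 1000.0

def apparent_height_px(real_height_m: float, distance_m: float) -> float:
    return FOCAL_PX * real_height_m / distance_m

near_pedestrian = apparent_height_px(1.8, 10.0)  # 1.8 m person, 10 m away
far_giant = apparent_height_px(50.0, 277.78)     # 50 m "giant", ~278 m away
# Both project to roughly the same image height, so pixels alone
# cannot distinguish the two interpretations
```

An object model encoding "pedestrians are about 1.5-2 m tall" resolves the ambiguity in favor of the nearby, normal-sized interpretation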