PHILOSOPHY OF ARTIFICIAL INTELLIGENCE

Philosophy
Main article: Philosophy of artificial intelligence
Philosophical discussions have traditionally tried to answer the question of what intelligence is and how intelligent machines can be created.[375] Another significant area of interest has been whether or not machines can be conscious, and the resultant ethical considerations.[376] Numerous other areas of philosophy are applicable to AI, including epistemology and free will.[377] Fast-paced progress has heightened public debate on the philosophy and ethics of AI.[376]

Defining artificial intelligence
See also: Turing test, Intelligent agent, Dartmouth workshop, and Synthetic intelligence
In 1950, Alan Turing wrote, "I propose to consider the question, 'Can machines think?'"[378] He suggested changing the question from whether a machine "thinks" to "whether or not it is possible for machinery to show intelligent behaviour".[378] He devised the Turing test, which measures a machine's ability to imitate human conversation.[342] Since we can only observe the machine's behaviour, it does not matter whether it is "really" thinking or literally has a "mind". Turing notes that we cannot determine these things about other people either, but "it is usual to have a polite convention that everyone thinks."[379]

The Turing test can provide some evidence of intelligence, but it penalizes intelligent behaviour that is not human-like.[380]
Russell and Norvig agree with Turing that intelligence must be defined in terms of external behavior, not internal structure.[1] They object, however, that the test requires the machine to imitate humans. "Aeronautical engineering texts," they write, "do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.'"[381] AI founder John McCarthy agreed, writing that "Artificial intelligence is not, by definition, simulation of human intelligence".[382]

McCarthy defines intelligence as "the computational part of the ability to achieve goals in the world".[383] Another AI founder, Marvin Minsky, similarly defines it as "the ability to solve hard problems".[384] A leading AI textbook defines it as the study of agents that perceive their environment and take actions that maximize their chances of achieving defined goals.[1] These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the "intelligence" of the machine, and no other philosophical discussion is required, or may not even be possible.
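
To make the agent-based definition above concrete, the sketch below shows a minimal, hypothetical "rational agent" step in Python: the agent receives a percept and picks whichever available action scores best against its goal. The names (`rational_agent_step`, `goal_score`) and the thermostat scenario are invented for illustration, not taken from any particular textbook or library.

```python
from typing import Any, Callable, Iterable

def rational_agent_step(percept: Any,
                        actions: Iterable[Any],
                        goal_score: Callable[[Any, Any], float]) -> Any:
    """Choose the action whose estimated outcome best serves the goal,
    given the current percept."""
    return max(actions, key=lambda action: goal_score(percept, action))

# Toy usage: a thermostat-like agent whose goal is a 21 degree C room.
percept = {"room_temp": 18.0}
actions = ["heat", "cool", "do_nothing"]

def goal_score(p, a):
    effect = {"heat": 2.0, "cool": -2.0, "do_nothing": 0.0}[a]
    return -abs((p["room_temp"] + effect) - 21.0)  # closer to 21 C scores higher

print(rational_agent_step(percept, actions, goal_score))  # -> "heat"
```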

Another definition has been adopted by Google,[385] a major practitioner in the field of AI. This definition stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence.

Others argue that, in practice, the term AI is imprecise and difficult to define, with contention over whether classical algorithms should be categorised as AI,[386] and with many companies during the early 2020s AI bubble using the term as a marketing buzzword, often even when they did "not actually use AI in a material way".[387]

Assessing approaches to AI
No settled unifying theory or paradigm has guided AI research for most of its history.[aa] The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks"). This approach is mostly sub-symbolic, soft and narrow. Critics argue that the questions raised by the alternative approaches discussed below may have to be revisited by future generations of AI researchers.



Symbolic AI and its limitations
Symbolic AI (or "GOFAI")[389] simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. They were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."[390]

However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object, or commonsense reasoning. Moravec's paradox is the discovery that high-level "intelligent" tasks were easy for AI, but low-level "instinctive" tasks were extremely difficult.[391] Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation rather than explicit symbolic knowledge.[392] Although his arguments were ridiculed and ignored when first presented, AI research eventually came to agree with him.[ab][16]

The issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue that continuing research into symbolic AI will still be necessary to attain general intelligence,[394][395] in part because sub-symbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.

Neat vs. scruffy
Main article: Neats and scruffies
"Neats" believe that intelligent behavior is explained in terms of simple, elegant principles (e.g., logic, optimization, or neural networks). "Scruffies" anticipate that it must involve the solution of a large number of loosely coupled problems. Neats justify their programs with theoretical rigor, scruffies use mostly incremental testing to determine whether they work. This problem was debated actively during the 1970s and 1980s,[396] but ultimately was regarded as irrelevant. Contemporary AI has aspects of both.

Soft and hard computing
Main article: Soft computing
Finding a provably correct or optimal solution is intractable for many important problems.[15] Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s, and most successful AI programs in the 21st century are examples of soft computing with neural networks.
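
As a small illustration of one soft-computing technique mentioned above, the hypothetical Python sketch below uses a fuzzy membership function: rather than a hard true/false cutoff, a temperature counts as "hot" to a degree between 0 and 1. The breakpoints (20 °C and 35 °C) are arbitrary choices for the example.

```python
def fuzzy_hot(temp_c: float) -> float:
    """Degree of membership in the fuzzy set 'hot', from 0.0 to 1.0."""
    if temp_c <= 20.0:
        return 0.0
    if temp_c >= 35.0:
        return 1.0
    return (temp_c - 20.0) / 15.0    # linear ramp between the breakpoints

for t in (15, 25, 30, 40):
    print(t, round(fuzzy_hot(t), 2))   # 0.0, 0.33, 0.67, 1.0
```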

Narrow vs. general AI
Main articles: Weak artificial intelligence and Artificial general intelligence
AI researchers disagree about whether to pursue the goals of artificial general intelligence and superintelligence directly, or to solve as many specific problems as possible (narrow AI) in the hope that these solutions will indirectly contribute to the field's long-term goals.[397][398] General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable success by working on specific problems with specific solutions. The sub-field of artificial general intelligence studies this area exclusively.

Machine consciousness, sentience, and mind
Main articles: Philosophy of artificial intelligence and Artificial consciousness
It is an open question in the philosophy of mind whether a machine can have a mind, consciousness and mental states in the same sense that human beings do. This issue concerns the internal states of the machine, rather than its external behavior. Mainstream AI research considers the issue irrelevant because it does not affect the goals of the field: building machines that can solve problems using intelligence. Russell and Norvig add that "[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on."[399] However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction.

Consciousness
Main articles: Hard problem of consciousness and Theory of mind
David Chalmers identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness.[400] The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels, or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). Human information processing is easy to explain; human subjective experience is difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of vision are red, but it is not clear what would be required for that person to know what red looks like.[401]

Computationalism and functionalism
Main articles: Computational theory of mind and Functionalism (philosophy of mind)
Computationalism is the view within the philosophy of mind that the human mind is an information-processing system and that thinking is a form of computing. Computationalism holds that the relationship between mind and body is similar or identical to the relationship between software and hardware, and thus may offer a solution to the mind–body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was first proposed by philosophers Jerry Fodor and Hilary Putnam.[402]

Philosopher John Searle characterized this position as "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." Searle counters this assertion with his Chinese room argument, which attempts to show that even a computer capable of perfectly simulating human behavior would not have a mind.[406]

AI welfare and rights
It is difficult or impossible to reliably assess whether an advanced AI is sentient (capable of experiencing feelings), and if so, to what degree.[407] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals.[408][409] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights.[408] Robot rights are also sometimes proposed as a practical way of integrating autonomous agents into society.[410]

In 2017, the European Union debated granting "electronic personhood" to some of the most capable AI systems. Similar to the legal status of corporations, it would have conferred rights but also responsibilities.[411] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative future scenarios. They also noted that robots lack the autonomy to take part in society on their own.[412][413]

Progress in AI has increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be especially easy to deny. They warn that this may become a moral blind spot comparable to slavery or factory farming, potentially leading to widespread suffering if sentient AI is created and carelessly exploited.



