Ethics of AI
Ethics
Main article: Ethics of artificial intelligence
AI has potential benefits and potential risks. AI may be able to advance science and find solutions to pressing problems: DeepMind's Demis Hassabis hopes to "solve intelligence, and then use that to solve everything else". However, now that the deployment of AI is widespread, numerous unintended consequences and risks have been identified. In-production systems can sometimes fail to factor ethics and bias into their AI training processes, particularly when the AI algorithms are unexplainable at a deep level.
Risks and harm
Privacy and copyright
Additional information: Information privacy and Artificial intelligence and copyright
Large amounts of data are needed for machine learning algorithms. Methods for obtaining this data have been causing privacy, surveillance, and copyright concerns.
AI-enabled devices and services, such as virtual assistants and IoT devices, continuously collect individuals' personal data, raising concerns about intrusive data collection and unauthorized access by third parties. The erosion of privacy is further facilitated by AI's ability to analyze and combine enormous volumes of data, creating the risk of a surveillance state in which citizens' actions are tracked and scrutinized day and night.
Sensitive user information gathered can include records of online activity, geolocation data, video, or audio.[206] Amazon, for instance, has recorded millions of private conversations to build speech recognition algorithms and permitted temporary workers to listen to and transcribe some of them.[207] Reactions to this mass surveillance range from those who see it as a necessary evil to those for whom it is clearly unethical and a violation of the right to privacy.[208]
AI developers argue that this is the only way to deliver valuable applications and have developed several techniques that attempt to preserve privacy while still obtaining the data, such as data aggregation, de-identification and differential privacy. Since 2016, some privacy experts, such as Cynthia Dwork, have begun to view privacy in terms of fairness. Brian Christian wrote that experts have pivoted "from the question of 'what they know' to the question of 'what they're doing with it'".
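For illustration, the sketch below shows one of these techniques, differential privacy, in its simplest form: a count query is protected by adding Laplace noise calibrated to the query's sensitivity and a privacy budget ε. The function name, the records, and the parameter values are illustrative assumptions, not taken from any system cited above.

```python
import numpy as np

def private_count(records, predicate, epsilon=0.5):
    """Return a differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this query.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative use: report how many users match a sensitive criterion without
# revealing whether any single user is in that group.
records = [{"visited": True}, {"visited": False}, {"visited": True}]
print(private_count(records, lambda r: r["visited"], epsilon=0.5))
```

A smaller ε adds more noise and gives stronger privacy; the aggregate statistic remains useful while any individual's contribution is masked.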
Generative AI is often trained on unlicensed copyrighted works, including in domains such as images or computer code; the output is then used under the rationale of "fair use". Experts disagree about how well and under what circumstances this rationale will hold up in courts of law; relevant factors may include "the purpose and character of the use of the copyrighted work" and "the effect upon the potential market for the copyrighted work".[211][212] Website owners who do not wish to have their content scraped can indicate this in a "robots.txt" file.[213] In 2023, leading authors (including John Grisham and Jonathan Franzen) sued AI companies for using their work to train generative AI.[214][215] Another discussed approach is to envision a separate sui generis system of protection for creations generated by AI to ensure fair attribution and compensation for human authors.[216]
Dominance by tech giants
The commercial AI landscape is dominated by Big Tech firms such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft.[217][218][219] Some of these players already own the majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the market.[220][221]
Power needs and environmental impacts
See also: Environmental impacts of artificial intelligence
In January 2024, the International Energy Agency (IEA) published Electricity 2024, Analysis and Forecast to 2026, projecting electric power consumption.[222] The report is the first from the IEA to project figures for data center and artificial intelligence and cryptocurrency power consumption. The report estimates that power usage for these applications could double by 2026, with the additional electric power usage equal to the electricity consumed by the entire nation of Japan.[223]
Fierce energy demand from AI is driving growth in fossil fuel use, and could postpone closures of outdated, carbon-emitting coal power plants. There is a frenzied boom in the construction of data centers across the US, turning large technology companies (e.g., Microsoft, Meta, Google, Amazon) into voracious consumers of electric power. Anticipated electricity usage is so massive that there is concern it will be met regardless of the source. A ChatGPT search requires 10 times as much electrical energy as a Google search. The big companies are rushing to find power sources – from nuclear to geothermal to fusion. The technology companies argue that – in the long run – AI will ultimately be kinder to the environment, but they need the energy now. AI makes the power grid more efficient and "intelligent", will help spur the growth of nuclear power, and track overall carbon emissions, according to technology firms.[224]
A 2024 Goldman Sachs research report, AI Data Centers and the Coming US Power Demand Surge, found "US power demand (is) likely to experience growth not seen in a generation" and projects that, by 2030, US data centers will account for 8% of US power, compared to 3% in 2022, presaging growth for the electrical power generation sector by various means.[225] Data centers' increasing demand for electrical power is so great that they may max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all.[226]
In 2024, the Wall Street Journal reported that large AI firms had begun negotiations with US nuclear power providers to supply electricity to their data centers. In March 2024, Amazon purchased a Pennsylvania nuclear-powered data center for $650 million (US).
Nvidia CEO Jen-Hsun Huang stated nuclear power is a suitable choice for the data centers.
In September 2024, Microsoft entered into an agreement with Constellation Energy to restart the Three Mile Island nuclear power plant and supply Microsoft with 100% of the electricity generated by the plant for 20 years. Restarting the plant, which suffered a partial meltdown of its Unit 2 reactor in 1979, will require Constellation to navigate stringent regulatory procedures, including a comprehensive safety review by the US Nuclear Regulatory Commission. If approved (the first ever US re-commissioning of a nuclear facility), more than 835 megawatts of power – enough for 800,000 homes – will be generated. The cost of re-opening and modernizing the plant is estimated at $1.6 billion (US) and hinges on tax credits for nuclear power in the 2022 US Inflation Reduction Act.[229] The federal government and the state of Michigan are spending nearly $2 billion (US) to re-open the Palisades Nuclear reactor on Lake Michigan. Closed since 2022, the plant is planned to reopen in October 2025. The Three Mile Island plant will be renamed the Crane Clean Energy Center in honor of Chris Crane, a nuclear proponent and former Exelon CEO who oversaw Exelon's spinoff of Constellation.[230]
After the last approval in September 2023, Taiwan suspended the approval of data centers north of Taoyuan with a capacity of over 5 MW in 2024, owing to power supply constraints.[231] Taiwan plans to phase out nuclear power by 2025.[231] Conversely, Singapore banned the establishment of new data centers in 2019 owing to limited electric power, but lifted this ban in 2022.[231]
While most of Japan's nuclear power plants have been closed since the 2011 Fukushima nuclear disaster, according to an October 2024 Japanese-language Bloomberg article, the cloud gaming services firm Ubitus, in which Nvidia is an investor, is seeking land in Japan close to a nuclear power plant for a new data center for generative AI.[232] Ubitus CEO Wesley Kuo stated that nuclear power plants are the most efficient, lowest-cost and most stable source of power for AI.[232]
On November 1, 2024, the Federal Energy Regulatory Commission (FERC) denied an application by Talen Energy seeking approval to deliver some power from the Susquehanna nuclear power station to an Amazon data center.[233] According to Commission Chairman Willie L. Phillips, it is a strain on the electricity grid as well as a significant cost-shifting burden on households and other business users.
Misinformation
See also: YouTube § Moderation and offensive content
YouTube, Facebook and others employ recommender systems to lead users to more content. These AI algorithms were tasked with the objective of maximizing user engagement (i.e., the sole objective was to keep people watching). The AI discovered that users preferred misinformation, conspiracy theories, and extreme partisan content, and, in order to keep them watching, the AI suggested more of it. Users also continued to view more content on the same topic, so the AI directed individuals into filter bubbles in which they got several versions of the same misinformation.[234] This led numerous users to believe that the misinformation was correct, and eventually destroyed trust in institutions, the government and the media.[235] The AI algorithm had properly learned to optimize its objective, but the outcome was detrimental to society. Following the 2016 U.S. election, large tech firms moved to address the issue [citation needed].
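The dynamic described above can be sketched with a toy simulation (the content categories and click probabilities below are hypothetical, not drawn from any cited platform): a simple bandit-style recommender that only maximizes expected clicks will, given a simulated audience that clicks extreme content more often, converge on recommending that content, even though nothing in its objective mentions truthfulness.

```python
import random

# Hypothetical content categories and simulated click probabilities:
# the simulated "audience" clicks extreme content more often.
CLICK_PROB = {"balanced news": 0.05, "conspiracy clip": 0.15}

estimates = {item: 0.0 for item in CLICK_PROB}   # estimated click rate per item
counts = {item: 0 for item in CLICK_PROB}

def recommend(epsilon=0.1):
    """Epsilon-greedy choice: mostly exploit the item with the best estimate."""
    if random.random() < epsilon:
        return random.choice(list(CLICK_PROB))
    return max(estimates, key=estimates.get)

for _ in range(10_000):
    item = recommend()
    clicked = random.random() < CLICK_PROB[item]          # simulated user response
    counts[item] += 1
    # Incremental mean update of the estimated click rate for that item.
    estimates[item] += (clicked - estimates[item]) / counts[item]

print(counts)      # the vast majority of recommendations go to "conspiracy clip"
print(estimates)   # learned estimates approach the true click probabilities
```

The system here is "correct" with respect to its objective, which is exactly the problem the paragraph describes: the harm comes from the objective, not from a malfunction.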
In 2022, generative AI began producing images, audio, video and text that are indistinguishable from real photographs, recordings, films, or human writing. Such technology can be exploited by malicious actors to produce enormous volumes of disinformation or propaganda.[236] AI pioneer Geoffrey Hinton expressed concern that AI would enable "authoritarian leaders to manipulate their electorates" on a large scale, among other risks.[237]
Algorithmic bias and fairness
Main articles: Algorithmic bias and Fairness (machine learning)
Machine learning models will be biased[k] if they are trained on biased data.[239] The developers might not even know that the bias is present.[240] Bias can be introduced by the selection of training data and by the deployment of a model.[241][239] If a biased algorithm is applied to make decisions that can cause serious harm to individuals (as it can in medicine, finance, hiring, housing or policing) then the algorithm can lead to discrimination.[242] The study of fairness is concerned with preventing harms from algorithmic biases.
On June 28, 2015, Google Photos' new image labeling feature mistakenly identified Jacky Alcine and a friend as "gorillas" because they were black. The system was trained on a dataset containing very few images of black people,[243] a problem called "sample size disparity".[244] Google "fixed" this problem by preventing the system from labeling anything as a "gorilla". Eight years later, in 2023, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Facebook, Microsoft and Amazon.[245]
COMPAS is a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. In 2016, Julia Angwin at ProPublica discovered that COMPAS exhibited racial bias, despite the fact that the program was not told the races of the defendants. Although the error rate for both whites and blacks was calibrated equally at exactly 61%, the errors for each race were different—the system consistently overestimated the chance that a black person would re-offend and underestimated the chance that a white person would re-offend.[246] In 2017, several researchers[l] showed that it was mathematically impossible for COMPAS to accommodate all possible measures of fairness when the base rates of re-offense differed for whites and blacks in the data.[248]
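The impossibility result can be illustrated with arithmetic on hypothetical confusion matrices (the numbers below are invented for illustration and are not COMPAS or ProPublica data): if two groups have different base rates, a score that is equally calibrated for both groups (same share of actual re-offenders among those labelled high risk) and has the same false negative rate cannot also have the same false positive rate.

```python
# Hypothetical confusion matrices for two groups, chosen only to show the
# arithmetic. In both groups the score is "calibrated": 60% of people
# labelled high risk actually re-offend (same PPV).
groups = {
    # (true positives, false positives, false negatives, true negatives)
    "group A (base rate 50%)": (300, 200, 200, 300),
    "group B (base rate 20%)": (120,  80,  80, 720),
}

for name, (tp, fp, fn, tn) in groups.items():
    base_rate = (tp + fn) / (tp + fp + fn + tn)
    ppv = tp / (tp + fp)   # calibration: P(re-offend | labelled high risk)
    fpr = fp / (fp + tn)   # false positive rate among non-re-offenders
    fnr = fn / (tp + fn)   # false negative rate among re-offenders
    print(f"{name}: base rate={base_rate:.0%}, PPV={ppv:.0%}, "
          f"FPR={fpr:.0%}, FNR={fnr:.0%}")
```

Both groups have PPV 60% and FNR 40%, yet the false positive rate is 40% for group A and only 10% for group B: with unequal base rates, at least one fairness criterion must give.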
A program can make biased decisions even if the data does not explicitly mention a problematic feature (such as "race" or "gender"). The feature will be correlated with other features (such as "address", "purchase history" or "first name"), and the program will make the same decisions based on these features as it would on "race" or "gender".[249] Moritz Hardt said "the most robust fact in this research area is that fairness through blindness doesn't work."[250]
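A minimal sketch with synthetic data shows this proxy effect (all variable names, coefficients and noise levels here are invented for illustration): a classifier that never sees the protected attribute still reproduces a historical disparity, because a correlated proxy feature leaks it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data: `group` is the protected attribute; `postcode_score` is a
# proxy strongly correlated with it; the historical label reflects past
# decisions that disadvantaged group 1.
group = rng.integers(0, 2, n)
postcode_score = group + rng.normal(0, 0.3, n)     # proxy leaks the group
skill = rng.normal(0, 1, n)                        # legitimate feature
past_decision = (skill - 1.2 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# Train "blind" to the protected attribute: only skill and the proxy are used.
X = np.column_stack([skill, postcode_score])
pred = LogisticRegression().fit(X, past_decision).predict(X)

for g in (0, 1):
    print(f"group {g}: positive-decision rate = {pred[group == g].mean():.2f}")
# Despite never seeing `group`, the model reproduces the disparity via the proxy.
```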
Criticism of COMPAS highlighted that machine learning models are designed to make "predictions" that are only valid if we assume that the future will resemble the past. If they are trained on data that includes the results of racist decisions in the past, machine learning models must predict that racist decisions will be made in the future. If an application then uses these predictions as recommendations, some of these "recommendations" will likely be racist.[251] Thus, machine learning is not well suited to help make decisions in areas where there is hope that the future will be better than the past. It is descriptive rather than prescriptive.[m]
Bias and unfairness may go undetected because the developers are overwhelmingly white and male: among AI engineers, about 4% are black and 20% are female.[244]
There are competing definitions and mathematical frameworks of fairness. These notions depend on ethical assumptions and are shaped by beliefs about society. One broad category is distributive fairness, which focuses on outcomes, often identifying groups and seeking to compensate for statistical disparities. Representational fairness tries to ensure that AI systems do not reinforce negative stereotypes or render certain groups invisible. Procedural fairness focuses on the decision process rather than the outcome. The most relevant notions of fairness may depend on the context, notably the type of AI application and the stakeholders. The subjectivity in the notions of bias and fairness makes it difficult for companies to operationalize them. Having access to sensitive attributes such as race or gender is also considered by many AI ethicists to be necessary in order to compensate for biases, but it may conflict with anti-discrimination laws.[238]
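Distributive fairness notions are typically operationalized as group metrics. The sketch below (a minimal illustration with invented predictions, not a reference implementation of any cited framework) computes two common ones: demographic parity, which compares positive-prediction rates, and equal opportunity, which compares true positive rates.

```python
import numpy as np

def demographic_parity_gap(pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

def equal_opportunity_gap(pred, y_true, group):
    """Difference in true positive rates between the two groups."""
    tpr = lambda g: pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Hypothetical predictions for six people, three in each group.
pred   = np.array([1, 1, 0, 1, 0, 0])
y_true = np.array([1, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 1, 1, 1])
print(demographic_parity_gap(pred, group))         # about 0.33
print(equal_opportunity_gap(pred, y_true, group))  # 0.0
```

As the example shows, the two metrics can disagree on the same predictions, which is one reason the choice of fairness definition is itself a value judgment.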
At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022), held in Seoul, South Korea, the Association for Computing Machinery presented and published findings that recommend that until AI and robotics systems are demonstrated to be free of bias mistakes, they are unsafe, and that the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed.[dubious – discuss][253]
Lack of transparency
See also: Explainable AI, Algorithmic transparency, and Right to explanation
Most AI systems are so complex that their designers cannot explain how they reach their decisions.[254] This is especially true of deep neural networks, in which there are a large number of non-linear relationships between inputs and outputs; nevertheless, some well-known explainability methods exist.[255]
It is impossible to be certain that a program is operating correctly if no one knows exactly how it works. There have been many cases where a machine learning program passed rigorous tests but nevertheless learned something other than what the programmers intended. For example, a system that could identify skin diseases better than medical professionals was found to have a strong tendency to classify images with a ruler as "cancerous", because pictures of malignancies typically include a ruler to show the scale.[256] Another machine learning system designed to help allocate medical resources efficiently was found to classify asthma patients as being at "low risk" of dying from pneumonia. Having asthma is actually a severe risk factor, but because patients with asthma usually received far more medical attention, they were relatively unlikely to die according to the training data. The correlation between asthma and low risk of dying from pneumonia was real, but misleading.[257]
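This kind of shortcut learning can be reproduced with a toy experiment (the features, noise levels, and "ruler" stand-in below are hypothetical): a model trained with a spurious marker that tracks the label scores almost perfectly on training-style data, then collapses to chance when the marker is absent at deployment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Hypothetical training set: a weak "real" signal plus a spurious marker
# (think "ruler in the photo") that strongly tracks the label.
y = rng.integers(0, 2, n)
real_signal = y + rng.normal(0, 2.0, n)    # weakly informative feature
spurious = y + rng.normal(0, 0.3, n)       # nearly a copy of the label

model = LogisticRegression().fit(np.column_stack([real_signal, spurious]), y)

# At deployment the spurious marker is absent (fixed at 0 for everyone).
y_new = rng.integers(0, 2, n)
real_new = y_new + rng.normal(0, 2.0, n)
X_new = np.column_stack([real_new, np.zeros(n)])

print("accuracy with the spurious marker:",
      model.score(np.column_stack([real_signal, spurious]), y))
print("accuracy without it:", model.score(X_new, y_new))
```

The model passes any test that contains the marker, which is exactly why passing tests alone does not show that the system learned what its developers intended.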
People who have been harmed by an algorithm's decision have a right to an explanation.[258] Doctors, for example, are expected to clearly and completely explain to their colleagues the reasoning behind any decision they make. Early drafts of the European Union's General Data Protection Regulation in 2016 included an explicit statement that this right exists.[n] Industry experts noted that this is an unsolved problem with no solution in sight. Regulators argued that nevertheless the harm is real: if the problem has no solution, the tools should not be used.[259]
DARPA funded the XAI ("Explainable Artificial Intelligence") program in 2014 in an attempt to address these challenges.[260]
There are a number of methods that aim to address the transparency problem. SHAP makes it possible to visualise the contribution of each feature to the output.[261] LIME can locally approximate a model's outputs with a simpler, interpretable model.[262] Multitask learning provides a large number of outputs in addition to the target classification; these other outputs can help developers deduce what the network has learned.[263] Deconvolution, DeepDream and other generative methods can allow developers to see what different layers of a deep network for computer vision have learned, and produce output that can suggest what the network is learning.[264] For generative pre-trained transformers, Anthropic developed a technique based on dictionary learning that associates patterns of neuron activations with human-understandable concepts.[265]
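As a concrete illustration of the local-surrogate idea behind LIME (this is a hand-rolled sketch of the technique, not the lime library's API; the black-box model, kernel width, and data are assumptions for the example), a model's behaviour around one input can be approximated by fitting a simple linear model to perturbed copies of that input:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# A "black box": a random forest fit on synthetic data with a nonlinear target.
X = rng.normal(size=(1000, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=1000)
black_box = RandomForestRegressor(n_estimators=100).fit(X, y)

def local_explanation(model, x, scale=0.3, n_samples=500):
    """Fit an interpretable linear surrogate to the model around the point x."""
    perturbed = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    preds = model.predict(perturbed)
    # Weight perturbations by proximity to x (a simple Gaussian kernel).
    weights = np.exp(-np.sum((perturbed - x) ** 2, axis=1) / (2 * scale ** 2))
    surrogate = LinearRegression().fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_   # local importance of each feature

x0 = np.array([0.5, -1.0, 2.0])
print(local_explanation(black_box, x0))
# Near x0, the coefficient for feature 1 (the squared term) has the largest
# magnitude, and feature 2 (unused by the target) stays near zero.
```

The surrogate's coefficients explain only the model's behaviour in the neighbourhood of x0, which is the trade-off such local methods make for interpretability.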
Bad actors and weaponized AI
Main articles: Lethal autonomous weapon, Artificial intelligence arms race, and AI safety
Artificial intelligence offers a range of tools that are beneficial to bad actors, including authoritarian regimes, terrorists, criminals or rogue states.
A lethal autonomous weapon is a machine that finds, selects and engages human targets without human intervention.[o] Widely available AI tools can be exploited by nefarious actors to create low-cost autonomous weapons and, if mass-produced, they are potentially weapons of mass destruction.[267] Even in traditional warfare, they cannot yet reliably select targets and might kill an innocent person.[267] In 2014, 30 countries (including China) endorsed a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons, but the United States and others opposed it.[268] By 2015, more than fifty countries were said to be developing battlefield robots.[269]
AI technologies enable authoritarian regimes to control their citizens efficiently in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating on this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and disinformation for maximum effect. Deepfakes and generative AI facilitate the production of misinformation. Advanced AI can make centralized authoritarian decision-making more competitive than liberal, decentralized alternatives such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware.[270] All of these technologies have been available since 2020 or earlier—AI facial recognition systems are already being used for mass surveillance in China.[271][272]
There are numerous other ways in which AI is likely to assist malevolent actors, some of which cannot be anticipated. For instance, machine-learning AI can generate tens of thousands of poisonous molecules within hours.[273]
Technological unemployment
Main articles: Workplace impact of artificial intelligence and Technological unemployment
Economists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment.[274]
Historically, technology has added to rather than subtracted from total employment, but economists recognize that "we're in uncharted territory" with AI.[275] A survey of economists indicated disagreement over whether the growing use of robots and AI will lead to a significant rise in long-term unemployment, but they largely agree that it could be a net gain if productivity gains are shared.[276] Risk estimates differ; for instance, during the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at "high risk" of potential automation, whereas an OECD report categorized only 9% of U.S. jobs as "high risk".[p][278] The technique of guessing future employment levels has been criticized as having no evidential basis, and for suggesting that technology, as opposed to social policy, causes unemployment, instead of redundancies.[274] In April 2023, it was stated that 70% of the Chinese video game illustrators' jobs had been replaced by generative artificial intelligence.[279][280]
In contrast with earlier waves of automation, large numbers of middle-class jobs may be eliminated by artificial intelligence; The Economist asserted in 2015 that "the fear that AI might do to white-collar work what steam power did to blue-collar work during the Industrial Revolution" is "worth taking seriously".[281] Jobs at extreme risk range from paralegals to fast-food cooks, while demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[282]
Since the earliest days of the creation of artificial intelligence, there have been arguments, for instance, those advanced by Joseph Weizenbaum, concerning whether something that can be accomplished by computers should actually be accomplished by them, in light of the distinction between computers and human beings, and between quantitative calculation and qualitative, value-based judgment.[283]
Existential risk
Main article: Existential risk from artificial intelligence
It has been argued that AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicist Stephen Hawking warned, "spell the end of the human race".[284] This scenario is a common plot in science fiction, where a computer or robot suddenly develops a human-like "self-awareness" (or "sentience" or "consciousness") and becomes a malevolent character.[q] These sci-fi scenarios are misleading in several ways.
First, AI does not require human-like sentience to pose an existential threat. Contemporary AI systems are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives almost any goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a paperclip factory manager).[286] Stuart Russell gives the example of a household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that "you can't fetch the coffee if you're dead."[287] In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is "fundamentally on our side".[288]
Second, Yuval Noah Harari argues that AI does not need a robot body or physical control to pose an existential threat. The essential parts of civilization are not physical. Ideology, law, government, money and the economy are built on language; they exist because there are stories that billions of people believe. The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive.[289]
The views among experts and industry leaders are divided, with significant portions both worried and indifferent about risk from future superintelligent AI.[290] Individuals like Stephen Hawking, Bill Gates, and Elon Musk,[291] and AI researchers like Yoshua Bengio, Stuart Russell, Demis Hassabis, and Sam Altman, have raised alarms regarding existential risk from AI.
In May 2023, Geoffrey Hinton resigned from Google so that he could "freely speak out about the risks of AI" without "thinking about how this affects Google".[292] He notably mentioned risks of an AI takeover,[293] and stressed that in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in the use of AI.[294]
In 2023, numerous top AI researchers signed the joint statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".[295]
Other researchers were more optimistic. AI pioneer Jürgen Schmidhuber did not sign the joint statement, emphasising that in 95% of all cases, AI research is about making "human lives longer and healthier and easier."[296] While the tools that are now being used to improve lives can also be used by bad actors, "they can also be used against the bad actors."[297][298] Andrew Ng also argued that "it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests."[299] Yann LeCun "scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction."[300] In the early 2010s, experts argued that the risks are too distant in the future to warrant research, or that humans will be valuable from the perspective of a superintelligent machine.[301] However, after 2016, the study of current and future risks and possible solutions became a serious area of research.[302]
Ethical machines and alignment
Main articles: Machine ethics, AI safety, Friendly artificial intelligence, Artificial moral agents, and Human Compatible
Friendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk.[303]
The machines can potentially employ their intelligence to act ethically. Machine ethics gives machines ethical guidelines and protocols for making ethical decisions.[304] Computational morality is an alternate name for machine ethics,[304] and it was established at an AAAI symposium in 2005.[305]
Other initiatives are Wendell Wallach's "artificial moral agents"[306] and Stuart J. Russell's three principles for creating provably good machines.[307]
Open source
Active organizations in the open-source AI community include Hugging Face,[308] Google,[309] EleutherAI and Meta.[310] Various AI models, such as Llama 2, Mistral or Stable Diffusion, have been made open-weight,[311][312] meaning that their architecture and trained parameters (the "weights") are publicly available. Open-weight models can be freely fine-tuned, which allows companies to specialize them with their own data and for their own use case.[313] Open-weight models are valuable for research and innovation but can also be misused. Since they can be fine-tuned, any built-in security measure, such as objecting to dangerous requests, can be trained away until it becomes ineffective. Some experts warn that future AI models may develop dangerous capabilities (such as the potential to drastically facilitate bioterrorism) and that once released on the Internet, they cannot be deleted everywhere if needed. They recommend pre-release audits and cost-benefit analyses.[314]
Frameworks
Artificial intelligence projects can have their ethical permissibility tested while designing, developing, and implementing an AI system. An AI framework such as the Care and Act Framework, containing the SUM values—developed by the Alan Turing Institute—tests projects in four main areas:[315][316]
Honor the dignity of individual human beings
Engage with other people honestly, openly, and inclusively
Take care of the welfare of all people
Guard social values, justice, and the public good
Other developments in ethical frameworks include those decided upon during the Asilomar Conference, the Montreal Declaration for Responsible AI, and the IEEE's Ethics of Autonomous Systems initiative, among others;[317] however, these principles are not without criticism, especially regarding the people chosen to contribute to these frameworks.[318]
Encouragement of the welfare of the individuals and societies impacted by these technologies necessitates taking into account the social and ethical considerations throughout all phases of AI system development, design and deployment, and coordination among job functions like data scientists, product managers, data engineers, domain experts, and delivery managers.[319]
The UK AI Safety Institute released in 2024 a testing suite named 'Inspect' for AI safety evaluations under an MIT open-source licence; it is free to download on GitHub and can be enhanced with third-party extensions. It can be used to test AI models across a variety of domains such as core knowledge, reasoning capability, and autonomous functions.[320]
Regulation
Main articles: Regulation of artificial intelligence, Regulation of algorithms, and AI safety
AI Safety Summit
In 2023, the inaugural global AI Safety Summit took place with a call for global cooperation in a declaration.
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms.[321] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally.[322] According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone.[323][324] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.[325] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and Vietnam. Others were preparing to develop their own AI policy, such as Bangladesh, Malaysia and Tunisia.[325] In June 2020, the Global Partnership on Artificial Intelligence launched, declaring that AI needed to be developed aligned with human rights and democratic values, to have public confidence and trust in the technology.[325] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher jointly published a call for a government commission to govern AI in November 2021.[326] In 2023, OpenAI leaders made recommendations for governing superintelligence, which they say could occur within less than 10 years.[327] In 2023, the United Nations also created an advisory organization to make suggestions regarding AI governance; the organization is made up of technology firm executives, government officials and scholars.[328] In 2024, the Council of Europe developed the world's first international legally binding AI treaty, named the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law". It was embraced by the United Kingdom, the United States, the European Union, and other parties.[329]
In a 2022 Ipsos poll, opinions about AI differed widely by nation; 78% of Chinese respondents, but just 35% of Americans, concurred that "products and services using AI have more benefits than drawbacks".[323] A 2023 Reuters/Ipsos poll reported that 61% of Americans concur, and 22% disagree, that AI is a threat to humanity.[330] In a 2023 Fox News poll, 35% of Americans believed it "very important", and another 41% believed it "somewhat important", for the federal government to regulate AI, compared to 13% saying "not very important" and 8% saying "not at all important".[331][332]
In November 2023, the inaugural world AI Safety Summit took place in Bletchley Park, UK, to debate the near and long term dangers of AI and the likelihood of mandatory and voluntary regulatory mechanisms.[333] 28 nations including the United States, China, and the European Union made a joint declaration at the beginning of the summit, urging global co-operation to tackle the threats and challenges of artificial intelligence.[334][335] In May 2024 at the AI Seoul Summit, 16 international AI tech firms committed to safety agreements on the development of AI.