Apple has joined the White House’s voluntary commitment to develop safe and ethical artificial intelligence, becoming the 16th tech company to back the initiative. The move precedes the launch of Apple Intelligence, the company’s own generative AI system set to reach more than 2 billion Apple users worldwide. Apple joins 15 other tech giants, including Amazon, Google, Microsoft, and OpenAI, which began signing on to the White House’s AI principles in July 2023.
Apple’s decision to join the initiative follows its announcement at WWDC in June that it plans to integrate generative AI into iOS, starting with a partnership with OpenAI to bring ChatGPT to the iPhone. Analysts view Apple’s participation as an attempt to demonstrate its willingness to cooperate on AI, especially given the company’s historically contentious relationship with regulators. Amid growing pressure from lawmakers and public concern about the uncontrolled development of AI, Apple appears to be positioning itself as a responsible company willing to follow ethical principles.
The Voluntary Commitment and Its Implications
As part of the voluntary commitment, companies undertake to conduct rigorous safety testing of AI models before their public release and to share the results of that testing with the public. They must also keep the AI models they are developing confidential, restricting access to a select group of employees. The companies have additionally agreed to develop a system for labeling AI-generated content so that users can easily distinguish it from content created directly by humans.
While these commitments are voluntary, the White House views them as a “first step toward creating safe and reliable AI.” Several bills aimed at further regulating AI are under consideration at both the federal and state levels. We’ll keep you updated on any developments in this area.
In parallel, the US Department of Commerce is preparing a report on the potential benefits, risks, and implications of openly available AI foundation models. The debate around closed-access AI models has intensified, as restricting access to powerful generative models could hinder the development of AI startups and research.
The US government has also reported significant progress by federal agencies in fulfilling the tasks set out in the October 2023 executive order on AI, adds NIX Solutions. To date, more than 200 AI specialists have been hired, over 80 research groups have received access to computing resources, and several frameworks for AI development have been released.
As the landscape of AI development and regulation continues to evolve, the participation of major tech companies like Apple in voluntary ethical commitments marks an important step toward responsible AI innovation. The coming months will likely bring further changes in this rapidly advancing field, and we’ll continue to monitor and report on key developments in AI ethics and regulation.