- Bletchley Park in Buckinghamshire, near London, was once the top-secret base of the codebreakers who cracked the German ‘Enigma’ code, an achievement that hastened the end of World War II.
- This symbolism was evidently a reason why it was chosen to host the world’s first ever Artificial Intelligence (AI) Safety Summit.
The Bletchley Park Declaration
- “Frontier AI” is defined as highly capable foundation generative AI models whose dangerous capabilities could pose severe risks to public safety.
- The declaration, which was also endorsed by Brazil, Ireland, Kenya, Saudi Arabia, Nigeria, and the United Arab Emirates, acknowledges the substantial risks arising from the potential intentional misuse of frontier AI, or from unintended loss of control over it, particularly in cybersecurity, biotechnology, and disinformation, according to the UK government, which hosted the summit.
- The declaration noted the “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models”, as well as risks beyond frontier AI, including those of bias and privacy.
- These risks are “best addressed through international cooperation”, the Bletchley Park Declaration said. As part of the agreement on international collaboration on frontier AI safety, South Korea will co-host a virtual mini-summit on AI within the next six months, and France will host the next in-person summit within a year.
India’s stand on AI
- India has been progressively pushing the envelope on AI regulation.
- In August 2023, less than two weeks before the G20 Leaders Summit in New Delhi, the Prime Minister of India called for a global framework for the expansion of “ethical” AI tools.
- This statement put a stamp of approval at the highest level on the shift in New Delhi’s position: from not considering any legal intervention to regulate AI in the country, to actively moving towards formulating regulations based on a “risk-based, user-harm” approach.
- Part of this shift was reflected in a consultation paper floated in July 2023 by the Telecom Regulatory Authority of India (TRAI), the apex telecommunications regulator. The paper said the Centre should set up a domestic statutory authority to regulate AI in India through the lens of a “risk-based framework”, and called for collaboration with international agencies and the governments of other countries to form a global agency for the “responsible use” of AI.