The UK government has announced that the AI Safety Institute will open its first overseas office, in San Francisco. The move is described as a “pivotal step” that will help the UK engage with the emerging artificial intelligence (AI) industry in the US.
AI safety is said to be a key priority for the UK as the technology continues to develop.
The San Francisco office is designed to complement the Institute’s London headquarters, which currently has a team of more than 30 technical staff.
The UK Secretary of State for Science, Innovation and Technology, Michelle Donelan, said:
“This expansion represents British leadership in AI in action. It is a pivotal moment in the UK’s ability to study both the risks and potential of AI from a global lens, strengthening our partnership with the US and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety.”
The Institute reports that it has already carried out safety tests this year on five publicly available large language models (LLMs). The tests identified several areas of risk, including the following:
- Several models completed basic cyber security challenges but struggled with more advanced ones.
- Several models demonstrated knowledge of chemistry and biology comparable to PhD level.
- All tested models remained highly vulnerable to basic “jailbreaks”, and some produced harmful outputs even without dedicated attempts to circumvent their safeguards.
- Tested models were unable to complete more complex, time-consuming tasks without human oversight.