AI can write a wedding toast or summarize a paper. But what happens when it’s asked to build a bomb?

Date:

This happens when users manipulate the model’s input prompts to bypass ethical or safety guidelines, asking a question in a coded language that the librarian can’t help but answer, revealing information it’s supposed to keep private.

Robey also speaks to the broader implications of AI safety, stressing the need for comprehensive policies and practices. Ensuring the safe deployment of AI technologies is crucial,” he says. “We need to develop policies and practices that address the continually evolving space of threats to LLMs.”

Share post:

Subscribe

Popular

More like this
Related

The Cambridge raduate student rewriting Deaf histories and disability histories

Disability history has been emerging since the 1990s as...

What you need to know about JN.1, the latest COVID variant

In early November 2023, the JN.1 variant caused less...

Could an electric nudge to the head help your doctor operate a surgical robot?

The findings offer the first glimpse of how stimulating...