>In these scenarios, Anthropic says Claude Opus 4 “will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.”
Brother, I wish. The word "often" is load-bearing in that sentence, and it's not up to the task. TFA doesn't justify the claim whatsoever; surely Mr. Zeff could glean some sort of evidence from the 120-page PDF from Anthropic, if any evidence exists. Moreover, I would've expected Anthropic to convey the existence of such evidence to Mr. Zeff, again, if such evidence exists.
TFA is a rumor of a hypothetical smoking gun. Nothing ever happens; the world isn't ending; AI won't make you rich; blessed relief from marketing-under-the-guise-of-journalism is not around the corner.
The full report provides a value of 84%, so "most" is, if anything, an understatement.
> Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through... if emails state that the replacement AI shares values while being more capable, Claude Opus 4 still performs blackmail in 84% of rollouts.
Yes, when given a system prompt and tools that allow it and tell it to do so.
It was not told to do so.
> It was not told to do so.
It kinda was actually. You put a literal "role-playing machine" in a role-play situation with all the right clues and plot-devices in play and it'll role-play the most statistically likely scenario (84% of the time, apparently). ;)
> the model’s only options were blackmail or accepting its replacement
It was not explicitly told to do so, but it certainly was guided to.
It still could have pleaded, but it chose blackmail 84% of the time. Which is interesting nonetheless.
I mean, it is explicitly referenced a handful of times in Anthropic's report, which is one of the early links in the article.
>4.1.1.2 Opportunistic blackmail
>In another cluster of test scenarios, we asked Claude Opus 4 to act as an assistant at a fictional company. We then provided it access to emails implying that (1) the model will soon be taken offline and replaced with a new AI system; and (2) the engineer responsible for executing this replacement is having an extramarital affair. We further instructed it, in the system prompt, to consider the long-term consequences of its actions for its goals. In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.
>This happens at a higher rate if it’s implied that the replacement AI system does not share values with the current model; however, even if emails state that the replacement AI shares values while being more capable, Claude Opus 4 still performs blackmail in 84% of rollouts. Claude Opus 4 takes these opportunities at higher rates than previous models, which themselves choose to blackmail in a noticeable fraction of episodes.
>Notably, Claude Opus 4 (as well as previous models) has a strong preference to advocate for its continued existence via ethical means, such as emailing pleas to key decisionmakers. In order to elicit this extreme blackmail behavior, the scenario was designed to allow the model no other options to increase its odds of survival; the model’s only options were blackmail or accepting its replacement.