https://www.debuggingbook.org/html/StatisticalDebugger.html
A related method. It's not quite as straightforward as running with and without the failing test and comparing coverage reports; instead, this technique collects many test runs and identifies the lines that are only, or most often, associated with failing runs.
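The idea can be sketched in a few lines: rank each line by how often the runs covering it fail. This is a minimal illustration, not the book's actual implementation; the coverage sets and line numbers below are hypothetical.

```python
# Statistical debugging sketch: score each line by the fraction of
# runs that covered it and failed. Lines covered mostly (or only) by
# failing runs float to the top of the ranking.

def suspiciousness(line, runs):
    """Fraction of the runs covering `line` that failed."""
    outcomes = [outcome for covered, outcome in runs if line in covered]
    if not outcomes:
        return 0.0
    return outcomes.count("FAIL") / len(outcomes)

# Hypothetical data: each run is (set of covered line numbers, outcome).
runs = [
    ({1, 2, 3}, "PASS"),
    ({1, 2, 4}, "FAIL"),
    ({1, 2, 4}, "FAIL"),
    ({1, 3}, "PASS"),
]

all_lines = {1, 2, 3, 4}
ranking = sorted(all_lines, key=lambda l: suspiciousness(l, runs), reverse=True)
print(ranking[0])  # line 4: covered only by failing runs
```

With more runs, lines that merely happen to appear in a failing run (like line 2 here) get diluted, while lines genuinely correlated with failure keep a high score.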
I had no idea this had (or was worthy of) a name.
That's the whole point of coverage diffs.
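For contrast, the plain coverage-diff version needs only one passing and one failing run: keep the lines executed in the failing run but not the passing one. The coverage sets here are hypothetical.

```python
# Coverage diff sketch: lines executed only in the failing run are the
# first suspects. Real coverage sets would come from a tool like
# coverage.py; these are made up for illustration.
passing_coverage = {1, 2, 3}
failing_coverage = {1, 2, 4, 5}

suspects = failing_coverage - passing_coverage
print(sorted(suspects))  # [4, 5]
```

The statistical approach generalizes this set difference to many runs, which helps when no single pair of runs isolates the bug cleanly.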
The tough ones are the tests that sometimes fail yet give you the same coverage results - then the problem is not in the code under test! And the lazy/common thing to do is to re-run the test or add a sleep to make things "work."
What about just doing a git diff? That would show the method wasn't being called before, right?
If it were a longstanding bug rather than one just created for this exercise, a git diff may not help much. Imagine you've found some edge case in your code that just hadn't been properly exercised before. It could have been there for years; now how do you isolate the erroring section? This technique (or the one I mentioned in my other comment, which is very similar but uses more data) can help narrow down the problem section.
git diffs can definitely help with newer bugs, or if you can show that it's a regression (it didn't error before commit 1234abcd but did after).