TeMPOraL 2 days ago

I can think of one - there's a bunch of newsletters/substacks I subscribe to that occasionally contain stuff I really care about somewhere in them, but are too long and dense to skim quickly. An AI summary of the most important points and topics would help me decide whether to invest time and read it in full, or archive it and forget.

That's all theoretical anyway. I'm not on GMail; I technically have this feature in the e-mail client on my phone, but I never even use it.

EDIT: The article mentions those summaries would apply to email threads too. I think that could be helpful. I've got tons of threads from some mailing lists that tend to grow large (10-20+ messages); catching up with them or revisiting old ones is tedious; an AI TL;DR of such threads actually sounds useful.
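
For what it's worth, here's a minimal sketch of how such a thread TL;DR could work under the hood - fetch the messages over IMAP, concatenate the plain-text bodies, and ask an LLM for a summary. Everything concrete in it (host, credentials, mailbox, model name, prompt) is a placeholder assumption, not something from the article:

  # Rough sketch of the thread-TL;DR idea: pull every message in a
  # mailing-list thread over IMAP and ask an LLM to summarize it.
  # Host, credentials, model name and prompt below are placeholders.
  import email
  import imaplib
  from openai import OpenAI  # assumes the official OpenAI Python client

  def fetch_thread(host, user, password, subject):
      """Collect the plain-text bodies of all messages sharing a subject."""
      imap = imaplib.IMAP4_SSL(host)
      imap.login(user, password)
      imap.select("INBOX")
      _, data = imap.search(None, "SUBJECT", f'"{subject}"')
      bodies = []
      for num in data[0].split():
          _, msg_data = imap.fetch(num, "(RFC822)")
          msg = email.message_from_bytes(msg_data[0][1])
          for part in msg.walk():
              if part.get_content_type() == "text/plain":
                  bodies.append(part.get_payload(decode=True).decode(errors="replace"))
      imap.logout()
      return bodies

  def summarize(bodies):
      client = OpenAI()  # reads OPENAI_API_KEY from the environment
      prompt = ("Give a short TL;DR of this email thread, "
                "listing the main points and open questions:\n\n"
                + "\n---\n".join(bodies))
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder; any capable model would do
          messages=[{"role": "user", "content": prompt}],
      )
      return resp.choices[0].message.content

  # Example (all arguments are made up):
  # print(summarize(fetch_thread("imap.example.com", "me@example.com",
  #                              "app-password", "[list] Some long thread")))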

Also I'd love for e-mail clients to have a feature reminding normies when they've "forgotten" to address some questions from the sender.

weikju 2 days ago

> I can think of one - there's a bunch of newsletters/substacks I subscribe to that occasionally contain stuff I really care about somewhere in them, but are too long and dense to skim quickly. An AI summary of the most important points and topics would help me decide whether to invest time and read it in full, or archive it and forget.

At this point in time, though, an LLM summary isn't guaranteed to contain all the points or to focus on the stuff you care about, so it wouldn't be reliable anyway.

TeMPOraL 2 days ago

On the contrary, LLMs have been good enough at summaries for over a year now.

For this task, an AI summary doesn't need to be 100% reliable anyway. I'm fine with it missing some points; it's better than me missing all the points. I'm fine with it hallucinating or reporting the opposite of some claim - the very mention of that claim tells me the article talks about it in some way.

Also, it's not like an alternative exists. You cannot do this task any other way - and in particular, there's no economical way to get humans involved in it.

I don't think people really appreciate that last point. Everyone's quick to complain that LLMs don't give you signed, notarized guarantees of being more thorough and accurate than a team of scholars studying a text for a year; few stop to think that they're already better than the average person, and that few could afford to hire a human to do this job even at the worst possible quality.