Reasoning is great... but what about memory?

Earlier this year we were all talking about infinite context windows and long-term memory, and that just... never really happened.

We have Gemini with a 1–2 million token context window, but it frequently mixes up the order of events in long stories. Its memory is just not as impressive as it sounds.

And GPT-4o still only has a 128k-token context window, and even that is lopsided: it's great at recalling things from the beginning, but starts hallucinating badly when asked about things buried in the middle (the well-documented "lost in the middle" problem).
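For anyone who wants to check this on their own prompts, here's a rough sketch of the kind of "needle in a haystack" probe people use to measure it: bury one fact at different depths of a long filler context and see where recall breaks down. This assumes the OpenAI Python SDK; the filler text, the needle, and the depth points are made-up values for illustration, not any standard benchmark.

```python
# Minimal "needle in a haystack" probe: hide one fact at varying depths
# of a long filler context and check whether the model can recall it.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment;
# filler, needle, and depths are illustrative choices, not a benchmark.
from openai import OpenAI

client = OpenAI()

FILLER = "The sky was a uniform grey and nothing of note occurred. " * 2000
NEEDLE = "The secret launch code is 7-tango-9."
QUESTION = "What is the secret launch code? Answer with the code only."

def probe(depth: float) -> str:
    """Insert the needle at `depth` (0.0 = start, 1.0 = end) and query the model."""
    cut = int(len(FILLER) * depth)
    haystack = FILLER[:cut] + NEEDLE + " " + FILLER[cut:]
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": haystack + "\n\n" + QUESTION}],
    )
    return resp.choices[0].message.content

for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    answer = probe(depth)
    print(f"depth={depth:.2f}  correct={'7-tango-9' in answer}  answer={answer!r}")
```

On a model with the middle-of-context problem, you'd typically see the 0.25–0.75 probes fail while the ones at the very start and end pass.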

It seems like everyone just stopped working on this?

If there's been new research on this, please point me to it.