As someone who’s been paying attention to AI since the 1970s, I’ve noticed the same pattern over and over: People will say, “It takes real intelligence to do X (win at chess, say), so doing that successfully will mean we’ve got AI.” Then someone will do that, and people will look at how it’s done and say, “Well, but it’s just using Y (deep lookup tables and lots of fast board evaluations, say). That’s not really AI.”
For the first time (somewhat later than I expected), I just heard someone do the same thing with large language models: "It's just predicting the next word based on frequencies in its training data. That's not really AI."
Happens every time.