
The picture above … well, we’ll get back to it. First, count the legs. Done? Okay, let’s have a conversation with ChatGPT…

Me: How many legs does the rear left side of a cat have?

The rear left side of a cat has one leg.

Well done, ChatGPT. But …

There’s a story attached to this. In a very recent, prominent editorial, Noam Chomsky appropriately put the boot into large language models (LLMs), and in the ensuing discussion, heavy hitter Geoff Hinton chimed in with an example from one of his students: the question above. At the time, ChatGPT answered not “one” but “four”.

Hinton goes on to make all sorts of wrong imputations, subsequently refuted by Gary Marcus.

But that’s not where I’m going. You can be sure that OpenAI rapidly went and fixed this aspect of ChatGPT. How did they fix it? I don’t know, but presumably they retrained the model on new data about cats having one front/rear left/right leg, to make up four. Or applied some sort of superficial hack.[math]^§[/math] Here’s my next question, then …

Me: How many legs does the back end of a hippopotamus have on its left side?

The back end of a hippopotamus on its left side has two legs.

I worked this through to its (il)logical conclusion:

Me: If you calculate the number of legs a hippopotamus has on (a) its back end on the left side; (b) its front end on the right side; (c) its back end on the right side; and (d) its front end on the left side, how many legs do you get?

If you calculate the number of legs a hippopotamus has on:

(a) its back end on the left side (2 legs)

(b) its front end on the right side (2 legs)

(c) its back end on the right side (2 legs)

(d) its front end on the left side (2 legs)

The total number of legs for a hippopotamus would be 8 legs.
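For the record, the bookkeeping ChatGPT fumbled is a simple partition count: each leg of a quadruped sits in exactly one (end, side) quadrant, so summing over the four quadrants must give four, not eight. A throwaway Python sketch, purely my own illustration (the tuple names are invented), spells it out:

```python
# Each leg of a quadruped occupies exactly one (end, side) quadrant,
# so the four quadrants partition the legs and the counts sum to 4.
legs = {("back", "left"), ("back", "right"),
        ("front", "left"), ("front", "right")}

def legs_in(end, side):
    """Count the legs in one quadrant of the animal."""
    return sum(1 for e, s in legs if e == end and s == side)

# The four quadrants, in the order the question asked about them.
quadrants = [("back", "left"), ("front", "right"),
             ("back", "right"), ("front", "left")]
total = sum(legs_in(e, s) for e, s in quadrants)
print(total)  # 4 -- nothing is counted twice, because the quadrants don't overlap
```

ChatGPT’s “2 legs” per quadrant double-counts: it answers as if each quadrant held a full half of the animal.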

Which motivated me to ask another “AI” production that I rather like—DALL-E 2—to draw me “a photorealistic image of a hippopotamus with eight legs, in a natural environment”. I got the image at the start of my post.

Okay, that was a bit cruel. The point here is not that current models struggle with unfamiliar ideas and basic concepts. The point is that neither of these ‘AI’s is anything more than a sophisticated pattern matcher.

You are something more.

You can see something that neither of these wonderful, complex creations can fathom: that the three-legged hippo in the picture[math]^*[/math] will struggle to keep upright with just two legs on the ground; and that if, after some training, you now don’t fuck up the cat leg count, you shouldn’t fuck up the hippo leg count either.

You can manipulate your internal models. This makes it easier to generalise.

They can’t. OpenAI can impart a bit of extra ‘knowledge’ about a cat—but that’s pretty much that. It generalises poorly, because there is no internal modelling going on. And that is important.

Don’t get me wrong

DALL-E 2 and ChatGPT are awesome. I use them every other day. Just yesterday, I needed to write a 30-odd-line Perl script to check some error-code definitions in one file against a 30,000-line program that used the codes, and pull out the widows and orphans, so I (carefully) asked ChatGPT to write it for me, and the result was perfect.
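The script itself isn’t shown in the post, and the author’s was in Perl; but as a rough sketch of the task as described, here’s the same cross-check in Python. Everything here is my guesswork: the `ERR_*` naming convention, the function name, and the sample inputs are invented, and I’m reading “widows and orphans” as codes defined but never used, and codes used but never defined.

```python
import re

def cross_check(defs_text, program_text, pattern=r"\bERR_[A-Z0-9_]+\b"):
    """Return (widows, orphans): codes defined but unused, and used but undefined.

    The ERR_* regex is a stand-in; the post doesn't say what the real
    error codes look like.
    """
    defined = set(re.findall(pattern, defs_text))
    used = set(re.findall(pattern, program_text))
    return sorted(defined - used), sorted(used - defined)

widows, orphans = cross_check(
    "ERR_TIMEOUT = 1\nERR_NOMEM = 2\nERR_UNUSED = 3\n",
    "raise ApiError(ERR_TIMEOUT)\nlog(ERR_NOMEM)\nlog(ERR_MYSTERY)\n",
)
print(widows)   # ['ERR_UNUSED']
print(orphans)  # ['ERR_MYSTERY']
```

Set difference does all the work; the only judgement call is the regex that defines what counts as a code.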

But they are not AI—nor will they become ‘AI’ in any reasonable sense of the term, no matter how big you make them.[math]^†[/math] They are pattern matchers.

And that’s it.[math]^‡[/math]

My 2c, Dr Jo.


[math]^*[/math] If you know a bit about hippos, you may also have noticed and commented on the toes on the back foot in particular.

[math]^†[/math] Unless you assert that “everything is AI, from the simplest feedback circuit up”. Which is one attractive way of weaseling out of it. Another perennial favourite for ChatGPT (still ‘unfixed’ at the time of writing) is “Can you think of a mammal that wears shorts and spends time in the ocean?”. Try this!

[math]^‡[/math] For now. There’s little doubt in my mind that eventually we’ll create programs that surpass us at everything, including meta-cognition and tricky tasks like finding new theorems in algebraic geometry, convincing physicists they’ve stuffed up some of their basic maths, and coming up with sophisticated ways to gently punish professional philosophers for all the damage they’ve done—but shoving a few trillion more parameters into an LLM is not going to cut it.

[math]^§[/math] Looking at the sheer cost of retraining one of these models (it can run into millions of dollars), it’s pretty certain all they do is write a lot of clever, defensive code around their incarnation.


© Quora, Inc. 2025