“Researchers Demonstrate an AI Language Generalization Breakthrough” on the Pure AI Web Site

I contributed some technical opinions to an article titled “Researchers Demonstrate an AI Language Generalization Breakthrough” on the Pure AI web site. See https://pureai.com/articles/2023/11/06/systematic-generalization.aspx.

The Pure AI article summarizes a research article titled “Human-Like Systematic Generalization Through a Meta-Learning Neural Network,” by Brendan Lake and Marco Baroni.

The topic is a bit tricky to explain. “Systematic generalization” in humans is the ability to more or less automatically use newly acquired words in new scenarios. For example, humans who learn the meaning of a new (to them) word such as “gaslighting” (psychological manipulation that instills self-doubt in a person), can use the new word in many contexts. AI researchers have debated for decades whether neural network systems can model human cognition if the systems cannot demonstrate systematic generalization.

The Pure AI article explains an interesting experiment in the source research paper. Take a look at this image:

In the image, there are four example primitives: tav = blue, yig = olive, muk = red and poz = green. There are 10 example functions. For example, “muk frabet yig” = red olive red. Based just on this information, a human should be able to determine the output for “tav frabet poz.”

If frabet is interpreted as a function, and muk and yig are interpreted as arguments, the example function can be thought of as frabet(muk, yig) = muk-color yig-color muk-color = red olive red. Put another way, the frabet function accepts two arguments and emits three colors, where the first argument color is in position 1 and 3, and the second argument color is in position 2. Therefore, “tav frabet poz” = tav poz tav = blue green blue.
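The interpretation above can be sketched in a few lines of Python. This is just my illustration of the pattern, not code from the research paper; the color mapping and the name frabet come from the example in the image.

```python
# Primitive-to-color mapping from the example in the article.
COLORS = {"tav": "blue", "yig": "olive", "muk": "red", "poz": "green"}

def frabet(arg1, arg2):
    """Emit three colors: the first argument's color in positions 1 and 3,
    the second argument's color in position 2."""
    return [COLORS[arg1], COLORS[arg2], COLORS[arg1]]

print(frabet("muk", "yig"))  # ['red', 'olive', 'red']
print(frabet("tav", "poz"))  # ['blue', 'green', 'blue']
```

The point of the experiment is that a human can infer this rule from just a handful of examples, without being told that frabet is a two-argument function.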

Typically, humans are quite good at problems like this, but AI systems cannot generalize as well as humans can. However, the research paper describes how an AI system trained specifically for this type of problem can learn systematic generalization.

I am quoted in the Pure AI article. “The question of systematic compositional generalization has been a key point of debate for decades. The Lake and Baroni paper, while not absolutely definitive, strongly suggests that the current generation of large language model systems can be significantly improved in a relatively straightforward way.”



I always admired author Michael Crichton (1942-2008). He had the ability to apply systematic generalization to a wide range of plot ideas. He’s best known for the techno-thriller novel “Jurassic Park” (1990) but he wrote many different types of stories. Here are three of my favorite movie adaptations of his books.

Left: “The Great Train Robbery” (1978) features a team of con men with a complex plan to steal gold in 1850s Victorian England. Has a clever plot involving key duplication, coffins, dead cats. I give the movie a B grade.

Center: In “The Andromeda Strain” (1971), a satellite crashes near a small desert town and people start dying mysteriously. Is it an alien virus? Yes. A team of scientists race against time to discover how to neutralize the virus. I give this movie a B grade.

Right: “Rising Sun” (1993) is a murder mystery but also explores the vast cultural differences between the U.S. and Japan. A key element of the movie was the idea that video and photographic evidence can be manipulated — a very novel idea at the time. I give this movie a B grade.


This entry was posted in Machine Learning.

1 Response to “Researchers Demonstrate an AI Language Generalization Breakthrough” on the Pure AI Web Site

  1. Thorsten Kleppe says:

    The first question is OK, but I don't get the second. I would swap the colors: 1 green and 3 red. Or is it not about the color in this case, and the right answer must just detect that 1+3 pattern?

    Also, this Grok question is so… is that still logic?
    https://twitter.com/Rainmaker1973/status/1729213357209665668

    Maybe it is 6, because we don't know, but it is weird.
