AI + ML

More Compute Power Doesn’t Produce Artificial General Intelligence

Even the most powerful computers can’t answer ‘why?’

I'm a great fan of Naval. I recommend listening to at least three episodes of his podcast (each is between one and two minutes long) to understand where he's coming from.

He brings up GPT-3 as an example, highlighting the fact that in the end it's a human who selects the best items generated by the machine. The same holds for visual design: the designer picks the best-looking assets, layouts and images from a pool of generated pieces.
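The selection loop Naval describes is essentially best-of-N sampling: generate many candidates, score them, keep only the best. Here is a minimal sketch of that loop; the `generate` and `score` functions are hypothetical placeholders standing in for a model call and for the human (or heuristic) judgment, respectively:

```python
import random

def generate(prompt: str) -> str:
    # Placeholder for a model call (e.g. a GPT-3 completion).
    # Shuffling words stands in for noisy generation here.
    words = prompt.split()
    random.shuffle(words)
    return " ".join(words)

def score(candidate: str) -> float:
    # Placeholder for the human picking "the good tweets out of
    # all the garbage" -- in practice this judgment is manual.
    return random.random()

def best_of_n(prompt: str, n: int = 20) -> str:
    # Generate n candidates and keep the highest-scoring one.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

if __name__ == "__main__":
    print(best_of_n("write a great tweet about compound interest"))
```

The point of the sketch is that the intelligence in the pipeline sits in `score`, not in `generate`.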

Start listening from "Humans are Exceptional" (22 October 2021), then continue with "More Compute Power Doesn’t Produce AGI", and finish with "It’s Mind Blowing That Our Minds Can’t Be Blown" (that episode brings up another fantastic thought of its own).

All episodes are here: podcasts.apple.com

"People talk a lot about GPT-3, the text matching engine that OpenAI put out, which is a very impressive piece of software. They say, “Hey, I can use GPT-3 to generate great tweets.” That’s because, first, as a human you’re selecting the good tweets out of all the garbage that it generates. Second, it’s using some combination of plagiarism and synonym matching and so on to come up with plausible sounding stuff.

The easiest way to see that what it’s generating doesn’t actually make any sense is to ask it a follow-up question. Take a GPT-3 generated output and ask it, “Why is that the case?” Or make a prediction based on that and watch it completely fall apart because there’s no underlying explanation.

It’s parroting. It’s brilliant Bayesian reasoning. It’s extrapolating from what it already sees out there generated by humans on the web, but it doesn’t have an underlying model of reality that can explain the seen in terms of the unseen. And I think that’s critical.

That is what humans do uniquely that no other creature, no other computer, no other intelligence—biological or artificial—that we have ever encountered does."
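Naval's "ask it why" probe is easy to try yourself. Below is a minimal sketch using the official OpenAI Python client (the post-1.0 interface; the model name is a placeholder, since the original GPT-3 models he references are no longer served, and an `OPENAI_API_KEY` is assumed to be set in the environment):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-3.5-turbo"  # placeholder model name

# 1. Get an initial claim from the model.
first = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user",
               "content": "State one bold prediction about remote work."}],
)
claim = first.choices[0].message.content

# 2. Probe it with the follow-up Naval suggests: "Why is that the case?"
probe = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "assistant", "content": claim},
        {"role": "user",
         "content": "Why is that the case? Explain the underlying mechanism."},
    ],
)

print(claim)
print("---")
print(probe.choices[0].message.content)
```

Whether the second answer "completely falls apart", as Naval claims, is exactly what the experiment lets you judge for yourself.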