OpenAI and Greatness Cannot Be Planned

Around seven or eight years ago, I watched a video of a talk by Kenneth Stanley about why greatness cannot be planned. It has been hugely impactful on me, and I took away a few key points from it.

The talk used machine learning examples to show that pursuing a goal directly doesn't work well in many cases. Specifically, I learned two things. First, if a goal is within reach and the steps to get there are clear, then pursuing it directly is fine. Second, if the path to the goal is not clear, the only thing that makes sense is to pursue the most interesting things, because they are the ones that reveal more of the map.

I was thinking of this in the context of OpenAI because I heard that, many years ago, Alec Radford wanted to play around with a few methods that weren't obviously going to be important but were interesting, and OpenAI gave him the resources to do so. That led to the first GPT paper, which I think is a perfect example of this philosophy. The path at that point was not obvious, so the only correct thing to do was to try lots of interesting things in a small way, and then, as soon as one showed promise, double down on it, and then double down again.

You can also see this in the breadth of OpenAI's early work: they were getting agents to play video games, working on robotics, and more. When the path to something isn't clear, you need to pursue many things, and the most interesting ones, to reveal the map. I think the same is true in biotech research. Many of the most interesting biotech discoveries came not from working toward that discovery directly, but from trying to solve something a researcher felt was interesting and potentially worthwhile, even if they didn't quite know why.

The takeaway is twofold. First, if you have a really big goal and can't see a clear path to it, then it's plausible that the quickest path is not to head straight for it. Instead, work on whatever you can see a direct path to that is most interesting and most promising, even if you don't know how it relates to the big goal. It should feel like something that will lead to unexpected discoveries and uncover more of the map of the solution space.

Second, if you are working on something that doesn't seem to lead directly to anything important but does feel interesting to you, know that you are on a good path. That's how a lot of important things happen: by following what feels interesting.