A very smart and well informed colleague recently shared a thought that disturbed me. I’m writing it here mostly to get it out of my head, and also in the hopes that the eminently quotable Admiral Rickover will once again be proved right: “Weaknesses overlooked in oral discussion become painfully obvious on the written page.”
Here’s the observation: Machine learning and Artificial Intelligence have become a game of kings. The field is now the competitive arena for the likes of Microsoft, Google, Amazon, Facebook, and IBM. When companies of this scale compete, they do so with teams of thousands of people and spend (in aggregate) billions of dollars. The people on these teams are not a uniform sampling of their industry; they are the elite – high-level professionals with the freedom to be choosy about their jobs.
The claim is that this presents an insurmountable barrier to entry for anyone who is not on one of those teams. Prosaically, when the King’s Hunt is afield, those of us without the resources of a king are well advised to stay out of the way.
In his words: “If you want to have an impact in AI or ML, the only real choice is which of the billionaires you want to work for.” Further, if you want to use these technologies, the only real choice is which billionaire to buy from.
I find this to be depressing, but not necessarily flawed. It would be easy (and potentially even more accurate) to make the same argument about computational infrastructure in the age of public exascale clouds.
There’s also an insulting subtext to the argument: If you are working with or on ML and AI and are not working for or with a billionaire, your work is de facto pointless. Further, all the most talented people are flocking to join the King’s teams – maybe it’s just that you didn’t make the cut?
Did I mention that this particular colleague works part-time for Google? It reminds me of the joke about CrossFit: “How do you tell that somebody does CrossFit? Oh, don’t worry, they’ll tell you.”
With all that said, I don’t buy it. I fall back on Margaret Mead’s famous quote: “Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever has.”
I harbor a deep-seated optimism about people. Everywhere I go, individuals and small teams absolutely sparkle with creativity and intelligence. These people are not the ‘B’ players, sad that they couldn’t make the cut to join the King’s hunting team. For my entire career, brilliant, hardworking innovators and entrepreneurs have been disrupting established power structures and upending entire markets. They don’t do this by fielding a second tier team in the old game – instead they invent a new game and change the world.
So while the point may be valid for established commodities, it is a bridge too far (and quite the leap of ego) to write off the combined innovative energy of the whole rest of the world.
I would welcome conversation on this. It feels important.
The “siren servers” in Jaron Lanier’s “Who Owns the Future?”, published in 2013
Thanks for the heads up. I’ve ordered the book and I’ll give it a read. Sounds like this is well trodden territory.
Obviously there are plenty of influential start-ups and individual machine-learning researchers out there that have good ideas, but the big players will certainly try to buy out the most influential of those agents in order to keep them harnessed firmly to capitalism (think of Keras). This isn’t incompatible with open-source either (think of Word2Vec and TensorFlow); Google’s strategy seems to be to make tools widely available, let everybody play with them, then acquire the best of what results. The models, on the other hand, are broadly proprietary and closely guarded (Facebook, Amazon), as are the datasets used to train them.
So if one’s goal is to have one’s own ideas, that is still eminently achievable. But this funding model will ensure that the lion’s share of the direction is set by the business practices of those few large firms, mainly based on how many dollars the results will unlock. It’s troubling if one questions the ethics of those business models, because it’s almost certain that highly problematic uses will develop (like Twitter abuse) which actually aid the bottom line, and hence will continue despite being terrible ideas.