Will Human Beings Make Artificial Intelligence Stupid?
I figure that if you’re not at all worried about Artificial Intelligence, maybe you deserve to lose your job to it. And if that cynical intro is true, then at least my own employment is secure: I’m petrified. Various brilliant thinkers, including Stephen Hawking and Elon Musk, are warning that the machines could take over. If they have reached that conclusion, I figure the rest of us more feeble-minded ones should at least lose lots of sleep worrying.
Strangely though, one of my big worries is not that the machines will rule us, but rather that we humans will maintain enough ‘oversight’ to ensure that AI actually makes our race collectively dumber. That is, we will require that the robots use ‘consensual, committee-vetted’ analytical approaches, reach non-controversial, politically correct conclusions, and deliver consistent – meaning: lowest-cost – service, and only then will we give them the power. (I doubt I’m the only one who thinks personal service is becoming a fond memory as the algorithms multiply.)
I had dinner recently with an extremely smart surgeon who works at a prominent Toronto hospital. He was lamenting how new standardized procedures and protocols were dictating his surgical strategies, eliminating the opportunity for both original thinking and serious debate with the other medical experts. He said that the only way he keeps learning, and medical science moves forward, is through these fierce arguments. That was when the conversation veered toward diagnostic AI. We joked about whether in the future we would be telling our algorithm that we want a second opinion and a vigorous ‘robot debate’ before our child’s life-altering medical strategy was chosen. We were all very witty and wine-infused, but this question came up: Will the robots ever disagree with one another? What if they do? But worse: What if they don’t? I just wonder whether human bureaucratic groupthink will be personified in the programming of all future intelligence. How would a committee of robots behave?
I have mentioned before how EllisDon was recently tempted with software that purportedly would analyse job applicants and decide, without our participation, whom we should hire and from whom to flee. We had this huge debate, and finally decided to try the predictive algorithm out on people we already knew, including (unfortunately for the vendor) our CEO – whom EllisDon was unequivocally advised never to hire. But it was a funny thing: the algorithm actually seemed to analyse things pretty accurately. It ‘predicted’ that I wouldn’t work well in a team, didn’t respond well to authority and had no real grasp of the future generally. Check. So the software had it generally right; it was the humans who instructed the algorithm that such a person is unworthy of employment, sight unseen. I guess they thought it best to turn every company into some kind of conformist, team-oriented pablum. Even six years ago, the ‘AI’ (taking a big liberty here, admittedly) worked; the human oversight, perhaps not so much.
Last example. A good friend of mine is a stockbroker. Perhaps the two of us aren’t Warren Buffett squared, but we’ve done OK. He now reports that the big brokerage houses are cutting back on people like him. They want all their clients to buy similar algorithm-based ETFs (exchange-traded funds), which will manage themselves according to preset strategies. This will certainly keep overhead costs low in this competitive age of online investing. But what will happen when everyone has the identical investment strategy, however it’s devised? Maybe retail investors like me don’t matter much in the overall scheme of the markets, I get that. But one could easily see the pension funds and other large capital pools all adopting basically the same AI-driven selection analytics (why be different?) – and then suddenly the entire market is using exactly the identical logic. If that happens, wouldn’t such a globally uniform approach pervert the true market value of every company’s shares (and maybe offer up great opportunities to some Neanderthal non-algorithm types)?
OK, I’m oversimplifying. But every ‘tech’ expert I talk to has zero idea where all this AI and neural net development will end up. Every one of them is tantalized by the possibilities and nervous about the risks. Maybe the machines, armed with all the data as well as superior deductive capabilities, will rule. Maybe they will cure all the evils bedeviling us and take the human race to new heights. But maybe we should also worry that, if we aren’t careful, they will simply be a mirror of our own tendency to stamp out rebellious thought, sacrilegious opinion and pioneering entrepreneurialism in favour of a gray-hued, drone-like collective uniformity. Shoot me now.
Thanks for reading.