The post title is the title of a wonderful book by Cathy O’Neil; I can’t believe I haven’t blogged about the book before. She outlines how artificial intelligence, in the form of algorithms, often has unintended consequences, like criminal sentencing driven by biased models. The algorithms are only as wise as their programmers, and often reflect the programmers’ biases. Even today, there’s news that Apple’s new credit card may have a gender bias, granting higher credit limits to men with lower credit scores than their female partners. Also, Marketplace radio host Kai Ryssdal interviewed Janelle Shane, author of “You Look Like a Thing and I Love You”, a new book about AI, in which they discussed how, despite lots of ‘learning’ about how people and cats appear, computer algorithms still can’t accurately depict or create a picture of these living creatures.
They and I worry about driverless cars. If computers have trouble recognizing objects, legitimate hazards may go undetected while false positives (i.e. a non-hazard identified as a hazard) trigger evasive action, endangering other vehicles on the road. Likewise, if the algorithms depend on accurate signals, how well do the sensors work during inclement weather? I’ve experienced the radar sensors that feed cruise control decisions becoming unavailable during blizzards.
They all work well during fair weather. And all algorithms could be objective if they were built without biases or presumptions, and with really, really good data. As Taleb says in “The Black Swan”, to predict the future we need perfect insight into the past, so that we fully understand the cause and effect of the circumstances that ‘created’ our current situation. Will we ever be able to load all of the relevant data to make great decisions, not merely the significant data? Remember the pricing scandals at online shopping sites, when pricing algorithms went ‘crazy’ and showed prices insanely inflated over normal, based on perceived demand and customers’ sense of affordability (i.e. how wealthy they appeared to be)?
I’m not saying algorithms are bad, nor that AI won’t happen, but it may be a while, especially if the problem isn’t a dearth of data but how well that data can be analyzed to give great answers, not just good ones. I’ve seen presentations where AI is used to augment physicians’ own diagnostic powers through visual analysis of tumors and other physical symptoms. But note my emphasis on augmentation, not replacement. Even some other applications (like identifying guns) need human assistance (i.e. re-interpretation).
One form of AI is the artificial neural network. This approach does not rely on programmer-defined algorithms but on processes akin to those the brain uses to learn pattern recognition. Experimentation in this field was done back in the ’80s and ’90s, but the hardware needed to reach a reasonable level of intelligence was costly. Researchers found that when an artificial neural network was used as an expert system for proposing medical treatments, for example, it made about the same percentage of errors as actual medical experts, but the specific errors were different ones. Artificial neural network hardware may make a comeback now that hardware is so much cheaper. I would think that new AI technology would incorporate multiple solution paradigms. Obviously, the world will need to evaluate whether or not an AI solution actually reduces risks, and we will not be able to evaluate this with just a few examples but over many. Hiring people to perform monotonous tasks can lead to hypnotic-like trances that prevent them from accomplishing the task they have been assigned. If we ever get to the place where humans can adequately “supervise” AI and provide feedback to it for continuous improvement, we might be on the right path.
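To make the “trained, not programmed” distinction concrete, here is a minimal sketch of my own (not from the comment above): a tiny network in Python/NumPy that learns the XOR pattern purely from examples. No rule for XOR is ever written into the code, only a loop that adjusts random starting weights to reduce the error on the training data.

```python
# Minimal illustration: a tiny neural network learning XOR from examples.
# The network is never told the XOR rule; gradient descent adjusts the
# weights until its outputs match the training targets.
import numpy as np

rng = np.random.default_rng(0)

# Training data: four input pairs and the XOR pattern we want recognized.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units; weights start as random guesses.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

lr = 1.0  # learning rate
for _ in range(10000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of squared error, chained through each layer.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    # Nudge every weight a little in the direction that reduces the error.
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0, keepdims=True)

print(out.round(2))  # after training, close to [0, 1, 1, 0]
```

Scaled up enormously, this same adjust-the-weights loop is what modern AI hardware accelerates, which is why cheaper hardware matters so much to any neural network comeback.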