AI short-term issues
“Garbage in, garbage out.” ML (machine learning) models are “trained” on data, and if that data is bad, the resulting models will be bad too.
Example: an AI model is trained to pick the best candidates for a company. It is trained on past hiring data; however, the hirers who produced that data were biased against women, passing them over even when they had great resumes. The AI assumes there is some good reason for that pattern and ends up biased against women itself.
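Here's a minimal sketch of how that happens, using synthetic data with made-up numbers and feature names (this is just a toy illustration, not the actual system from the example): a simple logistic-regression “hiring” model is fit on biased historical labels, and the learned weights reproduce the bias.

```python
# Toy sketch: a model trained on biased "past hiring" labels inherits the bias.
# All feature names and numbers here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
resume_quality = rng.uniform(0, 1, n)   # higher = better resume
is_woman = rng.integers(0, 2, n)        # 1 = woman, 0 = man

# Biased historical labels: good resumes get hired, but women are
# frequently passed over even with great resumes.
hired = (resume_quality > 0.6) & ~((is_woman == 1) & (rng.uniform(0, 1, n) < 0.8))

X = np.column_stack([resume_quality, is_woman])
model = LogisticRegression().fit(X, hired)

# The model picks up a strongly negative weight on the gender feature.
print("coefficients [resume_quality, is_woman]:", model.coef_[0])

# Two identical resumes, differing only in gender, get very different scores.
print("P(hire | great resume, man):  ", model.predict_proba([[0.9, 0]])[0, 1])
print("P(hire | great resume, woman):", model.predict_proba([[0.9, 1]])[0, 1])
```

Nothing in the training pipeline is “wrong” here; the model is faithfully learning the pattern in its labels, which is exactly the problem.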
This isn’t even a contrived example; it has happened before (there has also been AI that decides whether someone should get parole… and that AI ended up basing its decision on race). Here’s an interesting game that illustrates the concept