This text by Lionel Dricot made my morning, amidst all the current hype about how incredibly great ChatGPT and the latest developments in AI are:
While they are exciting because they are new, those creations are basically random statistical noise tailored to be liked. […] Algorithms are able to create out of nowhere this very engaging content. That’s exactly why you are finding the results fascinating. Those are pictures and text that have the maximal probability of fascinating us. They are designed that way.
The algorithms are already feeding themselves on their own data. And, as any graduate student will tell you, training on your own results is usually a bad idea. You end sooner or later with pure overfitted inbred garbage. Eating your own shit is never healthy in the long run.
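The "training on your own results" point is a real statistical effect, not just a metaphor. A toy sketch of it (my illustration, not Dricot's): fit a Gaussian to some data, generate a synthetic dataset from the fitted model, refit on that, and repeat. Sampling noise compounds each round, and the distribution's spread collapses toward zero — the inbreeding he describes, in miniature.

```python
import random
import statistics

def collapse_chain(n_samples, n_generations, rng):
    """Repeatedly fit a Gaussian, then retrain on its own samples."""
    # Generation 0: "real" data from a standard normal (std = 1.0).
    data = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    for _ in range(n_generations):
        mu = statistics.mean(data)
        sigma = statistics.stdev(data)
        # The next generation sees only synthetic data drawn
        # from the model fitted to the previous generation.
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
    return statistics.stdev(data)

rng = random.Random(0)
# Run several independent chains and look at the typical outcome.
finals = [collapse_chain(20, 150, rng) for _ in range(20)]
print(f"median std after 150 self-training rounds: "
      f"{statistics.median(finals):.3f}")
# The data started with std ≈ 1.0; after many generations of
# fitting to its own output, the spread has collapsed.
```

With large language models the mechanism is far more complex, but the direction is the same: variance and tail behavior get lost first, and each generation amplifies the previous one's biases.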
The latter quote concerns me even more than the first. Yes, these are opaque algorithms that introduce all kinds of bias and are tailored to be "liked". But the real problem is that they rely on what they are fed, and that training data is even harder to hold accountable than the algorithms themselves.