On AI
I've always been uncomfortable with the opacity around the datasets used to train AI models, and with AI tool creators' ethical standards around copyright more generally.
And how will they continue to improve and train these models over time? As the information landscape fills up with a mix of legitimate material and low-quality AI-generated content, will they keep scraping it all up indiscriminately, training on increasingly tainted datasets in a grotesque, ouroboros-like fashion?
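That ouroboros scenario has a name in the research literature: model collapse. Here's a minimal toy sketch of the idea in Python - the Gaussian distribution, sample sizes, and generation count are all illustrative assumptions on my part, not anything from the article linked below. Each generation is "trained" only on the previous generation's synthetic output:

```python
import numpy as np

# Toy sketch of "model collapse": each generation fits a Gaussian to
# data produced entirely by the previous generation's fitted model.
rng = np.random.default_rng(0)

data = rng.normal(0.0, 1.0, size=50)  # generation 0 sees real data

for generation in range(1, 31):
    # "Train": estimate the distribution from the current dataset.
    mu, sigma = data.mean(), data.std()
    print(f"gen {generation:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
    # The next "scrape" contains only this model's own output.
    data = rng.normal(mu, sigma, size=50)
```

The estimates drift away from the true parameters (0 and 1) in a random walk, and the fitted spread tends to shrink across generations; once the original data is out of the loop, no amount of further scraping recovers it.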
Glad I'm not the only person thinking about this - see: AI is going to eat itself | The Register