The fallout from all this use of AI is going to be massive.
The distribution of mistakes that humans make is not uniform; it is weighted towards smaller mistakes. People are rational, so they pay more attention to possible errors with big consequences than to those with small ones, and they generally put much more effort into avoiding the former.
LLMs, by contrast, have a pretty much uniform distribution of errors: big mistakes with big consequences are about as likely as small ones. They are text predictors that do not actually reason about their responses, so they neither weigh consequences nor check for errors, which is why some LLM hallucinations are so obviously stupid to a thinking being (and others are outright dangerous, such as the “glue on pizza” one).
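To make that contrast concrete, here is a minimal, purely illustrative sketch (every number in it is made up for the sake of the example, not taken from any data): it compares the accumulated damage from errors whose severity is weighted towards the low end with errors whose severity is roughly uniform, assuming damage grows steeply with severity.

```python
import random

# Illustrative sketch only: all weights and the damage function are assumptions
# made for the example, not measurements of human or LLM behaviour.
random.seed(0)
N = 10_000  # number of errors in each scenario (arbitrary)

# "Human-like": most errors are minor because effort goes into avoiding severe
# ones, so severity (1-10) is heavily weighted towards the low end.
human_errors = random.choices(
    range(1, 11),
    weights=[30, 20, 15, 10, 8, 6, 4, 3, 2, 2],
    k=N,
)

# "LLM-like": severity is roughly uniform, big blunders as likely as small ones.
llm_errors = [random.randint(1, 10) for _ in range(N)]

def total_damage(errors):
    # Assume damage grows steeply with severity: one severity-9 mistake costs
    # far more than nine severity-1 mistakes.
    return sum(sev ** 3 for sev in errors)

print("low-weighted distribution:", total_damage(human_errors))
print("uniform distribution:     ", total_damage(llm_errors))
```

With the same number of errors, the uniform distribution racks up several times more total damage in this toy model, which is the accumulation point made below.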
I suspect that, over a couple of years, the accumulated consequences of LLMs making all sorts of “this can/will have big nasty consequences” mistakes in all manner of areas will be tons of AI-adopting companies collapsing left and right due to problems with customers, products, services, employees and even legal problems (there are people using AI in accounting, which is just asking for big fat fines from the IRS when the AI makes one of those “big mistakes that would be obvious to a human”), and that is before we even get into how much the AI bubble is propping up the US stock market.