

Victims’ consent is paramount. I thought I made that clear enough when I gave the example of police investigations asking for it, but apparently not.
In any case, this is exactly why I’d like to see more research done on the topic. First things first, we need to know whether it even works. For now, the assumption that it probably does is little more than a guess.

Why, though? If it does reduce consumption of real CSAM and/or real-life child abuse (a big “if”, since the stigma around the topic greatly hinders research), it’s a net win.
Or is it simply a matter of spite?
Pedophiles don’t choose to be attracted to children, and many struggle to keep those urges at bay. Traditionally, those looking for the least harmful release turned to real CSAM, which is extremely harmful in its own right - just a bit less so than going out and raping someone. Now that AI-generated material exists, it may be the safest of the graphic outlets we know of, the one involving the least harm to children. Without it, many pedophiles will revert to traditional CSAM, increasing the number of victims needed to meet the demand.
As with many other things, the best we can hope for here is harm reduction. Hardline policies do not seem to be effective enough: people continuously find ways to distribute CSAM, and pedophiles continuously find ways to access it without leaving a trace. So we need to think about how to offer them something that will make them choose AI over real material. That means making the AI output better, more realistic, and at the same time more diverse - not for their enjoyment, but to get them to switch to something better and safer than what they currently use.
I know it’s a very uncomfortable discussion, but we don’t have a magic pill that eliminates it all, so we must act reasonably to prevent what we can.