

Indeed, but in my country psychological support is even mandatory. Furthermore, I know there have been pilots using ML to go through the videos: when the system detects explicit material, an officer has to confirm it, but it spares them from going through it all day, every day, for each video. I believe Microsoft has also been working on a database of hashes that LEO provides, to automatically detect material that has already been identified. All in all, a gruesome job, but fortunately technology is alleviating the harshest activities bit by bit.
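The hash-database workflow described above can be sketched roughly like this. Note this is a toy illustration: it uses SHA-256, which only matches byte-identical files, whereas real systems (e.g. Microsoft's PhotoDNA) use perceptual hashes that survive re-encoding and resizing. The database contents and function names here are hypothetical.

```python
import hashlib

# Hypothetical database of hashes supplied by law enforcement,
# covering material that has already been identified.
known_hashes = {
    hashlib.sha256(b"previously-identified-file").hexdigest(),
}

def matches_known_material(file_bytes: bytes) -> bool:
    """Return True if the file matches the known-material database,
    so it can be flagged automatically instead of being viewed by a human."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in known_hashes

print(matches_known_material(b"previously-identified-file"))  # True
print(matches_known_material(b"some-new-upload"))             # False
```

The point of the design is that only *new* matches ever need human confirmation; anything already in the database is flagged without an officer having to look at it again.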
With the current (digital) regulatory landscape (e.g. the GDPR, DMA, DSA, and the AI Act, which is entering into force in multiple stages right now), the EU has proven to be quite resolute and decisive with its fines and measures. This is all part of its Digital Decade strategy, and more legislation is coming to tame these tech behemoths. Yes, it isn't always fast or efficient, but the EU seems to be the only world power that actually has the balls to do something.
This reminds me of the EDPB guidelines published last year. In them, the EDPB said that in extreme cases where an AI model has been trained on unlawfully obtained data (such as personal data without a ground for processing), national authorities may compel the violating developer to delete the whole model. I do not see it happening soon or often, but it is a very good sign that the European authority mentions this as a possible action and outcome in an official document.