Meta (the parent company of Facebook, Instagram and WhatsApp) announced that it will begin using public and user-shared content from European users to train its Artificial Intelligence (AI) systems from June 26, 2025. This change has consequences for both individuals and organizations, especially in the areas of privacy, data security and reputational risk.

Meta will use, among other things, texts, images and videos that users have shared publicly or for which they have explicitly given permission. This also includes older content published before this date, provided that content was publicly visible at any time.
For WhatsApp, message content is explicitly excluded from AI training. However, profile photos, status updates and other profile information can be included if they are set to “public”.
The growing pool of publicly available data increases the risk of advanced forms of cybercrime. AI makes it easier to link separate pieces of data and build realistic profiles from them, a technique that cybercriminals readily exploit.
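To make this linkage risk concrete, here is a minimal, purely illustrative sketch of how two separately harmless data sets can be merged on a shared attribute into a richer profile. All names, handles and fields below are invented for the example; real attacks automate this at far larger scale with AI tooling.

```python
# Hypothetical illustration of data linkage: two unrelated public sources
# that share one attribute (a handle) are joined into a single profile.
# All data here is fictitious.

social_posts = [
    {"handle": "jdoe", "city": "Utrecht", "employer": "Acme BV"},
    {"handle": "asmith", "city": "Gent", "employer": "Example NV"},
]
other_source = [
    {"handle": "jdoe", "email": "j.doe@example.com"},
]

def link_profiles(source_a, source_b, key):
    """Merge records from two sources that share the same key value."""
    index = {rec[key]: rec for rec in source_b}
    return [
        {**rec, **index[rec[key]]}  # combine fields from both sources
        for rec in source_a
        if rec[key] in index
    ]

profiles = link_profiles(social_posts, other_source, "handle")
print(profiles[0])  # one record now combines city, employer and email
```

The point of the sketch is that neither source alone reveals much; the combination does, which is why limiting what is publicly visible matters.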
Recommended measures:
Although prevention is essential, the risk of digital incidents remains. Good cyber or fraud insurance then provides an important safety net.
We are happy to advise you on appropriate insurance and additional security measures that are in line with your digital risks.