Brazil joins Europe in blocking Meta from using public posts for AI training. Brazil's national data protection authority has officially blocked Meta's new privacy policy, citing the risk of serious and irreparable damage to users' fundamental rights. As one of Meta's largest markets, Brazil's decision to block the policy update follows similar action in Europe, where plans to train AI systems on public posts from Facebook and Instagram were put on hold. There, the Irish Data Protection Commission intervened, asking Meta to delay training its AI on public data to ensure compliance with privacy laws.
Meta argued that without access to local data, its AI products would deliver a subpar user experience and fail to accurately understand regional languages, cultures, and trending topics. The company maintains that its approach in Europe is transparent and complies with applicable laws and regulations, despite the resistance it has faced from data protection authorities. A spokesperson for Meta expressed disappointment over the decision in Brazil, saying it hinders innovation and delays the benefits of AI for people in the country. Meta also pointed out that users can opt out of the data collection, but the agency believes there are obstacles to exercising that right.
The decision in Brazil reflects concerns about the potential misuse of personal data shared on Meta's platforms, particularly children's data. The action against Meta is also seen as a signal to other companies to be transparent about how they use data for AI development going forward. Meta must demonstrate compliance with the decision within five working days, with daily fines imposed for failure to do so. The regulatory action in Brazil may set a precedent for other countries to scrutinize how tech companies handle user data for AI training and to ensure the protection of fundamental rights and privacy.