Social media platform X announced on Tuesday that it will suspend creators from its revenue-sharing programme for 90 days if they post AI-generated videos of armed conflicts without clearly disclosing that the content is artificially created.
The policy shift was outlined by the platform’s head of product, Nikita Bier, who said the move was aimed at protecting information integrity during the ongoing conflict involving the United States, Israel and Iran.
“During times of war, it is critical that people have access to authentic information on the ground,” Bier said, warning that advances in artificial intelligence have made it “trivial to create content that can mislead people.”
The company, owned by Elon Musk, said it would continue refining its policies and product features to ensure the platform remains trustworthy during critical global events.
The new disclosure requirement marks a notable shift for X, which has faced sustained criticism over its content moderation approach since Musk completed his $44 billion acquisition of Twitter in October 2022 and rebranded it as X. Since the takeover, the platform has rolled back several misinformation policies, with Musk arguing that stricter moderation amounted to censorship.
Under the updated rules, creators who repeatedly violate the AI disclosure requirement could face permanent removal from the Creator Revenue Sharing programme, which allows eligible users to earn a share of advertising revenue generated by their posts.
X said violations would be identified through its crowd-sourced fact-checking system, Community Notes, as well as metadata and other technical indicators embedded in AI-generated content.
The announcement reflects growing concerns among social media platforms about the potential misuse of generative AI tools to spread misinformation during periods of geopolitical tension.
