Most of the users were blocked for spamming. The Facebook-owned firm has put a limit on mass-forwarded messages in a bid to counter misinformation.
India implemented new rules in May to regulate social media companies, requiring them to disclose each month their efforts to police their platforms.
“We maintain advanced capabilities to spot these accounts sending a high or abnormal rate of messages and banned 2m accounts in India alone from May 15 to June 15 attempting this type of abuse,” WhatsApp said in its report released late on Thursday.
The company said its “top focus” remains on preventing the spread of harmful and unwanted messages.
WhatsApp has more than 400m users in India, one of its top markets, but has often faced criticism over the spread of misinformation.
Dozens of individuals were lynched in India in 2018 following rumors spread on WhatsApp about gangs stealing children.
The incidents prompted the messaging app to introduce a limit on bulk message forwarding in India. WhatsApp and some Indian media firms have sought to challenge the new social media rules in court.
Critics say the government is seeking to crush dissent, but the government says it is trying to make social media safer.
Under the rules, social media platforms must share details of the “first originator” of posts deemed to undermine India’s sovereignty, state security, or public order.
WhatsApp says the rules violate India’s privacy laws.