"Restricted people won't be able to see when you're active on Instagram or when you've read their direct messages", said Mosseri.
The initiative comes after Instagram has come under fire for not doing enough to tackle online bullying.
In an announcement published Monday, Instagram said it will encourage positive interactions using artificial intelligence (AI), notifying users when their comments may be considered offensive.
Users who do not want to use the existing "limited comments" function, which restricts commenting to a selected group of friends, will now effectively be able to control who comments on their public posts by determining whose comments are visible to everybody.
The feature does not stop people from making negative comments, but it gives them the opportunity to reconsider before the message is posted. Before the comment goes live, Instagram asks: "Are you sure you want to post this?" The user must then reflect on what they have written before deciding whether or not to go ahead with sharing it; the person on the receiving end is notified only if the original commenter (AKA the bully) decides to confirm the comment. There is also an "undo" button next to the comment, so users can quickly remove it.

Even more ominously, Mosseri also revealed that the platform will shortly begin trialing a related AI feature that automatically hides comments it deems offensive, without the commenter being aware that their comment was hidden.
"We've heard from young people in our community that they're reluctant to block, unfollow, or report their bully because it could escalate the situation, especially if they interact with their bully in real life", Instagram said.
He wrote: "We know bullying is a challenge many face, particularly young people".