Last year we introduced AutoMod to take some of the load off of manually moderating a server. To date, AutoMod has used server rules to automatically block more than 45 million unwanted messages before they even had a chance to be posted.
Now we’re taking AutoMod to the next level by harnessing the power of large language models. Moderators can leverage AutoMod AI, which uses OpenAI technology to detect when server rules may have been broken and alert the moderator, keeping the context of a conversation in “mind.” The AutoMod AI experiment begins in a limited number of servers today.