Hey Skool Team,
I wanted to share some thoughts on how we can make the platform even better when it comes to managing spam, bots, and fake accounts. These issues can really mess with the vibe of a community, and I think there are a few ways to tackle them without making things too complicated.
First, it would be great to separate suspected bots and spammers from genuine churned members. Right now, when fake accounts end up in the churned list, they distort the metrics that are meant to reflect real community activity. If there were a dedicated category for these accounts, it'd be easier to track and review them without impacting churn rates.
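To make the idea concrete, here's a minimal sketch of what a dedicated status category could look like, assuming a simple member-status field (the names and the formula here are illustrative, not how Skool actually stores this):

```python
from enum import Enum

# Hypothetical status field: flagged accounts get their own category so
# churn metrics only count members who genuinely left.
class MemberStatus(Enum):
    ACTIVE = "active"
    CHURNED = "churned"
    FLAGGED_SPAM = "flagged_spam"

def churn_rate(members: list[MemberStatus]) -> float:
    """Churn rate over real members only; flagged accounts are excluded."""
    real = [m for m in members if m is not MemberStatus.FLAGGED_SPAM]
    churned = sum(1 for m in real if m is MemberStatus.CHURNED)
    return churned / len(real) if real else 0.0
```

The point is just that flagged accounts drop out of the denominator entirely, so a wave of banned bots wouldn't show up as a churn spike.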
Next, chat access could be limited to members who’ve put in some real effort, like reaching Level 6 or 7 and staying active for at least 30 days. Right now, lower levels like 2 or 3 are pretty easy to hit, and spammers can use that to their advantage. Making the requirements tougher would mean only genuinely engaged members can join the conversation.
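The gating rule above is simple enough to sketch in a few lines. This is just an illustration of the check, assuming a Level 6 threshold and a 30-day minimum (the function name and thresholds are mine, not Skool's):

```python
from datetime import date

# Illustrative thresholds from the suggestion above.
MIN_LEVEL = 6
MIN_DAYS_ACTIVE = 30

def can_access_chat(level: int, joined: date, today: date) -> bool:
    """Gate chat behind both a level and an account-age requirement."""
    days_active = (today - joined).days
    return level >= MIN_LEVEL and days_active >= MIN_DAYS_ACTIVE
```

Requiring both conditions matters: a bot that farms levels quickly still has to wait out the 30 days, and a patient lurker still has to engage.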
I also think smarter detection tools could make a big difference. AI could flag suspicious patterns, like rapid-fire commenting. Combined with something as simple as a CAPTCHA during account creation, it'd be a strong first line of defense against fake accounts.
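Even before any AI, rapid-fire commenting can be caught with a basic sliding-window counter. Here's a small sketch, assuming made-up thresholds (more than 5 comments in 10 seconds trips the flag):

```python
from collections import deque

def make_rapid_fire_detector(max_posts: int = 5, window_seconds: float = 10.0):
    """Flag an account that exceeds max_posts comments within any rolling window."""
    timestamps = deque()

    def record_comment(t: float) -> bool:
        """Record a comment at time t (seconds); return True if it trips the flag."""
        timestamps.append(t)
        # Drop timestamps that have fallen out of the window.
        while timestamps and t - timestamps[0] > window_seconds:
            timestamps.popleft()
        return len(timestamps) > max_posts

    return record_comment
```

A flag like this wouldn't ban anyone on its own; it would just surface accounts for review, which keeps false positives cheap.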
Another idea is to reward quality engagement. Instead of letting comments alone drive leveling up, actions like completing courses or contributing meaningful posts could count more. This way, genuine participation is encouraged, and bots don’t have an easy path to level up.
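One way to picture weighted leveling is a simple scoring table. The action names and weights below are purely illustrative, not Skool's actual formula:

```python
# Assumed weights: deep contributions count far more than comment volume.
ACTION_WEIGHTS = {
    "comment": 1,
    "post": 5,
    "course_completed": 20,
}

def engagement_score(actions: dict[str, int]) -> int:
    """Sum weighted action counts; unknown action types score zero."""
    return sum(ACTION_WEIGHTS.get(name, 0) * count for name, count in actions.items())
```

Under weights like these, a bot spamming fifty comments still scores below a member who wrote a few posts and finished two courses, which is exactly the incentive shift being suggested.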
Lastly, it’d be helpful if communities could share blocklists of known spammers. If one community flags a bad actor, that info could help others proactively avoid the same issues.
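Mechanically, the sharing idea boils down to checking a new member against the union of each participating community's flag list. A minimal sketch, with hypothetical IDs:

```python
def is_known_spammer(account_id: str, blocklists: list[set[str]]) -> bool:
    """Return True if any participating community has flagged this account."""
    return any(account_id in bl for bl in blocklists)
```

In practice this would need opt-in sharing and an appeals path, but the lookup itself is trivial.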
These tweaks could make Skool an even safer, more engaging space for all of us. Let me know what you think!