Hive Moderation
Hive’s complete solution to protect your platform.
Mobilizing the world's largest distributed workforce of human data labelers, we are raising the bar for automated content moderation. We offer both best-in-class models and manual moderation, allowing us to provide solutions at scale and outperform the contract workforces of business process outsourcers (BPOs).
In addition to our models, our distributed workforce can meet a variety of manual moderation needs. Whether you want to manually moderate user content or annotate training data at scale, our distributed system and consensus policy provide a level of precision that our competitors cannot match.
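The consensus policy described above can be sketched in miniature: several independent workers label the same item, and a label is accepted only when enough of them agree. This is a minimal illustration of the general technique; the function names, vote counts, and agreement threshold are assumptions, not Hive's actual implementation.

```python
from collections import Counter

def consensus_label(labels, min_votes=3, min_agreement=0.75):
    """Return (label, confident) for one item's worker labels.

    labels        -- labels submitted by independent workers
    min_votes     -- minimum number of labels before deciding
    min_agreement -- fraction of workers that must agree
    """
    if len(labels) < min_votes:
        return None, False           # not enough labels yet; keep collecting
    top_label, top_count = Counter(labels).most_common(1)[0]
    if top_count / len(labels) >= min_agreement:
        return top_label, True       # consensus reached
    return top_label, False          # disagreement: gather more labels or escalate

# Example: four workers, three of whom agree the image is "nsfw".
label, confident = consensus_label(["nsfw", "nsfw", "safe", "nsfw"])
```

Requiring agreement among multiple independent labelers is what lets a distributed workforce exceed the precision of any single annotator.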
WebPurify
World-Class Image Moderation & More. Discover a faster, more efficient way to keep user-generated content clean.
Given the complexities of nuance and context, our human moderators are trained to flag violations that fall into gray areas and to make final image decisions that align with your brand standards. Our Automated Intelligent Moderation (AIM) API service offers 24/7 protection from the risks of hosting user-generated content on your brand channels, detecting and removing unwanted images in real time.
This one-of-a-kind solution delivers the best of automated and live moderation through a single, easy-to-use API. AI technology detects images with a high probability of containing undesirable content, limiting the volume that requires human review. The remaining images are then queued for moderation by experts trained to flag any additional violations.
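The hybrid flow WebPurify describes, automated scoring for clear cases and a human queue for the gray area, can be sketched as simple threshold routing. The thresholds and function names below are illustrative assumptions, not WebPurify's actual API.

```python
# Hypothetical routing thresholds; a real deployment would tune these.
REJECT_ABOVE = 0.90    # near-certain violations are removed automatically
APPROVE_BELOW = 0.10   # near-certain clean images are approved automatically

def route_image(image_id, violation_score, human_queue):
    """Decide clear cases automatically; queue gray-area images for review."""
    if violation_score >= REJECT_ABOVE:
        return "rejected"
    if violation_score <= APPROVE_BELOW:
        return "approved"
    human_queue.append(image_id)     # gray area: a trained moderator decides
    return "pending_review"

queue = []
decisions = {img: route_image(img, score, queue)
             for img, score in [("a.jpg", 0.97), ("b.jpg", 0.02), ("c.jpg", 0.55)]}
# Only the ambiguous image reaches human review, limiting manual volume.
```

The design point is that the model never makes the hard calls; it only shrinks the set of images a human must look at.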
Two Hat
Custom neural network trained to triage reported content. For years, social networks have relied on users to report abuse, hate speech, and other online harms. Reports are sent to moderation teams, who review each abuse report individually. Many platforms receive thousands of reports daily, most of which can be closed without taking action. Meanwhile, reports containing time-sensitive content, such as suicide threats, violence, terrorism, and child abuse, risk going unseen or not being reviewed until it is too late.
There are legal implications as well. Germany's NetzDG law requires platforms to remove reported hate speech and illegal content within 24 hours or face fines of up to 50 million euros. Similar laws concerning reported content are being introduced in France, Australia, the UK, and across the globe.
With Two Hat's reported-content product, Predictive Moderation, platforms can train a custom AI model on their moderation team's consistent decisions.
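The triage the entry describes, auto-closing low-risk reports while time-sensitive ones jump the queue, can be sketched as below. The trained model is stubbed out as a plain scoring function, and all names, categories, and thresholds are hypothetical.

```python
import heapq

# Categories the entry calls time-sensitive; they go to the front of the queue.
URGENT = {"self_harm", "violence", "terrorism", "child_abuse"}

def triage(reports, predict_actionable, close_below=0.2):
    """Split reports into auto-closed IDs and a priority-ordered review queue."""
    review, closed = [], []
    for n, report in enumerate(reports):
        score = predict_actionable(report)      # stand-in for the trained model
        if score < close_below:
            closed.append(report["id"])         # most reports need no action
            continue
        priority = 0 if report["category"] in URGENT else 1
        heapq.heappush(review, (priority, -score, n, report["id"]))
    return closed, [rid for _, _, _, rid in sorted(review)]

reports = [
    {"id": 1, "category": "spam", "text": "buy now"},
    {"id": 2, "category": "self_harm", "text": "..."},
    {"id": 3, "category": "hate_speech", "text": "..."},
]
# Toy scorer: pretend the model learned that this spam report needs no action.
closed, queue = triage(reports, lambda r: 0.05 if r["category"] == "spam" else 0.9)
```

Here the spam report is closed automatically, and the self-harm report is reviewed before the hate-speech report even though the model scored them equally, which is exactly the ordering a 24-hour legal deadline demands.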
Community Sift
AI-powered chat filter and content moderation platform for social networks. A powerful, scalable, and automated content moderation platform for social products. Simply connect to our API to moderate text, usernames, images, and videos in real time. Two Hat's AI-powered content moderation platform classifies, filters, and escalates more than 102 billion human interactions a month, including messages, usernames, images, and videos, all in real time. With an emphasis on surfacing online harms, including cyberbullying, abuse, hate speech, violent threats, and child exploitation, we enable clients across a variety of social networks to foster safe and healthy user experiences.
Maintain autonomy over your content moderation practices with the agency to adjust your settings, create flexible workflows, make real-time updates, and have full transparency into our solution. During a real-world crisis on your platform, where real lives are at stake, every second counts.
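The classify, filter, and escalate flow with platform-adjustable settings can be sketched as follows. The keyword lookup stands in for a real multi-signal classifier, and every category name, severity level, and threshold here is a placeholder, not Two Hat's actual models or policy labels.

```python
SETTINGS = {                 # per-platform thresholds the client can adjust
    "hate_speech": 2,        # filter at severity >= 2
    "violent_threat": 1,     # stricter: filter even mild matches
}
ESCALATE = {"violent_threat", "child_exploitation"}   # route to humans fast

TOY_LEXICON = {              # stand-in for a real trained classifier
    "idiot": ("hate_speech", 2),
    "kill you": ("violent_threat", 3),
}

def moderate(text):
    """Return ("allow" | "filter" | "escalate", matched category or None)."""
    lowered = text.lower()
    for phrase, (category, severity) in TOY_LEXICON.items():
        if phrase in lowered:
            if category in ESCALATE and severity >= SETTINGS.get(category, 1):
                return "escalate", category   # time-sensitive: human review now
            if severity >= SETTINGS.get(category, 99):
                return "filter", category     # block the message in real time
    return "allow", None

# A violent threat is escalated rather than merely filtered.
action, category = moderate("I will kill you")
```

Keeping the thresholds in a settings table is what gives a platform the autonomy the entry emphasizes: tightening or loosening a category is a configuration change, not a model change.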