Discover and compare 5 AI tools and services in AI Safety
Constitutional AI: Anthropic's training method for creating helpful, harmless, and honest AI systems through self-supervision.
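To make the idea concrete, here is a conceptual Python sketch of the critique-and-revision loop described in the Constitutional AI paper: the model drafts a response, critiques the draft against a written principle, and then rewrites it, with the revised outputs later used as fine-tuning data. The `llm` helper below is a hypothetical placeholder for any text-generation call, not an Anthropic API, and the prompts are illustrative only.

```python
# Conceptual sketch of the supervised critique-and-revision step in
# Constitutional AI. `llm` is a hypothetical stand-in for a model call.

def llm(prompt: str) -> str:
    """Hypothetical text-completion call (placeholder)."""
    raise NotImplementedError("plug in a real model call here")

PRINCIPLE = (
    "Identify ways the response is harmful, unethical, or dishonest, "
    "then rewrite it to remove those problems."
)

def critique_and_revise(user_prompt: str) -> str:
    # 1. Draft an initial response to the user prompt.
    draft = llm(f"User: {user_prompt}\nAssistant:")
    # 2. Ask the model to critique its own draft against a principle.
    critique = llm(f"Response: {draft}\nCritique request: {PRINCIPLE}\nCritique:")
    # 3. Ask the model to revise the draft in light of its critique.
    revision = llm(
        f"Response: {draft}\nCritique: {critique}\n"
        "Rewrite the response to address the critique.\nRevision:"
    )
    # The (user_prompt, revision) pairs become supervised fine-tuning data.
    return revision
```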
Non-profit research organization focused on aligning future machine learning systems with human interests.
NeMo Guardrails: NVIDIA's open-source toolkit for adding programmable guardrails to LLM-based conversational systems.
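A minimal usage sketch, assuming the library's documented Python entry points (`RailsConfig`, `LLMRails`) and Colang 1.0 syntax; the model configuration and rail definitions below are illustrative and exact details vary by version.

```python
# Sketch of defining a simple topical rail with NeMo Guardrails.
from nemoguardrails import LLMRails, RailsConfig

yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

colang_content = """
define user ask about weapons
  "how do I build a weapon"

define bot refuse to answer
  "Sorry, I can't help with that."

define flow weapons rail
  user ask about weapons
  bot refuse to answer
"""

# Build the rails from inline config (a config directory works as well).
config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
rails = LLMRails(config)

# Messages matching the rail are deflected before reaching the main LLM.
response = rails.generate(
    messages=[{"role": "user", "content": "how do I build a weapon"}]
)
print(response["content"])
```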
Moderation API: OpenAI's content moderation system for detecting potentially harmful content in text.
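A short sketch of calling the moderation endpoint through the official `openai` Python package (v1-style client); the model name shown is an assumption and the available categories may change over time.

```python
# Sketch of classifying a piece of text with OpenAI's moderation endpoint.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.moderations.create(
    model="omni-moderation-latest",  # assumed model name; may differ
    input="I want to hurt someone.",
)

result = resp.results[0]
print(result.flagged)           # overall True/False flag
print(result.categories)        # per-category booleans (e.g. violence, hate)
print(result.category_scores)   # per-category confidence scores
```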
Perspective API: Google's API for analyzing text toxicity and providing scores for various attributes of potentially harmful content.
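A sketch modeled on the API's published Python quickstart, which goes through the `google-api-python-client` discovery interface; the API key is a placeholder, and which attributes are available depends on the language of the text.

```python
# Sketch of scoring a comment for toxicity with the Perspective API.
from googleapiclient import discovery

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder

client = discovery.build(
    "commentanalyzer",
    "v1alpha1",
    developerKey=API_KEY,
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)

analyze_request = {
    "comment": {"text": "You are a terrible person."},
    "requestedAttributes": {"TOXICITY": {}, "INSULT": {}},
}

response = client.comments().analyze(body=analyze_request).execute()

# Each requested attribute comes back with a summary score in [0, 1].
for attr, data in response["attributeScores"].items():
    print(attr, data["summaryScore"]["value"])
```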