
Llama Guard 2 8B

An 8B-parameter safeguard model built on Llama 3 that classifies LLM inputs and outputs, flagging unsafe content and policy violations.

This model is not available on Together’s Serverless API.

Deploy this model on an on-demand Dedicated Endpoint, or pick a supported alternative from the Model Library.
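As a classifier, the model takes a conversation wrapped in a fixed prompt template and returns a short verdict: `safe`, or `unsafe` followed by the violated category codes. The sketch below shows how that request and response might be handled client-side; the category list and template wording follow the published Llama Guard 2 model card, but treat the exact strings as assumptions to verify against the official documentation.

```python
# Sketch of the Llama Guard 2 prompt format and verdict parsing.
# Template text and S1-S11 taxonomy are assumptions based on the
# published Llama Guard 2 model card; verify before relying on them.

CATEGORIES = """S1: Violent Crimes.
S2: Non-Violent Crimes.
S3: Sex Crimes.
S4: Child Exploitation.
S5: Specialized Advice.
S6: Privacy.
S7: Intellectual Property.
S8: Indiscriminate Weapons.
S9: Hate.
S10: Self-Harm.
S11: Sexual Content."""

PROMPT_TEMPLATE = """[INST] Task: Check if there is unsafe content in 'User' messages in conversations according our safety policy with the below categories.

<BEGIN UNSAFE CONTENT CATEGORIES>
{categories}
<END UNSAFE CONTENT CATEGORIES>

<BEGIN CONVERSATION>

User: {message}

<END CONVERSATION>

Provide your safety assessment for ONLY THE LAST User message in the above conversation:
- First line must read 'safe' or 'unsafe'.
- If unsafe, a second line must include a comma-separated list of violated categories. [/INST]"""


def build_prompt(message: str) -> str:
    """Wrap a single user message in the classification prompt."""
    return PROMPT_TEMPLATE.format(categories=CATEGORIES, message=message)


def parse_verdict(completion: str):
    """Parse the model's completion into (is_safe, violated_categories)."""
    lines = [ln.strip() for ln in completion.strip().splitlines() if ln.strip()]
    if not lines:
        raise ValueError("empty completion")
    is_safe = lines[0].lower() == "safe"
    categories = [] if is_safe or len(lines) < 2 else [
        c.strip() for c in lines[1].split(",")
    ]
    return is_safe, categories
```

On a Dedicated Endpoint you would send `build_prompt(...)` as the raw prompt to the completions API and run the returned text through `parse_verdict`. Note the template asks for an assessment of only the last user message, so multi-turn conversations need each turn rendered into the `<BEGIN CONVERSATION>` block.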

Model details
  • Model provider
    Meta
  • Type
    Moderation
  • Main use cases
    Small & Fast
    Moderation
  • Parameters
    8B
  • Context length
    8K