Add Anthropic-compatible token counting endpoint using a hybrid approach:

- Local estimation with gpt-tokenizer for text content (~95% accuracy)
- API-based counting for complex content (images, documents)
- Automatic fallback to local estimation on API errors

This resolves warnings in LiteLLM and other clients that rely on
pre-request token counting.
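The routing logic described above can be sketched as follows. This is an illustrative sketch, not the actual implementation: `ContentBlock`, `estimateTokensLocally`, and `countViaApi` are hypothetical names, and a chars/4 heuristic stands in for gpt-tokenizer's encoder to keep the example self-contained.

```typescript
// Hypothetical content-block shape, loosely modeled on Anthropic message content.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "image" }
  | { type: "document" };

// Local estimate; a rough ~4 chars/token heuristic stands in here for
// gpt-tokenizer's encode(text).length.
function estimateTokensLocally(text: string): number {
  return Math.ceil(text.length / 4);
}

// Complex content (images, documents) cannot be counted locally.
function isComplex(blocks: ContentBlock[]): boolean {
  return blocks.some((b) => b.type === "image" || b.type === "document");
}

// Hybrid counter: local path for text-only input, API path for complex
// content, automatic fallback to the local estimate on API errors.
async function countTokens(
  blocks: ContentBlock[],
  countViaApi: (blocks: ContentBlock[]) => Promise<number>
): Promise<number> {
  const localEstimate = blocks
    .filter((b): b is { type: "text"; text: string } => b.type === "text")
    .reduce((sum, b) => sum + estimateTokensLocally(b.text), 0);

  if (!isComplex(blocks)) return localEstimate; // text-only: stay local

  try {
    return await countViaApi(blocks); // exact count from upstream API
  } catch {
    return localEstimate; // API error: degrade to local estimate
  }
}
```

For example, a text-only request never touches the API, while a request containing an image is counted upstream and only falls back to the local estimate if that call fails.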