Scored across 23 sub-signals in 6 dimensions
Scoring engine v1 (beta) · actively being expanded
Phase 1: Core sub-signal architecture (live)
Phase 2: Permission scope & expanded collection (in progress)
Trust Assessment
AI Assessment
LangSmith is an observability and evaluation platform from LangChain, with an MIT-licensed SDK, for monitoring and debugging LLM applications. The service shows solid operational reliability, with 98.00% uptime over the last 30 days and minimal security exposure (1 CVE). While transparency and maintenance scores are solid, weekly download figures are currently unavailable from the package registry, and teams should verify the SDK's maturity for critical production workloads given its relatively recent emergence in the LLM tooling ecosystem.
Generated by Fabric AI · Mar 4, 2026 at 10:51 PM
Service Health (30d)
98.00%
p50: 189ms · p99: 1883ms
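The 98.00% uptime figure implies a concrete downtime budget over the 30-day window; a minimal sketch of that arithmetic (window length taken from the panel header above):

```python
# Convert a 30-day uptime percentage into absolute downtime.
UPTIME_PCT = 98.00   # from the Service Health (30d) panel
WINDOW_DAYS = 30

downtime_fraction = 1 - UPTIME_PCT / 100
downtime_hours = downtime_fraction * WINDOW_DAYS * 24

print(f"{downtime_hours:.1f} hours of downtime in {WINDOW_DAYS} days")
```

At 98% uptime the service can be unavailable for roughly 14.4 hours per month, which is worth weighing against the incident list below.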
Avg Latency
216ms
averaged across 30d health checks
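The p50/p99 and average figures above are standard summary statistics over raw health-check latencies; a self-contained sketch of how such numbers could be derived (the sample data is invented for illustration):

```python
# Percentile of latency samples via linear interpolation between
# neighboring order statistics (numpy's default convention).
def percentile(samples, p):
    xs = sorted(samples)
    # Fractional rank: position p% of the way through the sorted list.
    rank = (len(xs) - 1) * p / 100
    lo, frac = int(rank), rank - int(rank)
    if frac == 0:
        return float(xs[lo])
    return xs[lo] + (xs[lo + 1] - xs[lo]) * frac

latencies_ms = list(range(100, 300, 2))  # 100 fake health-check samples
p50 = percentile(latencies_ms, 50)
p99 = percentile(latencies_ms, 99)
avg = sum(latencies_ms) / len(latencies_ms)
print(p50, p99, avg)
```

A large gap between p50 and p99, as in the panel above (189ms vs 1883ms), indicates a long latency tail rather than uniformly slow responses.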
Weekly Downloads
—
no package registry data
Incidents & Alerts (last 90 days)
Mar 4 · Trust score increased by 1.21 · score 4.45
Mar 1 · Trust score decreased by 1.23 · score 3.24
Feb 26 · LangSmith added to Trust Index · score 4.32
Showing 3 of 3 events
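Each event above pairs a score delta with the resulting trust score; a small sketch checking that a delta applied to the prior snapshot reproduces the listed result (values taken from the Mar 1 and Mar 4 entries):

```python
# Each event records (delta, resulting score); the new score should
# equal the previous snapshot plus the delta, rounded to 2 places.
def apply_delta(prev_score, delta):
    return round(prev_score + delta, 2)

# Mar 1 snapshot was 3.24; Mar 4 recorded an increase of 1.21.
mar4 = apply_delta(3.24, 1.21)
print(mar4)  # 4.45, matching the Mar 4 entry
```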
Score History (90 snapshots)
Chart: trust score from Feb 26 to Mar 5, scale 0.00 to 5.00
Supply Chain & Dependencies (trust chain)
claude-agent-sdk
pypi · >=0.1.0; python_version >= "3.10" and extra == "claude-agent-sdk"
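The dependency line above is a PEP 508 requirement string: `claude-agent-sdk` is only pulled in on Python 3.10+ and only when the matching extra is requested. A sketch of parsing it with the third-party `packaging` library (the same parser pip uses internally):

```python
from packaging.requirements import Requirement

# The requirement string as shown in the trust-chain entry above.
req = Requirement(
    'claude-agent-sdk>=0.1.0; '
    'python_version >= "3.10" and extra == "claude-agent-sdk"'
)

print(req.name)            # claude-agent-sdk
print(str(req.specifier))  # >=0.1.0
# The marker only matches when the extra is requested on a new-enough Python.
print(req.marker.evaluate({"python_version": "3.11", "extra": "claude-agent-sdk"}))
print(req.marker.evaluate({"python_version": "3.9", "extra": "claude-agent-sdk"}))
```

Requesting the extra at install time, e.g. `pip install "langsmith[claude-agent-sdk]"` (assuming `langsmith` is the distribution declaring this extra), is what activates the `extra == "claude-agent-sdk"` branch of the marker.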