Virtual Workshop
Tue, May 19, 6:00 PM - 7:00 PM (UTC)

Trusting AI Extraction: Confidence Scores, Governance, and Human-in-the-Loop Design

About this event

The number one barrier to AI adoption is not technology; it is trust. When extracted data feeds into compliance workflows, financial systems, or regulated processes, "mostly accurate" is not good enough. This roundtable tackles the trust problem head-on, exploring Box Extract's new confidence scoring system, governance controls, and design patterns for human-in-the-loop validation. We will examine how enterprises build extraction workflows in which AI handles high-confidence cases automatically while routing uncertain results to human reviewers. The session provides a practical framework for deploying AI extraction with confidence.


Topics we'll cover:

  • How Box Extract confidence scores work (aggregated LLM responses, decimal percentages, Low/Medium/High labels) and how to set thresholds for your use case
  • Design patterns for human-in-the-loop workflows and the Accuracy / Automation curve
  • How Box's native governance model (permissions, classification, audit trails) provides trust infrastructure
  • Practical governance policies for AI extraction: who can configure agents, which folders are in scope, how to audit extraction results
  • Regulatory considerations: HIPAA, GDPR, and industry-specific requirements for AI-processed content
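To make the threshold and routing ideas above concrete, here is a minimal sketch of a confidence-based router. The threshold values, field shapes, and function names are illustrative assumptions for discussion, not Box Extract's actual API or default cutoffs.

```python
# Hypothetical sketch: map decimal confidence scores to Low/Medium/High
# labels and route an extraction result to auto-accept or human review.
# Thresholds (0.6 / 0.85) are placeholders; tune them for your use case.

def label_confidence(score: float, low: float = 0.6, high: float = 0.85) -> str:
    """Map a decimal confidence score (0.0-1.0) to a Low/Medium/High label."""
    if score >= high:
        return "High"
    if score >= low:
        return "Medium"
    return "Low"

def route_extraction(fields: dict[str, tuple[str, float]]) -> dict:
    """Auto-accept only when every field is High confidence;
    otherwise queue the document for a human reviewer."""
    labels = {name: label_confidence(score)
              for name, (_value, score) in fields.items()}
    if all(label == "High" for label in labels.values()):
        return {"action": "auto_accept", "labels": labels}
    return {"action": "human_review", "labels": labels}
```

Raising the `high` threshold moves you along the accuracy/automation curve: fewer documents are auto-accepted, but those that are carry less risk.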