Generative AI has become a hot topic. How is your organization thinking about this exciting new technology? Here are the principles that guide Box AI, and we welcome your comments.
Box is committed to the following principles when using and providing AI capabilities in Box products.
- Full customer control of AI usage. We’re committed to ensuring that our customers maintain control over their own data and processes. Customers may enable or disable the use of AI and decide whether AI should be applied to their content.
- No training models using customer content without explicit approval. Box won’t train AI models using customer content without the customer’s explicit authorization. For example, if a customer wants to create a customized AI model based on some of their content, they will need to explicitly agree to allow this application of AI to their content.
- Explanations of AI output. Wherever reasonable, we give users a clear understanding of how our AI systems work and the rationale behind the AI output, so they have the context needed to interpret the results.
- Strict adherence to permissions. AI systems adhere to the same strict controls and permissions policies that determine access to content across Box. Our architecture rigorously protects against data leakage and unauthorized access.
- Data security. We safeguard customer data by implementing robust security protocols, including encryption and data-security best practices, to maintain strict confidentiality and bolster security.
- Transparency. We are committed to being transparent about our AI practices, technology, vendors, and data usage.
- Protection of user and enterprise data. Box remains committed to complying with applicable privacy and security regulations by prioritizing the continued protection of both end-user and enterprise data.
- Trustworthy AI models. We are dedicated to using high-quality AI models from trusted vendors to support the accuracy, reliability, and safety of our AI solutions.