By Thierry Nicault, Area Vice President – Middle East and North Africa, Salesforce
As with all of our innovations, we are embedding ethical guardrails and guidance across our products to help customers innovate responsibly and catch potential problems before they happen.
Given the tremendous opportunities and challenges emerging in this space, we’re building on our Trusted AI Principles with a new set of guidelines focused on the responsible development and implementation of generative AI.
We are still in the early days of this transformative technology, and these guidelines are very much a work in progress — but we’re committed to learning and iterating in partnership with others to find solutions.
Below are five guidelines we’re using to guide the development of trusted generative AI, here at Salesforce and beyond.
Accuracy: We need to deliver verifiable results that balance accuracy, precision, and recall by enabling customers to train models on their own data. We should communicate when there is uncertainty about the veracity of the AI’s response and enable users to validate those responses. This can be done by citing sources, explaining why the AI gave the responses it did (e.g., via chain-of-thought prompts), highlighting areas to double-check (e.g., statistics, recommendations, dates), and creating guardrails that prevent some tasks from being fully automated (e.g., launching code into a production environment without a human review).
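As a minimal sketch of what such a guardrail might look like in practice, a generated answer could be routed to a human whenever it lacks citations or the model reports low confidence. The response fields, threshold, and trigger logic here are illustrative assumptions, not a Salesforce API:

```python
from dataclasses import dataclass, field

@dataclass
class AIResponse:
    text: str
    sources: list = field(default_factory=list)  # citations the user can check
    confidence: float = 1.0                      # model-reported confidence, 0..1

def present_response(resp: AIResponse, threshold: float = 0.8) -> dict:
    """Attach validation cues before a generated answer reaches the user."""
    # Guardrail: anything uncited or low-confidence is flagged for human
    # review instead of being fully automated.
    needs_review = resp.confidence < threshold or not resp.sources
    return {
        "text": resp.text,
        "sources": resp.sources,
        "needs_human_review": needs_review,
    }
```

The same pattern extends to the deployment example above: a deploy action would simply never clear the review flag without an explicit human sign-off.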
Safety: As with all of our AI models, we should make every effort to mitigate bias, toxicity, and harmful output by conducting bias, explainability, and robustness assessments, as well as red teaming. We must also protect the privacy of any personally identifiable information (PII) present in the data used for training, and create guardrails to prevent additional harm (e.g., forcing code to be published to a sandbox rather than automatically pushed to production).
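To make the PII guardrail concrete, here is a toy redaction pass over training text. The regular expressions and placeholder format are assumptions for the sketch; a production system would rely on a vetted PII-detection service rather than hand-rolled patterns:

```python
import re

# Illustrative patterns only; real PII detection covers many more types
# (names, addresses, IDs) and uses far more robust matching.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace likely PII with typed placeholders before text is used for training."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text
```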
Honesty: When collecting data to train and evaluate our models, we need to respect data provenance and ensure that we have consent to use data (e.g., open-source, user-provided). We must also be transparent that an AI has created content when it is autonomously delivered (e.g., chatbot response to a consumer, use of watermarks).
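For autonomously delivered text, the transparency step can be as simple as labeling the message at send time. This is a hypothetical sketch of that idea; watermarking generated media, by contrast, is a considerably more involved technique:

```python
def deliver_reply(text: str, ai_generated: bool) -> str:
    """Label a message before it is sent so recipients know its origin."""
    # Transparency guardrail: autonomously generated content is always
    # marked as such; human-written messages pass through unchanged.
    return f"[AI-generated] {text}" if ai_generated else text
```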
Empowerment: In some cases it is best to fully automate processes, but in others AI should play a supporting role to the human, or human judgment is required. We need to identify the appropriate balance to “supercharge” human capabilities and make these solutions accessible to all (e.g., generating ALT text to accompany images).
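One way to reason about that balance is a small decision rule over task properties. The two risk dimensions below (reversibility and impact) are illustrative assumptions, not a prescribed taxonomy:

```python
from enum import Enum

class Mode(Enum):
    AUTOMATE = "automate"      # low-risk, reversible work (e.g., drafting ALT text)
    ASSIST = "assist"          # AI drafts, a human approves
    HUMAN_ONLY = "human_only"  # judgment calls stay with people

def choose_mode(reversible: bool, high_impact: bool) -> Mode:
    """Pick how much of a task the AI should handle."""
    if high_impact:
        return Mode.HUMAN_ONLY  # human judgment is required
    return Mode.AUTOMATE if reversible else Mode.ASSIST
```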
Sustainability: As we strive to create more accurate models, we should develop right-sized models where possible to reduce our carbon footprint. When it comes to AI models, larger doesn’t always mean better: In some instances, smaller, better-trained models outperform larger, more sparsely trained models.
Learn more about Trusted AI at Salesforce, including the tools we deliver to our employees, customers, communities, and partners for developing and using AI responsibly, accurately, and ethically.