WorkBoard and many other organizations have long used AI, machine learning, and natural language processing to improve stakeholder experiences, but the advent of generative AI presents tremendous new opportunities and creates new concerns. Although generative AI is still in the early stages of adoption, it is already making a real difference in our ability to gather, synthesize, and author information more quickly – activities at the center of knowledge work.
To drive the most beneficial outcomes for our customers, our use of AI must be trustworthy and ethical, and as regulations emerge, compliant.
The pillars of our commitment to responsible AI are:
We accelerate teams, not replace them
WorkBoard’s use of AI generates drafts of OKRs, action plans, scorecards, and other strategy execution artifacts that enable users to make their own decisions faster. Our Co-Author and its AI are collaborators with the user – visibly and transparently – rather than substitutes for the user’s ideas, input, or insight. Users can choose to accept, modify, or reject the drafts or briefs that our Co-Author generates. This “human in the loop” use of AI honors the agency of our users while providing benefits that neither AI nor a human could achieve alone.
We are transparent
We make clear documentation on our architecture and software available to our customers, including how we use AI. This includes explaining how our learning models are built, trained, and tested. We value inclusion and fairness, and our governance process monitors our use of AI for unintended consequences. As we continue to both innovate and learn, we will maintain our deep commitment to, and controls for, explainability.
Your data is private by design
Data privacy and confidentiality are the foundations of trust for any platform, and our privacy and information security policies apply to our use of AI technologies. We grant customers control over their data's usage in our AI solutions. Privacy-by-design is a first principle of all of our development practices, and our use of AI and learning models is no exception. WorkBoard enables companies to harness the intelligence of their own strategy execution data managed in our platform while benefiting from the power of a domain-specific large language model to generate strong suggestions faster – without worrying that their data ever lands in the public domain, in ChatGPT, or in a competitor’s hands.
WorkBoard leverages an AI service to provide intelligent suggestions and prompts for generating OKRs based on the data available in the platform.
At WorkBoard, we take the security and privacy of your data seriously. We follow industry best practices and employ robust security measures to protect your OKR data. All communication between WorkBoard and the AI service is encrypted, and access to your data is strictly controlled and limited to authorized personnel.
Your OKR data is securely transmitted to the AI service over encrypted channels using industry-standard security protocols. We adhere to secure data transmission practices to ensure the confidentiality and integrity of your OKR data.
Our AI providers maintain a robust security infrastructure to prevent unauthorized access or misuse of customer data. They have implemented strict access controls, monitoring systems, and auditing mechanisms to protect your OKR data.
You retain full ownership and control over your OKR data. The AI service does not use or retain your data beyond the scope of the OKR generation process. WorkBoard does not share your data with any third parties without your explicit consent.
Your data is encrypted in transit. Any sensitive information is anonymized or tokenized before being sent to the AI service, ensuring the privacy and confidentiality of your data.
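To illustrate what tokenization of this kind can look like in general, here is a minimal sketch: sensitive substrings matching configured patterns are swapped for opaque tokens before text leaves the platform, and the tokens are mapped back after the AI service responds. The function names and the email pattern are illustrative assumptions, not WorkBoard's actual implementation.

```python
import re

def tokenize_sensitive(text, patterns):
    """Replace substrings matching any sensitive pattern with opaque
    tokens; return the redacted text plus a token-to-original mapping."""
    mapping = {}
    counter = 0

    def _sub(match):
        nonlocal counter
        token = f"<TOKEN_{counter}>"
        mapping[token] = match.group(0)
        counter += 1
        return token

    for pattern in patterns:
        text = re.sub(pattern, _sub, text)
    return text, mapping

def restore_tokens(text, mapping):
    """Reinsert the original values into text returned by the AI service."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text
```

With this shape, only the redacted text crosses the service boundary, while the mapping stays inside the platform and is used to restore the response for the user.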
The WorkBoard Co-Author is designed to provide high-quality suggestions while maintaining the confidentiality of your OKR data. The model focuses on generating relevant and useful prompts without retaining or disclosing any sensitive information.
Azure OpenAI and Anthropic Claude do not retain your OKR data beyond the scope of the OKR generation process. Once suggestions are generated, the data is discarded, ensuring that your data remains secure and private.
Both companies maintain various security certifications and compliance standards, including ISO 27001, SOC 2 Type II, HIPAA, and GDPR. These certifications ensure that their infrastructure and practices meet the highest security and privacy standards.
For more information on Azure OpenAI, visit the Azure OpenAI FAQ.
For more information on Anthropic Claude, visit Anthropic's Trust Page.