AI Principles
ZoomInfo’s AI principles are aligned with the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework. They form the foundation of our AI governance framework, drive how we create, implement, and use AI at the company, and have been incorporated into our risk management framework and our ongoing AI assurance work.
Valid and Reliable
For an AI system to be trustworthy, it must be valid and reliable. Our use of AI is built on our best-in-class B2B data, which is continually evaluated by our internal engineering and research teams. Additionally, our AI models go through a robust evaluation process in which human evaluators stress-test them against clearly defined and realistic test sets.
Safe
Safety is fundamental to creating and operating a sustainable AI system. Given the nature of our business, our processing activities remain relatively low-risk compared to many other uses of AI. Nonetheless, we continue to assess the risk of harm arising from our AI practices.
Secure and Resilient
All our AI systems remain within the scope of our wider cybersecurity and resilience programs, meaning they are subject to a range of measures during their design, implementation, and ongoing operation.
Accountable and Transparent
This Trust Center is intended to keep us transparent about ZoomInfo’s use of AI; if you have further questions, please email us at responsibleai@zoominfo.com. Our accountability is supported by the incorporation of AI risk into our Enterprise Risk Management Program, our AI Risk Management Framework, and our ongoing Assurance Program, which incorporates AI-related reviews.
Explainable and Interpretable
We seek to make our use of AI explainable and understandable to the user at the point of use, with our overall approach to AI development and governance addressed through this Trust Center.
Privacy-Enhanced
All our AI systems remain within the scope of our wider privacy program, meaning they are subject to a range of measures during their design, implementation, and ongoing operation. Our in-house Privacy team is trained to manage the privacy rights of the data subjects we come into contact with. We extend privacy rights to all data subjects, not only those covered by existing laws.
Fair and Free from Bias
Our use of AI is guided by our Company Values and our Guiding Principles, which are outlined in our Sustainability Report. Where ethical issues arise, we are guided by our Data Ethics Policy and refer matters of judgment to our cross-disciplinary Data Advisory Board.
AI Governance at ZoomInfo
Our approach to AI governance is integrated into our Enterprise Risk Management framework and our existing policies and procedures. It includes the following:
Data Ethics Policy
ZoomInfo processes data in accordance with applicable law; the Data Ethics Policy then addresses the broader question of what should be done beyond legal compliance. This Policy established ZoomInfo’s Data Advisory Board.
Data Advisory Board
A cross-department, cross-disciplinary group of ZoomInfo employees tasked with reviewing novel data use proposals, and other matters relating to data use that cannot be resolved through pre-existing procedures and standards, and advising Senior Management. The Board includes members from Legal, Compliance, Data Strategy, Information Security, Human Resources, Communications, and Infrastructure Engineering.
AI Risk Management Framework
ZoomInfo’s AI Risk Management Framework supports the Data Ethics Policy, and is aligned with the NIST AI Risk Management Framework.
Generative AI Security Policy
The Generative AI Security Policy outlines ZoomInfo’s information security controls for the use of generative AI. It addresses the overall security and protection of intellectual property, sets the standard for responsible use, and instructs users to monitor AI outputs and performance.
AI Use Guidelines
These employee-facing guidelines describe best practices and recommendations for the ethical and responsible use of artificial intelligence within ZoomInfo, aligning our day-to-day practices with the ZoomInfo Risk Management Standard. They are designed to help employees build and apply AI responsibly in our products and services, and they direct employees to produce AI applications that comply with ethical and legal requirements, protect data privacy and security, and promote fairness and accountability.
All these policy documents align with the overall ZoomInfo governance framework, which also encompasses our Internal Privacy Policy, Company Values, Information Security Policy, Sustainability Report, and Code of Business Conduct & Ethics.
AI Use at ZoomInfo
ZoomInfo has utilized AI for many years. The quality of the data in our platform is driven by machine learning technology combined with oversight from our human data and research teams. These machine learning algorithms have allowed us to parse data more efficiently and make sound decisions about quality and accuracy.
As AI techniques improve, so have our offerings. Currently, the three main AI use cases at ZoomInfo are as follows:
- Data collection and processing: For example, applying AI to ZoomInfo’s data pipeline to detect valuable information in the extraction process, or identifying and eliminating erroneous data prior to publishing.
- Summarizing and synthesizing data: For example, within our Chorus offering we are able to generate useful meeting summaries and identify action items based upon the content of a conversation. Elsewhere, generative AI may be used to simplify user interfaces and make outputs easier to understand.
- Personalizing the user experience: For example, within our platforms, customers will see how our AI analyzes their market for them, identifies ideal targets, and recommends their next best action.
ZoomInfo’s use of data has aligned, and will continue to align, with our B2B Sales and Marketing offerings. As such, our use of AI is and will continue to be focused on B2B use cases, and will always consider the fundamental rights, freedoms, and expectations of our customers and data subjects.
The TRUSTe Responsible AI Certification is the first AI certification focused explicitly on data protection and privacy. Incorporating principles from the EU AI Act, the NIST AI Risk Management Framework, ISO/IEC 42001, and the OECD AI Principles, it offers a clear framework for companies aiming to comply with core principles of responsible AI. The certification gives ZoomInfo a distinctive advantage as an early adopter in AI data governance and demonstrates responsible handling of AI.