Oct 17, 2024
Jennifer Miller is Grammarly’s General Counsel. She focuses on enabling Grammarly to grow and innovate while carefully managing business risk. Her responsibilities include navigating AI and regulation and scaling the company’s managed business.
Suha Can is Grammarly’s CISO and VP of Engineering, leading global security, privacy, compliance, and identity for the company. He’s dedicated to securing the data of Grammarly’s more than 30 million users and 70,000 teams at enterprises and organizations worldwide.
As AI continues to reshape the tech landscape, companies like Grammarly face new challenges in balancing innovation with privacy and security. Advanced AI tools can improve user experiences, but businesses must also manage the privacy and security risks that come with them. Grammarly, known for its AI-powered communication assistant, puts user trust first by embedding transparency and user control at the core of its privacy and security strategy. So how can companies in the AI space adopt similar practices, innovate responsibly, and stay ahead of evolving privacy and security risks?
Grammarly champions transparency and has built a privacy and security program centered on user trust and control. By establishing governance frameworks, regularly reviewing its products for privacy, security, and AI-related risks, and maintaining collaborative communication between legal and technical teams, Grammarly proactively mitigates risks while staying compliant with regulations. The company also publishes clear privacy practices on its public-facing web pages and ensures that its contracts with customers and third-party vendors reflect the same principles of transparency.
In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with Jennifer Miller, General Counsel, and Suha Can, CISO, of Grammarly about how the company has built a privacy and security program centered on trust and transparency. Jennifer and Suha discuss how they navigate AI advancements and regulatory challenges by prioritizing user control, conducting privacy and security audits, and fostering collaboration between legal and technical teams. They also emphasize the importance of proactive governance and responsible AI practices for keeping pace with an evolving regulatory landscape.