Abstract / Description: As generative AI transforms software development, enterprises need to rethink how they build trust in their code. Traditionally, developers have been accountable for authoring and testing the code they ship, but AI-generated code introduces new challenges. Without approved processes, using LLMs to generate code can expose organizations to IP leakage, security risks, and lower code quality.
This session explores how businesses can integrate AI safely into their Software Development Life Cycle (SDLC). We will discuss strategies for approving, tracking, and monitoring the performance of LLMs. Attendees will learn how tools like Sonar can help ensure transparency, maintain high quality standards, and build trust in AI-driven development workflows.
Intended for: SonarQube Server and SonarQube Cloud users