From ChatGPT to GitHub Copilot, generative AI is here, and with it have come scores of new tools that make it easier than ever for developers to complete coding tasks faster, innovate, and create better software.
But this speed can come at a cost, especially if it’s sacrificing quality in the process.
In a recent study done by Salesforce, 79% of senior IT leaders reported concerns that these technologies bring the potential for security risks. And 99% of them say that they need to take measures to equip their organizations to use generative AI successfully and responsibly.1
In this post, we’ll cover the complications of introducing generative AI tools to the SDLC—and how developers can overcome them by refocusing on risk management and AI bug detection along with day-to-day code creation.
Velocity is introducing a new category of problems
Think of it this way: You’re using GitHub Copilot code completions in your editor. It’s helping you automate tasks and move faster (up to 55% faster) than ever before. But given this increase in output, there are sure to be downstream effects in your SDLC. Don’t forget that more code means more code review, too.
With all of this LLM-imposed velocity, it’s almost certain that there will be an increase in the number of bugs released into production. This can not only create day-to-day bottlenecks, but it can also have long-term impacts on users, your organization’s reputation, and your bottom line. Think: Just a few latent bugs can cost you a lot of money.
“Even if [AI] gets it right nine times out of 10, if one time out of 10 you’re shipping bugs in your code, that’s pretty bad.” – Ravi Parikh, Co-Founder and CEO of Airplane2
Big changes on the horizon
Today, it’s common for teams to have tooling that creates low-friction deployments. But once you bring in generative AI, everything changes. That’s because those tools and processes were implemented when code was exclusively written to accommodate the velocity of a human—not a human with the help of AI.
Reckoning with this increased volume is going to create a major sea change in the SDLC. To manage it smartly, you’ll need a new breed of tools to assist developers, including code review tools and observability tools to monitor AI-generated code separately.
Evolving your role
This is the sea change we’re talking about. With GenAI tools speeding up code velocity, developers need to reallocate their time to code review and risk management. In a sense, they need to work from both the cockpit and the control tower: creating code and ensuring that risk is managed.
The best way to help our industry adapt to (and make use of) this surge in new code is to shift left with predictive tools and provide relevant context about how a change is going to perform in production. These tools will help engineers understand which pull requests require concentrated amounts of their time and risk remediation.
Update your stack
“Now, with the acceleration wrought by generative AI as well as other emerging technologies, tech leaders must build long-term strategies to address the growing demands on the technology organization.”3
This is where Shepherdly comes in.
Shepherdly helps devs manage and prioritize pull requests based on a risk score that ranges from 0 to 100 and represents the likelihood that each pull request will cause a bug. The higher the score, the more likely a bug will occur.
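To illustrate how a 0–100 risk score might drive day-to-day triage, here’s a minimal sketch. The data shapes, threshold, and function names are hypothetical for illustration—they are not Shepherdly’s actual API:

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    number: int
    title: str
    risk_score: int  # 0-100: modeled likelihood this PR introduces a bug


def triage(prs, threshold=70):
    """Split PRs into high-risk (needs concentrated review) and routine.

    High-risk PRs are returned first, sorted by descending score, so
    reviewers can spend their time where a bug is most likely.
    """
    high = sorted(
        (pr for pr in prs if pr.risk_score >= threshold),
        key=lambda pr: pr.risk_score,
        reverse=True,
    )
    routine = [pr for pr in prs if pr.risk_score < threshold]
    return high, routine


# Hypothetical scored PRs for demonstration.
prs = [
    PullRequest(101, "Refactor auth middleware", 85),
    PullRequest(102, "Fix typo in README", 5),
    PullRequest(103, "Migrate billing schema", 92),
]
high, routine = triage(prs)
```

With these sample scores, the schema migration (92) and auth refactor (85) land in the high-risk queue for focused review, while the README typo fix stays on the fast path.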
Here’s how it works:
- Shepherdly scans all of your pull requests and associated metadata, including lines changed, the file’s bug fix history, number of reviewers, and more.
- We classify bug fixes and regular changes.
- When a bug fix is identified (via human-applied labels such as an issue key or number, or via NLP classification), we assume the vast majority of its line changes are problematic. From there, we build a line-by-line lineage of the git history to find the pull request that introduced the buggy code.
- We take all of the pull request data and bug fix labels and build a predictive model custom to your team.
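The lineage step above can be sketched in miniature: for each line a bug fix removes or rewrites, look at who last touched that line before the fix, and treat that change as a suspect. In practice this means walking `git blame` on the pre-fix revision; the toy in-memory blame map below stands in for real git output, and the function name is ours, not Shepherdly’s:

```python
def find_introducing_prs(fix_changed_lines, blame):
    """Return the set of PRs suspected of introducing a bug.

    fix_changed_lines: line numbers the bug-fix PR removed or rewrote.
    blame: mapping of line number -> PR number that last touched it,
           a stand-in for `git blame` on the revision before the fix.
    """
    return {blame[line] for line in fix_changed_lines if line in blame}


# Toy pre-fix blame: PR 7 last touched lines 10-12, PR 3 last touched line 40.
blame = {10: 7, 11: 7, 12: 7, 40: 3}

# The bug fix rewrote lines 11, 12, and 40, so PRs 7 and 3 are suspects.
suspects = find_introducing_prs([11, 12, 40], blame)
```

Labeling those suspect PRs as bug-introducing is what produces the training data for the team-specific predictive model described above.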
Big picture, Shepherdly can provide a quantifiable metric to justify when your team needs to slow down before a bug costs you money and creates a negative experience for users and customers. Don’t forget that remediation is expensive, so applying this practice with precision can actually make you faster overall.
Ready to keep up with the pace of AI while keeping your code secure? Request a demo today.
1“IT Leaders Call Generative AI a ‘Game Changer’ but Seek Progress on Ethics and Trust.” Salesforce, 6 March 2023, https://www.salesforce.com/news/stories/generative-ai-research/. Accessed 3 July 2023.
2Craig, Lev. “The promises and risks of AI in software development.” TechTarget, 26 April 2023, https://www.techtarget.com/searchitoperations/feature/The-promises-and-risks-of-AI-in-software-development. Accessed 30 June 2023.
3Mark, Fiona. “Generative AI Will Change The Future Of Technology Organizations.” Forrester, 12 June 2023, https://www.forrester.com/blogs/generative-ai-will-change-the-future-of-technology-organizations/. Accessed 3 July 2023.