Here’s how I think AI can help developers collaborate (and its limitations)
Despite the leaps and bounds in AI technology, there’s still a lack of “deep” implementation of AI.
AI sometimes feels like a supercharged toddler — enthusiastic but not always helpful, especially with non-shallow work.
And I think that’s why AI for collaboration, especially for developers, hasn’t been a huge deal until recently.
Adoption’s already happening for technical tasks. What about collaboration?
In general, there’s a big divide in how engineering teams are approaching AI adoption.
Some teams are investing heavily in AI, leveraging it as a competitive advantage and reaping its rewards.
This report from mid-2023 showed that software development teams adopting AI have already seen a 2.5x speed increase in their software development lifecycle.
In other words, assuming an engineer works 8 hours per day, a day’s work now takes just over 3 hours — they’re reclaiming nearly 5 hours by using AI.
These teams expect the speedup to reach 3.5x within the next year. That’s almost 6 hours back, every day, per engineer.
According to the same report, a surprising 30% of software teams have no plans to adopt AI.
These teams aren’t all merely abstaining — some actively avoid AI, making avoidance part of their rules and culture. To put it another way, 30% of engineering teams are ceding hours every day to their AI-adopting competitors.
But this survey isn’t specific to collaboration. It covers AI for things like code implementation, code reviews and documentation.
Yet we all know that, at a strategic level, it isn’t just the speed of code implementation or code reviews that sends projects sideways. In fact, often it’s not the practical development at all.
It’s the sheer complexity of development operations. Dependencies are complicated, we don’t (can’t?) perfectly keep Jira up to date, strategic opportunities pass us by, risks get missed.
So imagine the impact if AI could deliver the same kinds of gains in collaboration that we already see in practical development…
So can AI help with collaboration or not? What might the natural limits be?
Right now, the AI collaboration tools available to developers excel mostly at structured, narrow tasks.
Search, summarisation, basic pattern recognition, natural language processing — every provider is rushing to implement shallow AI into their existing tools.
While this has its uses, the value is limited, and some applications of AI — like sentiment analysis, for example — are more novelty than utility.
That stuff, to me, is nothing to be overly excited about. There’s a reason there are so many shallow AI tools out there. Shallow AI is comparatively easy — just plug into GPT-4, feed it some data. Input, output.
The AI that’s going to be game-changing is reflective, proactive AI that can grasp the bigger picture as well as a human can — or, actually, better.
AI like this wouldn’t just summarise your Jira tickets. It would draw meaningful connections between disparate threads of work, reflect on them, and curate and present insights, opportunities and risks, just like a human would.
It would overcome the inherent human limitations of memory and awareness.
And this approach acknowledges a limitation of AI that, in my opinion, will (and should) stay the same for some time yet. That limitation is the ability of AI to make truly creative strategic decisions.
This sets the scene for where I think collaboration with AI can go: beyond just saving time, to doing something humans can’t really do, and serving us, the humans, with exactly what we need to do our jobs.
This might sound ambitious, but it’s not only feasible, it’s well within our reach given the current state of AI technology.
It’s just that a simple integration with GPT-4 isn’t enough — not nearly enough. This kind of processing requires complex architecture and long-term “memory”. But it’s definitely possible.
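To make “long-term memory” concrete, here’s a deliberately toy sketch — my own illustration, not any product’s actual architecture. The idea: past work events are stored, and the most relevant ones are recalled as context for each new question, rather than relying on whatever fits in a single prompt. A production system would use learned embeddings and a vector store; this version uses bag-of-words cosine similarity so it runs on the standard library alone, and all the event strings are made up.

```python
import re
from collections import Counter
from math import sqrt

def vectorise(text: str) -> Counter:
    """Bag-of-words vector; a real system would use a learned embedding."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class Memory:
    """Long-term store: remember events, recall the most relevant ones."""

    def __init__(self) -> None:
        self.events: list[str] = []

    def remember(self, event: str) -> None:
        self.events.append(event)

    def recall(self, query: str, k: int = 2) -> list[str]:
        q = vectorise(query)
        ranked = sorted(self.events, key=lambda e: cosine(q, vectorise(e)), reverse=True)
        return ranked[:k]

memory = Memory()
memory.remember("PR #412 merged: fixes auth token refresh bug")
memory.remember("JIRA-88 blocked: front-end login flow waiting on auth fix")
memory.remember("Docs updated for the billing API")

# Recall the single most relevant past event for a new question.
context = memory.recall("login flow blocked on auth", k=1)
```

The interesting design point isn’t the similarity metric — it’s that recall happens across every tool and every week of history, which is exactly what a one-shot GPT-4 integration can’t do.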
Getting AI right for development operations
Of course, in development, we need AI that not only simplifies our tasks but understands the ecosystem it’s meant to inhabit.
A tool for software development teams is going to need to play nicely with the tools engineers use, like GitHub and Jira.
Generic AI tools fall short. They can search, summarise and categorise, but can they parse a pull request, work out that the bug fix in it will unblock someone on the front-end team, and notify that person they can crack on?
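To illustrate the kind of cross-tool inference I mean, here’s a minimal sketch with hypothetical data shapes — real GitHub and Jira payloads are far richer, and the names here (`closes_issue`, `blocked_by`, the ticket keys) are made up for illustration. Once a merged PR is linked to the issue it closes, working out who it unblocks is essentially a join across tools:

```python
from dataclasses import dataclass, field

@dataclass
class PullRequest:
    number: int
    merged: bool
    closes_issue: str  # e.g. parsed from a "Fixes AUTH-42" line in the PR description

@dataclass
class Ticket:
    key: str
    assignee: str
    blocked_by: list[str] = field(default_factory=list)  # issue keys blocking this ticket

def who_to_notify(pr: PullRequest, tickets: list[Ticket]) -> list[str]:
    """Assignees whose tickets were blocked by the issue this merged PR closes."""
    if not pr.merged:
        return []
    return [t.assignee for t in tickets if pr.closes_issue in t.blocked_by]

tickets = [
    Ticket("FE-17", "dana", blocked_by=["AUTH-42"]),
    Ticket("FE-20", "sam"),
]
pr = PullRequest(number=512, merged=True, closes_issue="AUTH-42")
```

The join itself is trivial; the hard part — and where the AI earns its keep — is extracting reliable links like `closes_issue` and `blocked_by` from messy descriptions, comments and commit messages in the first place.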
Specialised AI-powered tools designed for developers need to understand the nuances of our workflow, our conventions, our project’s unique ecosystem, and also the patterns of communication we use.
Communication is innately messy, and it takes an incredible AI implementation to unpick the knots of communication across people, teams, projects and tools.
If it can do that, then we will have created an environment in which our AI not only works with us but, to some extent, thinks with us.
By interfacing seamlessly with our tools, it would contextualise its understanding of our workflows and amplify our efficiency on multiple fronts.
The future of AI in development operations seems to lie not just in the automation of tasks but in deep, thoughtful integration with our workflows and tools. AI that takes into account our unique ecosystem, understands it, reflects on it, and most importantly, adds meaningful value to it.
Stepsize AI is coming
I’m fascinated by AI, and have been for years. My team at Stepsize have been sitting at the intersection of AI and dev tools since 2015.
We think we’ve cracked the architecture required for deep, data-driven collaboration tools, and this year we announced Stepsize AI, the operational intelligence engine for people who design, create and build software.
I think a good way to describe what it does for you: it gives you accurate, AI-powered intelligence for every situation.
- Replace your daily standup with daily updates that surface anything that demands your attention
- Send a weekly team update so you can cancel your status update meeting
- Generate an executive-level summary for your CTO on-the-fly — with a chance for you to edit, of course
- Alert you when virtually anything happens, or when it spots risks
- A whole lot more…
Our beta testers have called it “spooky accurate” — and I don’t mind sharing that, because it’s also super secure and your data never, ever trains an AI model.