In 2023, as their companies faced growing economic pressures, some engineering teams learned how to use artificial intelligence (AI) to dramatically enhance how they work.
70% of teams have adopted some AI tools, with 30% implementing an AI adoption strategy.
Those with an adoption strategy have reported a massive 250% increase in development speed. Taken at face value, that turns a task that used to take 8 hours into one that takes a little over 2.
That’s according to Stepsize’s AI Adoption in Software Survey 2023.
But most teams still don’t have a confident AI adoption strategy, despite mounting evidence it can unlock significant competitive advantages.
So in this article, I want to explore essential components of an effective AI strategy.
1. AI-ready culture
We have to start by making sure our people and organisational structures are ready for AI adoption.
First and foremost, someone needs to take responsibility for the AI adoption strategy.
I’ve seen successful AI adopters drive this with a dedicated taskforce — or a designated champion in smaller businesses.
The composition of this taskforce is intrinsically going to be linked to the size and structure of the business, but might include:
- Data security leaders
- Representatives from teams, such as engineering, data and UX
- Industry domain experts (e.g. finance or healthcare)
- Compliance, ethics officers and legal counsel, if appropriate
Not everyone has to be an AI expert; far from it. You want a range of perspectives. Regardless of size, the essential factor is having individuals committed to becoming knowledgeable about AI and its implications for the business.
AI adoption is far more likely to succeed when you have top-down endorsement. This involves active leadership and accountability from the highest levels of the organisation.
Like anything else, when leaders champion AI initiatives, it signals the importance of these projects. That makes securing necessary resources and getting everyone aligned that much easier.
2. AI ethics, governance and compliance
Ethics, governance, and compliance aren't optional extras; they're the foundation of a responsible and sustainable AI adoption strategy.
But for most businesses, they need not be prohibitive or even complicated.
A mistake I frequently see businesses make is taking a broad-brush approach to their AI guidelines that lacks any nuance or strategy.
Building confidence in AI adoption hinges on how well ethical and compliance issues are addressed. One practical method is to categorise AI projects based on risk. For instance, using AI to assist in understanding code is a lower-risk endeavour compared to employing AI for user data segmentation, which involves sensitive personal information. High-risk projects demand rigorous scrutiny and robust governance frameworks, while low-risk projects can be managed with more streamlined procedures.
Although this is going to be oversimplified, the thought process is something like this:
1. “What constitutes an unacceptable risk?” — nothing might fall into this category. But if you’re, say, a healthcare organisation, then having AI that has access to user data might be an unacceptable risk.
2. “What data is low-risk to work with?” — some kinds of data might constitute little risk, such as parts of your codebase, marketing analytics data, or something else.
When you know those two things, you’re in a good spot to build a tiered strategy. For example:
- High risk = exposing data to AI is unacceptable
- Medium risk = AI projects subject to thorough approval process — AI tool must meet all compliance standards
- Low risk = Department managers able to approve projects; experimental AI tools allowed, provided they meet basic compliance standards
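A tiered policy like this can even live in code, so tooling can enforce it automatically. The sketch below is purely illustrative: the data categories, tier names, and the medium-by-default rule are assumptions you'd replace with your own.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # department manager can approve
    MEDIUM = "medium"  # full approval process required
    HIGH = "high"      # exposing this data to AI is unacceptable

# Hypothetical mapping of data categories to risk tiers.
DATA_RISK = {
    "user_pii": RiskTier.HIGH,
    "production_credentials": RiskTier.HIGH,
    "codebase": RiskTier.LOW,
    "marketing_analytics": RiskTier.LOW,
}

def required_approval(data_categories):
    """Return the strictest tier touched by a proposed AI project.

    Unknown categories default to MEDIUM, so new data types get
    reviewed rather than waved through.
    """
    order = [RiskTier.LOW, RiskTier.MEDIUM, RiskTier.HIGH]
    tiers = [DATA_RISK.get(c, RiskTier.MEDIUM) for c in data_categories]
    return max(tiers, key=order.index)

# A code-assistant pilot touching only the codebase stays low-risk:
print(required_approval(["codebase"]).value)              # low
# Anything touching user PII escalates to the strictest tier:
print(required_approval(["codebase", "user_pii"]).value)  # high
```

The useful design choice here is that a project inherits the risk of the most sensitive data it touches, which mirrors how a human reviewer would reason about it.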
This ensures your most critical data is protected without holding teams back from unlocking substantial, long-term AI-driven competitive advantages.
You’ll need a clear policy on how team members should use AI. As with any policy, it should be readable and managers should help team members spot the key takeaways. When developing the policy, be sure to outline user responsibilities, including data handling and compliance with any legal and regulatory standards. Establish procedures for monitoring AI usage and reporting concerns.
A word on compliance and ethics
Legal and regulatory adherence is, obviously, non-negotiable. When in doubt, get advice.
Ethical considerations in AI should matter just as much. Questions around fairness, bias, transparency, explainability, and social and environmental impact should be considered with any AI project.
3. Strategic choice of experiments
We need a strategy for rolling out the right projects.
Obviously, AI is moving even faster than we’re used to in the tech world. So, getting this right involves a balance between setting clear goals and remaining open to emerging opportunities and ideas.
A well-structured approach to experimenting with AI can significantly enhance the chances of successful implementation and integration into your business operations.
The first step in this process is research.
Understanding what AI technologies and tools are available in the market is essential. What’s out there? What’s changing? What might your competitors be using?
The next question to address is alignment with your business strategy. How do the potential AI solutions fit into your existing business model? Will they complement and enhance your current processes or require a complete overhaul? The key here is to identify AI applications that not only promise efficiency and innovation but also align seamlessly with your business objectives and strategy.
Pilot projects are instrumental in this phase. They serve as a testing ground for your ideas. Where it makes sense, define clear outcomes or metrics for what success looks like for these pilots and be prepared to learn from both successes and failures.
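Pinning down "what success looks like" can be as simple as agreeing a baseline, a measurement, and a threshold before the pilot starts. A minimal sketch, in which the metric names and the 20% target are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PilotResult:
    name: str
    baseline_hours: float   # average task time before the AI tool
    pilot_hours: float      # average task time during the pilot
    target_speedup: float   # e.g. 1.2 means 20% faster counts as success

    def speedup(self) -> float:
        return self.baseline_hours / self.pilot_hours

    def succeeded(self) -> bool:
        return self.speedup() >= self.target_speedup

# A hypothetical pilot: tasks dropped from 8 hours to 5.
pilot = PilotResult(
    name="AI code review assistant",
    baseline_hours=8,
    pilot_hours=5,
    target_speedup=1.2,
)
print(round(pilot.speedup(), 2), pilot.succeeded())  # 1.6 True
```

Writing the threshold down before the pilot runs keeps the success decision honest: the target can't quietly drift to match whatever result comes back.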
Smaller projects often act as proof of concept (POC) for larger initiatives. They allow you to test the waters without committing extensive resources. Once these smaller projects show promise, they can be scaled up.
The most successful AI projects are characterised by their agility and speed. Having a flexible budget and empowering decision-making at lower levels can accelerate the development and implementation of AI projects. This agility allows for quick adjustments based on real-time feedback and evolving requirements.
4. Data quality
If we want successful AI adoption, we also need data quality, availability, and integrity.
Really, of course, we should have this anyway! Poor data hygiene disrupts business processes at all levels. Quality data practices should be the norm.
This applies wherever we want our AI projects to run, whether that's optimising operations, refining data analytics, or enhancing customer engagement.
As an example, if we want to make operational improvements in the engineering team, we're going to want engineers writing well-structured, detailed and clear commit messages, giving tickets comprehensive descriptions in project management tools like Jira, and maintaining visibility of conversations.
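One low-effort way to enforce that hygiene is a check that flags commit messages too thin for any tool (AI or human) to learn from. The heuristics below are illustrative assumptions, not a standard: a minimum subject length, a Jira-style ticket key, and a body line explaining the why.

```python
import re

# Illustrative thresholds; tune these to your team's conventions.
MIN_SUBJECT_WORDS = 3
TICKET_REF = re.compile(r"\b[A-Z]+-\d+\b")  # e.g. a Jira key like "ENG-142"

def commit_message_issues(message: str) -> list:
    """Return a list of hygiene problems with a commit message."""
    lines = message.strip().splitlines()
    subject = lines[0] if lines else ""
    issues = []
    if len(subject.split()) < MIN_SUBJECT_WORDS:
        issues.append("subject too short")
    if not TICKET_REF.search(message):
        issues.append("no ticket reference")
    if len(lines) < 2:
        issues.append("no body explaining the why")
    return issues

print(commit_message_issues("fix"))
# ['subject too short', 'no ticket reference', 'no body explaining the why']
print(commit_message_issues(
    "ENG-142: handle null user sessions\n\n"
    "Guards the session lookup against expired tokens."
))
# []
```

Dropped into a pre-commit hook or CI step, a check like this nudges the whole team towards the data quality your AI projects will later depend on.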
Tried this AI for reporting on product development?
Want to know what happens in product development without chasing people for updates or lengthy status update meetings?
Stepsize AI lets you report on your product development effortlessly.
It generates automated, context-rich weekly updates based on your issue tracker activity in Jira or Linear.
It’s the perfect way to keep your stakeholders aligned.
I think you’ll love…
- Stunning Reports: Summarises your product development activity with the perfect level of context and detail.
- Charts and Data: Visualise progress with a range of charts and actionable metrics, including velocity, allocation and completion.
- Focus on Impact: Keep goals at the front of everyone’s minds, and focus on what matters.
- Security First: Your data is safe, and never trains AI.
Setting up a pilot takes minutes. You’ll need to create an account, integrate with your issue tracker (these things take about 2 minutes) and you’re off!
I’d love for you to try it out.