AI adoption across financial services is accelerating at a pace that governance hiring is not matching.
Over the past 12–18 months, firms have invested heavily in building AI capability, hiring data scientists, engineers and product specialists to drive automation and competitive advantage. What is now becoming clear, however, is that governance capability has not scaled alongside it. A gap has opened between those building AI systems and those responsible for overseeing them, and it is already influencing hiring behaviour across the market.
Search activity is beginning to reflect this shift. While demand for technical AI talent remains strong, there has been a noticeable increase in mandates focused on oversight and control. Firms are coming to market for AI governance leads, model risk professionals with machine learning exposure, and compliance hires tasked with shaping policy and frameworks around AI usage. These roles are typically positioned at VP to Director level, with increasing senior visibility, yet the volume of hiring still falls short of the scale of deployment.
A key issue is that many firms are hiring before they have fully defined what these roles should look like. Ownership of AI risk remains unclear across much of the market. In some organisations, responsibility sits within Model Risk; in others, it is split across Compliance, Enterprise Risk or Data functions. This lack of alignment is feeding directly into hiring challenges. Mandates often evolve mid-process as internal stakeholders refine scope, reporting lines or seniority, which slows decision-making and weakens candidate engagement.
From the candidate side, the market is equally constrained. The most in-demand professionals are those who can operate across both technical and regulatory domains, combining experience in model risk or quantitative disciplines with a working understanding of compliance frameworks. These profiles are not only limited in number but are also highly selective. Many are already embedded in hybrid roles and are unlikely to move without a clearly defined mandate, strong senior sponsorship, and the opportunity to shape governance frameworks rather than inherit incomplete ones.
This is where hiring processes are beginning to break down. Where firms cannot clearly articulate ownership of AI risk or position the role within a credible structure, candidates disengage early. By contrast, organisations that offer clarity - even within evolving structures - are securing stronger talent and moving more efficiently through processes. Reporting lines, in particular, are becoming a decisive factor, not because there is a single “correct” model, but because candidates are increasingly focused on where influence and accountability sit.
Regulatory momentum is adding further pressure. Supervisory focus on AI is increasing, particularly around transparency, accountability and the use of automated decision-making in client-facing contexts. Despite this, many firms remain reactive in their hiring approach, waiting for more defined regulatory frameworks before investing significantly in governance capability. From a recruitment perspective, this creates a timing risk. As expectations crystallise, demand for experienced professionals will accelerate into an already constrained talent pool, intensifying competition and extending hiring timelines.
The firms responding most effectively are those treating this as a forward-looking hiring priority rather than a compliance afterthought. They are making early hires into AI governance and model risk, often before structures are fully finalised, and are comfortable allowing those individuals to define and build the function. Crucially, they are positioning these roles with clear senior backing and framing them as strategic, rather than purely control-focused, which is resonating strongly with the candidate market.
What is emerging is not a temporary imbalance, but a structural shift. AI is no longer just creating demand for technical capability; it is redefining what effective risk and compliance functions need to look like. The gap between build and oversight is already here, and it is widening as adoption accelerates.
For firms investing in AI, the implication is clear. Governance hiring cannot follow implementation. It needs to move alongside it or, increasingly, ahead of it.
From a recruitment standpoint, this is one of the most significant developments currently shaping the compliance market. The organisations that recognise it early, define their mandates clearly, and engage the right talent now will be far better positioned to scale AI with confidence.
How Rutherford Can Help
Rutherford is a leading compliance recruitment agency that has specialised in financial crime and compliance recruitment for over a decade. We work solely with financial and legal services firms, giving us a deep understanding of the requirements for an SMF16 or SMF17 hire, especially when it comes to cultural fit.
Whether you are an SMF16 or SMF17 looking for a new role, or a financial services firm looking to secure your next FCA-regulated hire, reach out today to our financial crime and compliance recruitment specialists for a confidential conversation.