Artificial Intelligence in the South African Financial Sector Survey

Executive Review

The Financial Sector Conduct Authority (FSCA) and the Prudential Authority of the South African Reserve Bank (SARB) conducted a survey of AI adoption within the South African financial services sector in 2024.  The survey report, released in November 2025 and titled Artificial Intelligence in the South African Financial Sector, represents one of the first regulator-led attempts to map AI adoption across banks, insurers, asset managers, and credit providers using primary data.

Its importance lies less in the novelty of its findings than in the institutional and regulatory signal it sends.

1.   What the AI Adoption Survey is genuinely good for

While the report surfaces familiar themes, such as the sector’s leadership in AI experimentation, heightened concerns around customer data privacy, cautious investment posture, and the centrality of fraud detection, its core contribution lies elsewhere.

As the first regulator-initiated, sector-wide engagement on AI adoption, the survey performs an important baseline-setting and signalling function.  It facilitates the emergence of a shared vocabulary around AI, reveals prevailing perceptions and attitudes, and, critically, signals regulatory intent in a sector which is already at the frontier of AI experimentation.

In this sense, the survey confirms that AI is no longer a speculative play within South African financial services institutions.  Instead, experimentation is underway, governance questions are being surfaced, and boards and executives are increasingly implicated.

The survey is particularly useful as:

  • A regulatory signpost indicative of supervisory awareness, concern and future intent;
  • A high-level mapping of where AI is being applied within institutions (e.g. sales and marketing, IT, risk management);
  • A strong confirmation that data governance and cybersecurity dominate AI governance concerns; and
  • Further confirmation that skills constraints, organisational readiness, and governance capacity are binding limitations.

However, as an empirical assessment of AI maturity within the sector, the survey exhibits material limitations.

2.   Key shortcomings and methodological limitations of the AI Adoption Survey

In our reading, the survey falls short of its analytical ambitions primarily due to methodological design choices.

Firstly, the survey repeatedly conflates AI awareness with AI maturity.  Respondents were asked to confirm the existence of governance frameworks, ethical principles and data management protocols; however, the survey does not interrogate their AI-specific depth, operationalisation or effectiveness.  As a result, legacy IT, data, and risk governance structures are frequently interpreted and reported as AI governance mechanisms.

Secondly, the survey enforces a strict distinction between “traditional AI/ML” and “Generative AI”.  In practice, however, these technologies are layered, interdependent, and often deployed in combination; indeed, one of the report’s own graphics explicitly conveys this point.  Treating them as parallel domains leads to inconsistent rankings of use-cases, benefits and risks, thereby weakening the interpretability of responses across related questions.

Thirdly, several questions, particularly those dealing with data governance and AI lifecycle concepts, assume uniform technical literacy and semantic precision among respondents.  Concepts such as data representativeness, ML-specific lifecycle management, and metadata governance are highly specialised and unfamiliar to many respondents.  Related responses should therefore be interpreted as aspirational rather than evidentiary.

Fourthly, the survey captures ambient normative agreement on ethical principles and not empirical ethical tensions.  While ethical principles such as fairness, transparency and accountability are widely endorsed, the survey does not sufficiently surface trade-offs, organisational dilemmas or contested practices, particularly regarding AI’s impact on employment and labour displacement.

3.   Assessment of the survey’s conclusions

Although the survey’s headline conclusions are directionally sound, they somewhat overstate the depth and maturity of AI adoption within the sector.

A more calibrated reading suggests that:

  • AI adoption is real but uneven, incremental and experimental, rather than transformative;
  • AI impact remains concentrated in IT, analytics, risk, and customer-facing functions;
  • Existing governance structures are largely legacy frameworks, not AI-native constructs;
  • Accountability for AI is diffuse and insufficiently institutionalised; and
  • Regulatory concerns reflect internal capability gaps as much as external constraints.

Taken together, the survey points to a sector entering a compliance-led phase of AI adoption rather than a phase of strategically embedded AI transformation.

4.   Conclusion

The survey is timely, relevant, and institutionally significant, but not empirically groundbreaking.  Its value lies in framing regulatory discourse rather than establishing AI maturity.

Read carefully, it reveals a financial services sector that is AI-aware, risk-conscious, and particularly attuned to data governance and cybersecurity, but not yet structurally prepared for scaled, high-impact, AI-driven transformation.
