Tejasvi Addagada shares a four-stage change management model for successful AI adoption in an organisation.
Tejasvi Addagada, SVP & Head-AI Governance, HDFC Bank
Tejasvi Addagada, SVP & Head-AI Governance at HDFC Bank, believes AI adoption isn’t just about deploying technology; it’s about transforming mindsets, workflows, and governance models. In a conversation with FE CIO, he shed light on how he is leveraging the technology to transform banking operations and drive sustainable digital transformation in a complex regulatory landscape.
How do you manage to ensure data governance that is robust and, at the same time, maintain the agility needed to innovate with AI and Generative AI in a large banking institution like HDFC Bank?
In my experience leading data and AI transformation initiatives, I don’t view data governance and agile AI innovation as opposing forces—they are interdependent pillars. One of the key leadership principles I uphold is that robust data governance and agility in data and AI transformation must be designed to coexist. This requires moving beyond a static one-size-fits-all framework and embracing a contingency-based model for governing data where controls are intelligently aligned to risk, regulatory impact, and the maturity of each use case.
I prefer embedding principles of privacy-by-design, Responsible AI and ethics-by-default into the Generative AI lifecycle, not as overhead, but as foundational enablers. I have operationalised this through smart data catalogs, AI registries, metadata-driven access controls, and observability frameworks that give teams the confidence to experiment and scale responsibly.
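The metadata-driven access controls mentioned above can be illustrated with a minimal sketch: a catalog tags each dataset with a sensitivity label and an allow-list of purposes, and access decisions are derived from those tags rather than hard-coded. All names, roles, and policies here are hypothetical placeholders, not HDFC Bank's actual implementation.

```python
# Hypothetical smart-data-catalog entries: each dataset carries metadata
# (sensitivity tag, allowed purposes) that drives access decisions.
CATALOG = {
    "customer_transactions": {
        "sensitivity": "PII",
        "allowed_purposes": {"fraud_detection", "regulatory_reporting"},
    },
    "branch_footfall": {
        "sensitivity": "internal",
        "allowed_purposes": {"analytics"},
    },
}

# Hypothetical role clearances: which sensitivity tags a role may touch.
ROLE_CLEARANCE = {
    "data_scientist": {"internal"},
    "compliance_officer": {"internal", "PII"},
}

def can_access(role: str, dataset: str, purpose: str) -> bool:
    """Grant access only if the role's clearance covers the dataset's
    sensitivity tag AND the stated purpose is on the dataset's allow-list."""
    meta = CATALOG.get(dataset)
    if meta is None:
        return False  # ungoverned (uncataloged) data is denied by default
    return (meta["sensitivity"] in ROLE_CLEARANCE.get(role, set())
            and purpose in meta["allowed_purposes"])
```

Because the policy lives in metadata rather than in application code, teams can experiment on new datasets as soon as the catalog entry exists, which is what lets governance and agility coexist.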
Ultimately, I focus on building a governance culture that empowers experimentation-by-design, not exception. It’s this blend of strategic foresight and execution discipline that enables innovation to flourish securely and sustainably.
What are the major obstacles that you come across while implementing AI governance frameworks which guarantee that the use of customer data is ethical, private, and secure?
One of the most significant leadership challenges is reconciling the speed of AI innovation with the depth of control required for robustness, privacy, security, and ethics. It is tempting to simply assess, monitor, control and constrain AI capabilities as they go live. But unlike traditional IT systems, AI, and especially Generative AI, evolves in real time, learning from data and adapting its patterns. This fluidity makes it difficult to apply static, deterministic, legacy control frameworks.
Another core obstacle is making AI systems understandable and trustworthy. With GenAI, it’s often difficult to explain why a model generated a particular response or made a certain decision. It’s not enough for a system to perform well; it must also be understood, trusted, and verifiable. To tackle this, I have spearheaded the design of a Generative AI control environment that embeds a framework aligned to risk appetite, built on bias detection, prompt risk testing, hallucination mitigation, and end-to-end auditability, thus ensuring ethical and compliant AI behaviour. Guardrails are enforced as constraints across input, output, dialogue, tools, and custom logic, ensuring that every AI interaction is safe, ethical, compliant, and aligned with business and regulatory expectations.
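The idea of guardrails as enforceable constraints on both the input and the output of a GenAI interaction can be sketched as a simple wrapper: checks run before the prompt reaches the model and after the response comes back, and every block or pass is written to an audit trail. The specific checks and patterns below are hypothetical placeholders for illustration, not production rules.

```python
import re

# Hypothetical pattern for an Indian PAN-like token (5 letters, 4 digits, 1 letter).
PAN_PATTERN = re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b")

def input_guardrail(prompt: str) -> list[str]:
    """Check the prompt before it reaches the model."""
    violations = []
    if PAN_PATTERN.search(prompt):
        violations.append("pii_in_prompt")
    return violations

def output_guardrail(response: str) -> list[str]:
    """Check the model's response before it reaches the user."""
    violations = []
    if "guaranteed returns" in response.lower():
        violations.append("prohibited_financial_claim")
    return violations

def guarded_interaction(prompt, model, audit_log: list):
    """Run guardrails around the model call; block on any violation and
    record every decision so the interaction is end-to-end auditable."""
    issues = input_guardrail(prompt)
    if issues:
        audit_log.append(("blocked_input", issues))
        return None
    response = model(prompt)
    issues = output_guardrail(response)
    if issues:
        audit_log.append(("blocked_output", issues))
        return None
    audit_log.append(("allowed", []))
    return response
```

A real control environment would add dialogue-level, tool-level, and custom-logic guardrails in the same way; the point of the sketch is that each constraint is enforced in the execution path, not merely documented in a policy.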
Finally, cultural transformation is a continuous journey. Embedding responsible AI practices requires more than policy: it demands cross-lines of defence trust and ongoing research. I invest time in aligning privacy, security, ethics, legal, compliance, engineering, and business stakeholders around a shared accountability model because true AI governance is as much about people and mindset as it is about frameworks and tools.
Given your experience, how can enterprises leverage data as a strategic asset while maintaining compliance with evolving data privacy regulations?
As one of Asia’s first data privacy officers, I came to realise that enterprises can unlock data as a strategic asset only when trust and compliance are made foundational. In my work, I focus on embedding privacy-by-design and governance-by-default into the existing control environment, including change processes, business operational processes, the acquisition landscape, data pipelines and designated roles, to build awareness and drive privacy-related activities. This ensures that consent, purpose limitation, and retention are enforced automatically. Frameworks like GDPR, India’s DPDP Act, NIST RMF, and DCAM inform this architecture, but real success lies in contextualising them to business priorities.
Through my models and published work, including the 'Contingency and Evolutionary Models of Data Governance', I have advocated for adaptive governance that scales with complexity while empowering innovation. I have helped organisations establish unified controls across data and AI, ensuring that every dataset used, especially in GenAI contexts, is ethically sourced, privacy-aligned, and risk-scored based on geography.
Can you tell me the change management strategy you use for the AI-driven digital services adoption across the different business units?
In large enterprises, AI adoption isn’t just about deploying technology; it’s about transforming mindsets, workflows, and governance models. Our change management strategy focuses on human-centric AI adoption with a structured but adaptive approach.
I follow a four-stage model: Awareness, Co-creation, Enablement, and Assurance.
Ultimately, successful AI change management is about creating a culture where business units view AI not as a technology initiative but as a business capability. It’s about building trust—in the technology, in the data, and in the process.
What role does automation play in your data governance strategy, especially in managing data quality and regulatory compliance at scale?
For me, automation is fundamental to modern ways of governing data as an enterprise asset. We are shifting from static frameworks to contingency and evolutionary models of data governance. In banking, this is essential because the stakes around data quality and compliance are extremely high. This mirrors the evolution of corporate and technology governance.
I focus on building autonomous governance capabilities directly into the data fabric, from the moment data is acquired from a customer to the point a data analyst accesses that same data within their rights. Automating governance starts with automating the discovery and classification of data, especially sensitive and personal information, across systems and processes, and with defining what data means in the context of its creation and usage. By continuously mapping data flows and enforcing usage policies in real time, I ensure that consent, privacy, and ethical considerations are maintained dynamically, not just at the point of data collection but throughout the lifecycle.
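Automated discovery and classification of sensitive data, as described above, often starts with pattern-based scanning: column values are matched against known shapes of personal data, and the resulting tags feed downstream access and retention policies. A minimal sketch, with hypothetical patterns that are far simpler than a production classifier:

```python
import re

# Hypothetical detection patterns for two kinds of personal data.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "indian_mobile": re.compile(r"\b[6-9]\d{9}\b"),
}

def classify_column(values: list[str]) -> set[str]:
    """Scan a column's values and return the set of sensitive-data
    classifications detected, for use as catalog metadata."""
    tags = set()
    for value in values:
        for label, pattern in PATTERNS.items():
            if pattern.search(value):
                tags.add(label)
    return tags
```

In practice such scanners run continuously across systems and feeds, so newly arriving personal data is tagged, and therefore governed, without waiting for a manual inventory exercise.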
On the data quality front, I use automation to profile, assess, monitor and score data continuously, though operations personnel involvement remains crucial. This means data anomalies, drifts, or integrity issues are detected early, before they propagate into AI models or customer interactions. This kind of proactive quality control is essential for maintaining customer trust and operational efficiency.
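Continuous profiling and scoring can be sketched as a small pipeline: profile each batch of records for a few quality dimensions, combine them into a composite score, and escalate to operations when the score drops below a threshold. The metrics, the equal weighting, and the 0.9 threshold are all hypothetical choices for illustration; a real deployment would tune them to risk appetite.

```python
def profile_batch(records: list[dict], required: list[str]) -> dict:
    """Profile a batch on two hypothetical dimensions:
    completeness (all required fields present) and
    validity (the 'amount' field is a non-negative number)."""
    n = len(records)
    complete = sum(
        all(r.get(f) not in (None, "") for f in required) for r in records
    )
    valid = sum(
        isinstance(r.get("amount"), (int, float)) and r["amount"] >= 0
        for r in records
    )
    return {
        "completeness": complete / n if n else 0.0,
        "validity": valid / n if n else 0.0,
    }

def quality_score(profile: dict) -> float:
    # Equal-weight composite score; real weights would follow risk appetite.
    return sum(profile.values()) / len(profile)

def needs_review(records, required, threshold=0.9) -> bool:
    """Flag the batch for operations review when quality drops below threshold."""
    return quality_score(profile_batch(records, required)) < threshold
```

Running such checks on every batch is what catches anomalies before they reach a model or a customer; the human-in-the-loop step begins only when `needs_review` fires.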
For regulatory compliance, automation plays a critical role in creating real-time observability and auditability. Controls such as data lineage tracking, retention policy enforcement, and risk monitoring are embedded into pipelines, ensuring that compliance is not a periodic exercise but a continuous process.
I also believe in predictive governance, where AI assists in forecasting potential compliance breaches or ethical risks before they occur. This shifts governance from a reactive to a preventive function, something I see as essential in the era of AI and GenAI.
How do you see the future of AI and data governance evolving in the banking sector over the next three to five years, particularly with the rise of generative AI technologies?
I believe the next three to five years will fundamentally reshape the AI and data governance function in banking. There will be closer alignment with corporate governance, and a composite domain of corporate data and AI governance will evolve. The rise of generative AI is just one part of the shift. The larger transformation will come from the emergence of RAG and agentic AI.
These are AI systems that can autonomously make decisions, trigger actions, and interface directly with tools, skills, complex events and operational processes. In a banking context, that could mean AI initiating credit decisions, customer interactions, or regulatory filings with minimal human intervention. That level of autonomy demands a new layer of governance—not just over data or models, but over behavior and intent.
This will expand the scope of governance from data stewardship to AI behaviour monitoring. It will require controls that oversee not only what AI knows but what it does, why it does it, and how that aligns with process efficiency, risk appetite, ethics, and regulatory frameworks.
I expect regulators to move swiftly toward setting AI and agent governance standards, covering aspects like explainability, bias mitigation, output validation, and safe operational boundaries. Banks will need to invest in autonomous governance systems, where data quality, ethical AI behaviour, and compliance are monitored continuously, in real-time, without slowing down innovation.
Ultimately, I see a future where governance and AI are intertwined, not as a control tower and operator, but as co-pilots enabling responsible and scalable digital transformation.