What is the Kerala HC’s new AI policy all about? And what more is needed?
The new AI policy is rigorous, codifying AI’s use as an assisting tool in our courtrooms. But how far can India’s courts go without robust digital infrastructure, judicial education, and focussed investments by the Supreme Court and BCI?
Harsh Gour
Published on: 22 July 2025, 12:19 pm

INDIA’S COURTS are severely overburdened. As a recent report notes, over 50 million cases remain pending in India’s justice system. At the current pace, it would take “over 300 years” to clear the backlog. This strain arises from lengthy manual procedures, shortages of judges and staff, and logistical delays.
In this context, artificial intelligence (AI) offers promise: AI-driven speech-to-text tools can transcribe judges’ dictations and testimony, and case-management algorithms can streamline workflows. For example, the Delhi courts have piloted “Adalat AI,” a machine-learning system that lets judges dictate orders for automatic transcription and summary. Such steps aim to relieve the justice system of manual clerical work, freeing judges to focus on adjudication. However, the use of AI in courts also raises critical concerns about privacy, bias, and accountability. To realise the benefits – faster case processing, wider access to legal aid, predictive analytics – without undermining justice, clear policies are essential.
Without guardrails, AI use could violate litigants’ confidentiality, erode trust in verdicts, or entrench unfairness. As the Kerala High Court’s policy observes, AI “can be beneficial, but…their indiscriminate use might result in negative consequences, including violation of privacy rights, data-security risks, and erosion of trust in judicial decision-making”. In short, formal guidelines are needed to ensure AI promotes – not compromises – the rule of law and due process.
Kerala High Court policy
On July 19, the Kerala High Court issued a pioneering “Policy Regarding Use of Artificial Intelligence Tools in District Judiciary” to steer AI adoption. It applies to all district judges, judicial magistrates, their staff and interns in Kerala. The policy covers all AI tools – including generative language models – and any devices (court PCs, personal laptops, smartphones) used in judicial work. In practice, only AI tools formally approved by the courts (“Approved AI Tools”) may be used for judicial tasks.
Kerala’s policy insists on strict safeguards. Crucially, “under no circumstances AI tools are [to be] used as a substitute for decision-making or legal reasoning”. The policy requires that AI use must never compromise core judicial values like transparency, fairness, accountability and confidentiality. It warns that many AI systems generate errors or biased results, so “extreme caution” is mandated – judges must meticulously check any citations or translations produced by AI tools.
Other key provisions focus on data security and process control. Because most AI services are cloud-based, the policy forbids inputting sensitive case data – personal identifiers, privileged communications and the like – into unapproved tools. Unencrypted cloud services are to be avoided unless they are among the approved tools. Courts must keep audit logs of all AI usage, tracking which tool was used and how the human reviewer validated its output. Importantly, the policy mandates training: all judicial staff are to attend programmes on the legal, ethical and technical aspects of AI. Any AI malfunction or misuse must be reported up the chain for review. Violations of these rules can attract disciplinary action. The High Court also promises to update the policy as technology evolves.