Copyright – G. Sarin 2026
“A computer can never be held accountable, therefore a computer must never make a management decision.”
– IBM Training Manual, 1979

This blog examines the main legal risks of deploying an AI-engine as an AI toolkit in statutory adjudication in the UK, and reviews the risk mitigations that would need to be put in place.
Assume that ABCD Plc has launched this toolkit as a one-stop service for the resolution of construction disputes in the UK. The toolkit offers e-disclosure and can be used to draft correspondence, undertake legal research and even draft submissions. The claim is that it has been lawfully trained on data from the TCC (Technology and Construction Court) as well as from expert witnesses, and that it can generate expert reports on delay and quantum. Furthermore, it is contended that an AI-engine can substitute for an adjudicator or arbitrator by rendering swift decisions, saving time and costs.
Statutory adjudication in the UK serves as a rapid, binding and economical method for resolving disputes under construction contracts. The process was established by the Housing Grants, Construction and Regeneration Act 1996 (the Construction Act), the first legislation in the world to introduce this feature, and it has inspired similar statutory regimes across the world. It gives parties an inalienable right to refer disputes to an impartial adjudicator at any time, guaranteeing a quick resolution, typically within 28 days. This is done under a “pay first, argue later” principle to protect project cash flow, with decisions upheld by the courts.
Typically, this remedy is used for disputes involving interim or final payments, delay and disruption claims, extension of time claims, defects and breach of contract. The binding and enforceable nature of adjudication stems from Macob v Morrison, where Dyson J held that, by imposing a provisional and speedy dispute resolution procedure on the construction industry, Parliament clearly intended an element of “rough justice” and a process in which mistakes and injustices were more or less bound to happen.
The principle of judicial non-intervention in adjudicators’ decisions was affirmed in Bouygues v Dahl-Jensen, the first adjudication case to come before the Court of Appeal. The consequence of this ruling is that a decision made by an adjudicator will be upheld if the adjudicator has addressed “the correct question in an incorrect manner”; however, it will not be upheld if the adjudicator has “addressed the incorrect question” (as articulated in the context of arbitral rulings by Knox J in Nikko Hotels, and subsequently referenced in Concept Design v Isobars).
In other words, the adjudicator’s ruling will be upheld if there has been a procedural, factual or legal error concerning a dispute that remains within the adjudicator’s jurisdiction, but it will not be upheld if the ruling was beyond that jurisdiction. The principles of enforcement were confirmed by Jackson J in Carillion v Devonport, a decision upheld by Chadwick LJ in the Court of Appeal.
Jackson J held that the adjudication process does not entail a conclusive resolution of the rights of the parties (unless there is mutual consent) and that enforcement should be viewed within this framework. The Court of Appeal has consistently highlighted that adjudicators’ rulings are to be enforced, even if they stem from procedural, factual or legal mistakes. In instances where an adjudicator has overstepped their authority or significantly violated the principles of natural justice, the court will refrain from enforcing the ruling. Judges need to be vigilant in scrutinising technical defences with a level of scepticism that aligns with the objectives of the Construction Act 1996. Any errors in law, fact or procedure made by an adjudicator must be critically assessed before the court can determine that such errors represent an overreach of jurisdiction or serious violations of natural justice principles.
It is now widely recognised that the method of enforcement does not differ based on whether the adjudication is statutory or contractual. This conclusion is derived from the ruling in Amec v Thames Water, in which Coulson J determined that there was no fundamental distinction in enforcement proceedings between a contractual adjudication (as exemplified in that case) and a statutory adjudication under the Construction Act 1996. In both scenarios, the adjudicator’s decision is only provisionally binding on the parties, and the court is not permitted to assess the correctness of that decision; rather, it will evaluate whether the adjudicator had the jurisdiction to make a decision and whether that decision was arrived at through a fair process.
Section 9 of the TCC Guide outlines the process for enforcing the decisions made by adjudicators. Section 9.1.1 of the TCC Guide states that the TCC is typically the court responsible for the enforcement of an adjudicator’s decision, as well as for any related matters concerning adjudication.
The question is whether submissions produced by AI-driven tools can be held to be legally valid; a further question is whether the replacement of an adjudicator or arbitrator is possible, or even desirable, in light of the ‘natural justice’ and jurisdiction principles noted above.
PD57AD permits the use of technology-assisted review (TAR). Disclosure workflows under it must be consistent, auditable and defensible, allowing for scrutiny or challenge in instances of disagreement. However, we encounter a fundamental issue: how can artificial intelligence be used responsibly in court-ordered disclosure activities when there is no clear legal or procedural framework to adhere to?
AI tools can assist with a number of tasks: examining documents and analysing extensive volumes of electronic data to pinpoint relevant, privileged or confidential information; producing document summaries or constructing case timelines with references, enabling legal practitioners to quickly grasp essential information; automatically sorting documents according to case-specific criteria, including relevance, privilege or the presence of personally identifiable information (PII); and enabling legal teams to pose natural-language questions about a dataset to test hypotheses and swiftly grasp the dynamics of the case.
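To make the document-sorting task concrete, the sketch below shows a deliberately simplified triage function. It is a toy illustration only: the keyword lists, function name and tags are invented for this post, and a real e-disclosure platform would rely on trained statistical models under PD57AD-compliant workflows, not hard-coded terms.

```python
import re

# Illustrative keyword heuristics standing in for a trained review model.
# The terms and categories here are hypothetical examples, not a real tool's rules.
RELEVANCE_TERMS = {"delay", "extension of time", "interim payment", "defects"}
PRIVILEGE_TERMS = {"legal advice", "counsel", "without prejudice"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # crude PII (email) detector

def triage(text: str) -> dict:
    """Tag a document with relevance, privilege and PII flags."""
    lower = text.lower()
    return {
        "relevant": any(term in lower for term in RELEVANCE_TERMS),
        "privileged": any(term in lower for term in PRIVILEGE_TERMS),
        "contains_pii": bool(EMAIL_RE.search(text)),
    }

doc = "Re: extension of time claim. Contact j.smith@example.com for legal advice."
print(triage(doc))
```

The point of the sketch is the shape of the output, not the detection logic: every document receives an auditable set of flags that a human reviewer can inspect and challenge, which is what “consistent, auditable and defensible” means in practice.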
The above does not account for generative AI engines, for which the ILTA has produced a guide. The guide emphasises clarity, accuracy verification and human oversight, highlighting that GenAI is intended to assist rather than supplant legal judgment. This stance is particularly significant considering the Divisional Court’s rulings in Ayinde v Haringey. Ayinde established that solicitors and barristers are entirely responsible for the content produced by AI and could encounter regulatory penalties if they do not ensure its accuracy.
The court indicated it would take further action, emphasising that the misuse of AI could have significant consequences for the administration of justice and public trust in the legal system. It urged those in the legal profession with leadership roles, such as heads of chambers and managing partners, as well as regulators of legal services, to implement practical and effective measures immediately. These measures must guarantee that every individual currently offering legal services in this jurisdiction, regardless of where or when they were qualified, understands and adheres to their professional and ethical responsibilities, as well as their obligations to the court when utilising AI.
“At least with computer algorithms, one still has human agency in the background, guiding processes through admittedly complex computer programming. Still more profoundly, however, how should legal doctrine adapt to processes governed without human agency, by artificial intelligence – that is, by autonomous computers generating their own solutions, free from any direct human control?” (Lord Sales)
An AI-engine would then be suitable as a tool for e-disclosure, correspondence and legal research, including assisting with the production of quantum or delay analyses – but not the final report, which must involve human review to ensure that ‘hallucinations’, ‘biases’ and accountability concerns are dealt with; otherwise the enforceability of any award could be challenged.
To fully appreciate the scope and scale of the legal intervention proposed by an AI-engine, it is necessary to review the legal framework under which it proposes to operate. The use and implementation of AI technology is not new and has been contemplated for several decades. By the 1960s, the application of AI in the legal field was already under consideration and discussion. Reed C Lawlor, a member of the State Bar of California, predicted that computers would eventually be capable of analysing and forecasting judicial outcomes by inputting a set of facts into a machine that contains stored cases, legal rules and reasoning principles.
While AI possesses significant transformative potential and offers numerous opportunities within construction law, it also presents critical considerations and challenges that users need to recognise and address. The technology is robust yet still evolving. The key points to consider relate to privacy, AI hallucinations and ethics.
When using publicly available platforms (like ChatGPT), any data that is uploaded or input into the platform is not private. This information may be accessible and possibly utilised by others. It is crucial to note that platforms such as OpenAI’s ChatGPT do not offer secure, private, or confidential settings. While there are private platforms that exist, their privacy and security capabilities should be carefully assessed prior to use.
Generative AI platforms are presently at a point where they can “hallucinate”, or produce inaccurate information. Consequently, these platforms should be viewed as providing access to a highly sophisticated and intelligent assistant whose results necessitate human verification and validation. While e-disclosure, correspondence and legal research can form part of AI-led activities, drafting submissions and producing quantum or delay reports must be considered part of the GenAI aspect of this AI-engine. As established in the analysis, these features of any AI-engine must be used with human oversight or not at all.
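The human-oversight requirement can be expressed as a simple software gate: AI-generated output is never released without a named, qualified reviewer signing it off. The sketch below is a minimal illustration of that principle; the class, field and reviewer names are invented for this post and do not describe any real product’s API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIDraft:
    """An AI-generated draft that cannot be released without human sign-off.

    Illustrative only: names here are hypothetical, not a real tool's interface.
    """
    text: str
    approved_by: Optional[str] = None  # name of the qualified human reviewer

    def approve(self, reviewer: str) -> None:
        """Record the human who verified and validated the draft."""
        self.approved_by = reviewer

    def release(self) -> str:
        """Refuse to release an unverified AI draft."""
        if self.approved_by is None:
            raise PermissionError("AI draft requires human sign-off before release")
        return f"{self.text} [reviewed by {self.approved_by}]"

draft = AIDraft("Delay analysis (AI-generated).")
draft.approve("A. N. Expert")
print(draft.release())
```

The design choice worth noting is that release fails closed: an unapproved draft raises an error rather than slipping through, which mirrors the position that these features must be used with human oversight or not at all.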
There are ethical considerations to take into account as well. As artificial intelligence systems are developed using historical data that can perpetuate systemic issues, there are valid concerns about bias and fairness. It is crucial to ensure that AI functions as an instrument for justice rather than a means of worsening current problems, which requires meticulous calibration and supervision. Software programmes will reflect the biases of the draftsperson. AI will do likewise, but as it educates itself by experience it has the potential to reinforce those biases and imbalances, and indeed to develop new ones (J Tackberry). This would then appear to breach the principles of natural justice.
“When it comes to ethical dilemmas,” says Pearce, “AI can’t do it.” This is because intelligent tools naturally seek the most efficient path, not the most ethical. As a result, any decision involving ethical questions or concerns should include human oversight.
Adjudication is an essential process for upholding financial stability and efficiency in the construction sector. The message is consistent across various jurisdictions: prompt enforcement of adjudicators’ rulings guarantees the continuation of projects and the payment of contractors. Ultimately, the effectiveness of adjudication relies not only on the presence of adjudicators but also on the assurance that their decisions have immediate and dependable legal authority. As evidenced by jurisdictions globally, the principle of “pay now, argue later” transcends mere rhetoric; it represents the core of a just, operational and sustainable construction industry.
For all AI’s potential to expand the global economy, it is unclear who would ultimately be held responsible for wrong decisions taken by a computer that can think and advise on its own. Litigation and arbitration are not the sole avenues for decision-making; construction adjudication must also confront the ethical and technological challenges posed by AI.
The Technology and Construction Court Solicitors’ Association (“TECSA”) acknowledges that AI and Generative AI are permanent fixtures in our landscape, with their application set to grow exponentially in the future. Consequently, adjudicators are required to stay informed about advancements in AI, including its potential advantages and applications, alongside its associated risks and constraints.
Artificial Intelligence possesses significant capabilities in supporting legal decision-making within the realm of construction law. Nevertheless, it is crucial to adopt a balanced and well-informed strategy to guarantee justice, equity, and effectiveness. “…we should be maximising its benefits, whilst taking into account its risks”.
AI is deeply significant in the modern context. In November 2023, even King Charles III addressed the development of advanced AI, which he said is “no less important than the discovery of electricity”.
Adjudication frequently necessitates that a judge evaluate the credibility of witnesses, interpret intricate facts, and apply discretion informed by subtle human circumstances—attributes that AI does not possess. The absence of human judgement and empathy would erode public confidence in the judiciary. A core concern is accountability. Only human judges and arbitrators can be held legally and ethically accountable for their rulings. In instances where an AI errs, the issue of who bears responsibility remains unresolved within the framework of current legislation. Issues of responsibility will directly impact enforceability. AI systems are developed using pre-existing data, which may harbour historical biases. If not meticulously overseen, these AI systems risk perpetuating existing injustices and undermining public confidence.
The substitution of human judges raises significant constitutional and ethical dilemmas regarding judicial independence and the essential right to a fair and impartial hearing conducted by a natural person. These constitutional and ethical issues are currently insurmountable. The qualified and limited opinion is that an AI-engine should be used only for some aspects of adjudication and should not be considered a viable replacement for an adjudicator or arbitrator. Any generated reports must be vetted by suitably qualified personnel. While references have been made to other legal jurisdictions, this opinion is valid only in England and Wales.
Any savings on time or cost will become immaterial when the AI-generated awards are set aside and the process has to be repeated.