CHICAGO – The construction industry has a reputation for lagging behind in adopting new technologies. This isn’t helped by the industry’s long-standing struggle with declining productivity. These challenges create a perfect storm for disruption, and one of the most promising disruptors is Artificial Intelligence (AI). From design and planning to project management, AI has the potential to revolutionize the entire construction process, including the possibility of robotic workforces.
While these advancements promise greater efficiency, productivity, and safety, they also introduce legal complexities. The biggest question? Who will be held liable when things go wrong with these new technologies?
The Rise of AI
The future of construction is closer than you think – and it is sure to involve the continued integration of AI. Picture robots laying bricks, installing drywall, and pouring concrete with exact precision. AI algorithms could also serve as a safety net, predicting and preventing hazards before they materialize; they could even enable buildings that self-regulate their energy consumption. This is the future AI promises the construction industry, and it is no pipe dream. Several large construction companies are already leading the charge with AI-powered tools, including:
- Predictive Maintenance Systems: Analyzing sensor data from equipment to predict and prevent breakdowns, minimizing downtime and costly repairs;
- Safety Monitoring Systems: Using computer vision to detect safety hazards, such as potential equipment malfunctions, unauthorized persons in restricted areas, or falling objects, in real time;
- Design Optimization: AI algorithms creating structures that are structurally sound while minimizing material waste and maximizing energy efficiency.
While the above sounds great in terms of increasing construction productivity, these optimizations open a Pandora’s box of legal questions, including: What happens when an AI-controlled piece of equipment injures a worker? Who is liable if an AI-designed building collapses? The list goes on.
The Liability Framework
The construction industry has a reputation for taking its time with new methods, especially technological ones. This is evident in the wide range of project scheduling tools in use – from basic spreadsheets to intricate scheduling software. From a legal standpoint, this slow adoption has some advantages. Existing legal doctrines provide a predictable framework for allocating risk and responsibility among the project stakeholders – owners, contractors, and suppliers.
The arrival of AI, however, throws a wrench into these established risk allocations. Unclear legal territory surrounds who is accountable if an AI-designed building has issues. Is the architect who relied on the AI design liable? Or is it the software developer who created the AI? Perhaps the responsibility falls on the contractor who constructed the design? The question remains – who shoulders the legal burden when AI is involved?
Navigating the AI Era
The legal landscape of AI in construction is murky, but parties can take steps to navigate it. The key factor is how AI will be used on a project. Is the owner requiring AI in the contract, or will lower-tier contractors, who typically choose their own methods, decide to use AI? Here are some things contractors should consider to prepare for this uncertain legal territory:
- Know the Scope: Make sure you understand, to the extent possible, how and when AI will be used on the project and by whom. This preliminary step matters for several reasons, but most importantly it puts each party on notice of what it can do to protect itself from liability, and of any additional work that AI use may require. For example, will work performed with AI require extra inspection steps?
- Transparency: Advocate for clear, explicit contractual clauses defining roles, responsibilities, and risk allocation for AI use. For example, a lower-tier subcontractor that is simply implementing AI provided or directed by an upstream party or designer will want to ensure, to the extent possible, that it is not responsible for defects in the AI or arising out of the use of the AI. Take extra care when reviewing warranty language governing implementation of the AI tool. As referenced above, however, these provisions are highly dependent on the specific circumstances in which the AI is used and the scope of the AI work.
- Indemnification: Likewise, indemnification obligations and rights depend on the specific use of AI in each party’s work. Parties should push for hold-harmless carve-outs ensuring that the indemnifying party’s standard indemnification obligations do not cover defects caused by or arising out of the AI, to the extent the indemnifying party bears no responsibility or decision-making authority for the AI implementation – this is especially true given the nuanced nature of AI use. Conversely, a party that contracts with lower-tier vendors utilizing AI should ensure its agreements include a provision requiring the vendor to indemnify upstream parties for defects relating to use of the AI.
- Stay Informed: As noted above, many of the issues surrounding AI in construction depend on how AI will be used on a specific project, and the law is constantly developing. It is therefore imperative for parties to keep abreast of evolving legal developments and any precedents surrounding AI in construction. When questions arise about specific risk management measures for a project or contract, consult legal counsel.
The impact of AI on construction law, and on the construction industry as a whole, is still unfolding and will likely continue to evolve for years to come. Contractors, suppliers, and legal teams will need to work together to navigate the changing landscape. While the exact protective steps remain uncertain, one thing is clear: AI has the potential to significantly improve efficiency and close the industry’s productivity gap. Proactive collaboration in determining responsibility and liability for AI implementation will put companies ahead of the curve in the years to come.
John E. Sebastian joined Watt, Tieder, Hoffar & Fitzgerald in 2013 as the managing partner of the firm’s Chicago office.
Brian Padove is an attorney in Watt Tieder’s Chicago office and is licensed to practice law in Illinois, Indiana, and Wisconsin.