Artificial intelligence has become a daily presence in the architecture, engineering and environmental design world, but not in the way headlines sometimes suggest. We’re not handing projects over to algorithms or automating engineering judgment. Instead, the AI we use today plays a more modest — but increasingly valuable — role: It helps us move faster through the early, information-heavy stages of a project, while the most consequential decisions still depend on experience, context and human interpretation.
That balance is something every engineering and design firm is learning in real time. At BL Companies, we’re experimenting with AI where it adds speed or clarity, especially in due diligence and research. But we’ve also found that the places where our work gets most complicated — zoning interpretation, regulatory nuance, project feasibility — remain places where human judgment carries the weight.
The most immediate value of AI is speed. Our teams routinely use it to scan publicly available information, summarize zoning codes, review permitting processes, or pull high-level data about potential development sites. What once required downloading long PDFs, manually highlighting sections, and scouring disconnected municipal websites can now begin with a few well-structured prompts.
As one of my colleagues on our internal AI committee put it, AI can “take the napkin sketch to the computer” by giving us a rapid, reasonably accurate sense of whether a site is even worth deeper investigation. In minutes, we can ask whether a use is permitted, whether a drive-thru is allowed, whether wetlands or floodplain issues appear likely, or whether a jurisdiction restricts items like retaining walls in setbacks. For the jurisdictions that maintain searchable, well-organized online codes, this early-stage screening is a real advantage.
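For illustration, here is roughly what that kind of structured screening looks like when scripted. Everything in this sketch is hypothetical: the ask_model helper stands in for whichever chat-model client a team actually uses, and the questions are generic examples rather than our real checklist.

```python
# Hypothetical early-stage screening pass. ask_model() is a placeholder
# for a real chat-model client; the questions are illustrative only.

SCREENING_QUESTIONS = [
    "Is a restaurant a permitted use in the {zone} district?",
    "Are drive-thru facilities allowed, and under what conditions?",
    "Do public maps suggest likely wetland or floodplain constraints?",
    "Are retaining walls restricted within required setbacks?",
]

def ask_model(question: str) -> str:
    """Stand-in for a call to a chat-model API."""
    return "MODEL ANSWER (placeholder; wire up a real client here)"

def screen_site(zone: str) -> list[dict]:
    results = []
    for template in SCREENING_QUESTIONS:
        question = template.format(zone=zone)
        answer = ask_model(question)
        # Every answer starts out unverified; a person must check it
        # against the adopted code before it informs any decision.
        results.append({"question": question,
                        "answer": answer,
                        "verified": False})
    return results

for item in screen_site("B-2"):
    print(item["question"], "->", item["answer"])
```

Note that every answer in a workflow like this starts out flagged as unverified, and nothing should leave that state without human review.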
But the value comes with an asterisk. These models pull from sources of wildly varying quality. Sometimes we get clean zoning language; sometimes the model pulls from outdated documents, Reddit threads, or Facebook debates. And because the consequences of an incorrect interpretation can be significant — especially for developers about to make financial commitments — every result still needs to be verified.
AI can point us in promising directions, but we don’t make decisions based solely on those answers. And in many cases, the underlying regulations simply aren’t clear enough for an algorithm to interpret.
Engineering design happens in the gray areas. Zoning language isn’t always cleanly written, permitting requirements often conflict, and each jurisdiction has its own unwritten expectations that you only learn through experience.
AI tools don’t do well with ambiguity. They interpret silence in a regulation as certainty. They provide confident answers to questions that ultimately require dialogue, not data mining.
A simple example: On a corner lot, which yard counts as the “front” under the zoning code? If the regulation doesn’t specify how to treat two frontages, the answer can meaningfully change the required setback and determine whether a building footprint is even feasible.
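To see how much that single interpretation can matter, consider a back-of-the-envelope calculation. The lot dimensions and setback values below are invented for illustration, not drawn from any real code:

```python
# Illustrative arithmetic only: dimensions and setbacks are invented
# to show how one ambiguous term changes feasibility.

LOT_WIDTH, LOT_DEPTH = 100, 120   # corner lot, feet
FRONT, SIDE, REAR = 30, 10, 25    # required setbacks, feet

def buildable_area(two_fronts: bool) -> int:
    if two_fronts:
        # Planner treats both street frontages as "front" yards.
        width = LOT_WIDTH - FRONT - SIDE
    else:
        # Only one frontage is the "front"; the second street edge
        # is treated as an ordinary side yard.
        width = LOT_WIDTH - SIDE - SIDE
    depth = LOT_DEPTH - FRONT - REAR
    return width * depth

print(buildable_area(two_fronts=False))  # 5200 sq ft
print(buildable_area(two_fronts=True))   # 3900 sq ft
```

Same lot, same code, and a 25% swing in buildable area depending on how one undefined term is read.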
AI can tell you what the written code might imply, but it cannot tell you how the local planner interprets that ambiguity. That interpretation is what matters. In many cases, we still have to pick up the phone and ask a zoning official directly.
Sometimes that conversation confirms the AI-generated interpretation. Other times it overturns it. Either way, the responsibility is ours. We can’t tell a client, “The model said so.” Our professional obligation is to accuracy, not automation.
This is why human judgment remains central. Engineers don’t just look at the code; we look at its intent, its application, its history and its practical implications. AI has no sense of local precedent or political nuance. But as practitioners, we have to weigh all of that each time we decide whether a site works or whether a client should walk away.
What AI is changing is when we can make certain decisions.
Before we had these tools, early due diligence was slower, especially in jurisdictions that resisted answering preliminary questions unless you shared the project address or developer identity, both of which clients often prefer to withhold in the initial stages. We had to call planners, dig through PDFs, and attend pre-application meetings, and we sometimes discovered deal-breaking constraints later than anyone would like.
Now, we often arrive at those meetings with fewer surprises. By the time we’re in front of a jurisdiction, we’ve already run a quick scan for red flags, reviewed likely permitting steps, and examined past traffic studies or previously filed applications the model turned up.
In many cases, the meeting becomes an opportunity to confirm what we think we know rather than to uncover issues for the first time. That shift matters. It saves clients money, reduces time wasted on unviable sites, and lets us focus our energy on the locations that truly have a chance of moving forward.
But AI hasn’t eliminated the need for careful engineering. Even the best AI output is only as reliable as the sources beneath it — and many jurisdictions still maintain outdated or non-searchable records. In less developed areas, AI’s usefulness drops significantly. The more rural or idiosyncratic the site, the more the process looks like it always has: Call the planner, review the maps, ask colleagues who’ve worked nearby, and build an understanding from direct experience.
The biggest misconception I see in the broader conversations about our industry is the idea that AI will eventually replace judgment. Tools may get faster, interfaces more natural, datasets more complete. But engineering design isn’t just a technical exercise; it’s a series of applied decisions.
You need to know when AI can be trusted, when it can’t, and when to slow down and ask real humans real questions. You need the context to recognize when a setback recommendation “looks off,” or when a model has misunderstood a regulation, or when an unverified answer could jeopardize a multimillion-dollar investment.
Our internal AI committee has already developed informal best practices: Ask better questions, verify every claim, understand the limits of each model, and never confuse a faster workflow with a finished answer. We’re standardizing these learnings so every project team across our disciplines can use AI safely and effectively.
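One way to make those habits concrete is to track every AI-derived claim until a person signs off on it. The record below is a hypothetical sketch of what such a log might capture; the field names are assumptions, not an actual BL Companies standard.

```python
# Hypothetical record for logging AI-assisted findings. Field names
# are assumptions about what such a log might track.

from dataclasses import dataclass
from datetime import date

@dataclass
class AIFinding:
    question: str
    model_answer: str
    source_cited: str                 # e.g., code section the model pointed to
    verified_by: str | None = None    # staff member who confirmed it
    verified_on: date | None = None
    confirmed_with_jurisdiction: bool = False  # did a planner weigh in?

    @property
    def usable(self) -> bool:
        # A finding informs a client decision only after human review.
        return self.verified_by is not None
```

The point of a structure like this isn't bureaucracy; it's making sure an unverified answer can never quietly become a client recommendation.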
In that sense, the future of AI in engineering isn’t about replacing professionals. It’s about raising the floor of early research so that our people can spend more time on the high-value decisions that actually shape projects.
AI is becoming a powerful assistant. But the accountability — and the judgment — remain human.

