Replication is rife. Human beings have been replicating themselves for 300,000 years, organisms of every kind self-replicate and esoteric branches of science have posited theories about self-replicating construction materials that get molecular nanotechnologists very excited indeed.
Software code, too, has its streams of self-replicating autonomy. The most identifiable example for most of us is the computer virus, which can replicate its functions, force and footprint depending on how it is first constructed and where it is deployed. More useful are the threads of agentic AI used by software developers to create new, useful, functional, secure software code.
Self-replicating code is an idea linked to work first started in the 1970s (the so-called quine is one example) and now generative and agentic AI services are being aligned to “cut” code for developers. So what knowledge source and learning does AI coding draw upon, is it good at what it does, where are the caveats and concerns… and, as Laurence Olivier asked in Marathon Man, is it safe?
Today Experimentation, Tomorrow Implementation
Today, a number of software engineering teams are trialling agentic AI-based coding tools and large language models to increase development velocity, scope and – hopefully – veracity too, so that we get stronger, better-debugged software applications as a result. It’s an emotionally tough time right now for these prototyping mavericks: when the AI code doesn’t work, or skewed derivative services are churned out, engineers will naturally question whether the prompts are broken and whether the issue is them, or the machine.
Fortunately, these frustrations are rarely due to the developer or even the language model’s inherent limitations. More often, the issues arise from how the tools are used. By refining their planning processes, establishing guardrails and incorporating AI into broader workflows, developers can achieve better outcomes and avoid wasted effort. This is the opinion of Yrieix Garnier, VP of product at data monitoring, observability and security company Datadog.
“Many developers expect LLMs to interpret intent like a human collaborator – which, in theory, they should – but they don’t always provide the context, clarity or precision needed for the AI tool to be effective. Different models may vary in how much ‘effort’ they put into generating a response, typically measured in tokens. Asking the model to ‘think harder’ or ‘think longer’ can trigger it to consume more tokens and produce a more detailed answer,” explained Garnier.
It’s Not You, It’s Me
He says that if there is a persistent gap between expectations and output, it’s unlikely that the issue is related to how hard the model is “thinking” in its own machine-brain sense of the word. More often, the problem lies in how the request to the language model (via the agentic AI coding assistant) is framed. Unsurprisingly perhaps, software application developers see better results when they provide a structured description of the problem, specify the desired execution and carefully frame the request by including only relevant context.
For example, if the agent needs to reference specific system architecture, it should be explicitly instructed to do so. And while the debate over how polite we need to be to AI services suggests that manners and civility are not important, it sounds like systematic specificity does make all the difference.
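To make that advice concrete, here is a minimal sketch of what a structured, context-scoped prompt might look like when assembled programmatically. The function name, section headings and example values are all hypothetical, not drawn from any particular tool.

```python
# Hypothetical sketch: framing a coding request with a problem statement,
# the desired execution and only the context that is actually relevant.

def build_prompt(problem: str, desired_outcome: str, context: list[str]) -> str:
    """Assemble a structured prompt from three explicit sections."""
    sections = [
        "## Problem\n" + problem,
        "## Desired outcome\n" + desired_outcome,
        "## Relevant context\n" + "\n".join(f"- {c}" for c in context),
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    problem="Requests to /orders intermittently time out under load.",
    desired_outcome="A patch adding a connection pool with a 5s timeout.",
    context=["Service talks to PostgreSQL via psycopg", "Runs on Kubernetes"],
)
```

The point of the structure is less the exact headings than the discipline: the developer is forced to separate what is broken, what success looks like and which facts the model genuinely needs.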
“At the same time, it’s important not to overload the model with excessive information. LLMs have strict context limits, and too much input can introduce unnecessary complexity that degrades performance,” said Garnier. “Even with well-structured prompts, AI agents can introduce errors or deviate from intended solutions. Without proper checks, the cost of fixing these issues may outweigh the benefits of using AI. Establishing guardrails is therefore essential.”
The team at Datadog has been through plenty of software dogfooding (pun coincidentally appropriate, but still used with apologies) in order to work out how these theories, constructs and methodologies apply to its own platform. Garnier suggests that a powerful technique is to incorporate a planning phase before the model generates or modifies any code. After presenting the agent with the problem statement, constraints and expected outcomes, it should be asked to break down its execution plan into steps and explain its reasoning. Reviewing this plan before the agent moves to implementation allows developers to catch flaws early. It also prevents the agent from adding unnecessary layers of complexity or attempting to patch errors by generating redundant scripts or backup code.
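A plan-first gate of the kind Garnier describes could be sketched as follows. The prompt wording and the automated smell check are illustrative assumptions; in practice the review step would be a human reading the plan, not a string match.

```python
# Illustrative plan-then-implement guardrail. The prompt text and the
# heuristic check below are hypothetical, not from any specific product.

def plan_prompt(problem: str, constraints: list[str]) -> str:
    """Ask the agent for a numbered, reasoned plan before any code."""
    return (
        f"Problem: {problem}\n"
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints) + "\n"
        "Before writing any code, break your execution plan into numbered "
        "steps and explain the reasoning behind each step."
    )

def approve_plan(plan_steps: list[str], max_steps: int = 8) -> bool:
    """Crude pre-review filter for overly long plans and the
    'redundant scripts / backup code' smell noted above."""
    banned = ("backup copy", "fallback script", "rewrite from scratch")
    if len(plan_steps) > max_steps:
        return False
    return not any(b in step.lower() for step in plan_steps for b in banned)
```

Only once the plan passes review does the agent get a second prompt asking it to implement the approved steps, which is what keeps flaws from compounding into generated code.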
Identify & Evaluate Alternatives
“Planning also helps address the challenges posed by context window limits. When models run out of context, they often summarize the conversation into a new window, which can result in valuable details being lost. A distilled implementation plan serves as a compact guide that can be carried forward into a new window, keeping the AI aligned without overextending the context,” he said. “When developers discuss the potential of agentic AI tools and LLMs, they often emphasize their role in managing repetitive tasks or automating routine testing, thereby accelerating delivery. Yet a crucial capability that is sometimes overlooked is the ability to identify and evaluate alternative approaches.”
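The carry-over idea can be sketched mechanically: start each fresh window with the distilled plan, then add as much recent conversation as the budget allows. Real systems count tokens with a tokenizer; this hypothetical sketch approximates the budget in characters purely for illustration.

```python
# Sketch of carrying a distilled plan into a fresh context window.
# Budget is approximated in characters (a real system would count tokens).

def new_window(plan: str, recent_messages: list[str], budget: int = 2000) -> list[str]:
    """Lead with the compact implementation plan, then keep as many of
    the most recent messages as fit, in chronological order."""
    window = [f"IMPLEMENTATION PLAN (carry-over):\n{plan}"]
    used = len(window[0])
    for msg in reversed(recent_messages):  # walk from newest to oldest
        if used + len(msg) > budget:
            break
        window.insert(1, msg)  # older messages slot in after the plan
        used += len(msg)
    return window
```

Because the plan always survives the cut, the agent stays aligned even when earlier conversational detail is dropped.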
It sounds like starting from scratch on a complex problem can be costly in terms of both time and effort. Instead of manually prototyping every option, developers can use an LLM to explore different approaches, generate preliminary implementations and highlight trade-offs. These outputs can be benchmarked, allowing teams to focus on the most promising avenues before making a deeper investment.
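Benchmarking AI-generated alternatives need not be elaborate. As a sketch, suppose a model proposed two candidate implementations of the same routine; the standard library’s `timeit` is enough to check behavioural equivalence first, then compare speed. The two candidates here are simple stand-ins, not output from any actual model.

```python
# Comparing two hypothetical AI-suggested alternatives before committing.
import timeit

def concat_join(parts: list[str]) -> str:
    return ",".join(parts)

def concat_loop(parts: list[str]) -> str:
    out = ""
    for i, p in enumerate(parts):
        out += ("," if i else "") + p
    return out

parts = [str(n) for n in range(1000)]
# Equivalence check comes before any performance comparison.
assert concat_join(parts) == concat_loop(parts)

for fn in (concat_join, concat_loop):
    t = timeit.timeit(lambda: fn(parts), number=200)
    print(f"{fn.__name__}: {t:.4f}s")
```

A few lines like these let a team discard the weaker avenue cheaply, which is exactly the deeper-investment filter described above.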
AI can also serve as a “reviewer” mechanism: developers can present an implementation plan and ask the model to stress-test it, identifying weaknesses or overlooked considerations. This role can raise questions early on, helping teams prepare more thoroughly for stakeholder reviews. By broadening its application in this way, AI shifts from being a tool for speeding up execution to one that expands perspective, supporting faster delivery and better-informed decision-making.
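A reviewer prompt differs from a coding prompt mainly in what it forbids. This hypothetical wrapper asks the model to attack a plan rather than extend it; the wording is an assumption, offered only as a starting point.

```python
# Hypothetical stress-test prompt: the model reviews, it does not build.

def stress_test_prompt(plan: str) -> str:
    return (
        "You are reviewing, not implementing. For the plan below, list:\n"
        "1. Assumptions that could be wrong\n"
        "2. Failure modes or edge cases it ignores\n"
        "3. Questions a stakeholder review would likely raise\n\n"
        f"PLAN:\n{plan}"
    )
```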
From Code Companion To Workflow Enabler
“Most developers are already familiar with using AI assistants to debug and improve local code. However, things become more interesting when these tools interact with external systems. For example, when updating Terraform configurations [an infrastructure-as-code instruction], having an accurate view of the current environment is crucial. Without this visibility, there’s a risk of introducing conflicts or creating drift between the intended and actual state. AI agents cannot provide this assurance unless they access the right data sources, involving necessary permissions and integrations,” detailed Garnier, in an extended deep dive session on this topic.
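One concrete way to give an agent (or its human supervisor) that visibility is Terraform’s documented `-detailed-exitcode` flag, where exit code 0 means no changes, 1 means an error and 2 means pending changes, i.e. possible drift. The guardrail wrapper below is a hypothetical sketch around that real CLI behaviour.

```python
# Guardrail sketch: detect drift before an agent touches Terraform config.
# Uses Terraform's documented `plan -detailed-exitcode` semantics:
#   0 = in sync, 1 = error, 2 = changes pending.
import subprocess

def classify_plan_exit(code: int) -> str:
    return {0: "in-sync", 2: "drift-or-pending-changes"}.get(code, "error")

def plan_status(workdir: str) -> str:
    """Run a read-only plan and classify the result (requires terraform)."""
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false"],
        cwd=workdir, capture_output=True, text=True,
    )
    return classify_plan_exit(result.returncode)
```

Blocking an agent’s config edits whenever the status is not "in-sync" is one simple way to stop it from compounding an existing divergence between intended and actual state.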
With the right setup, AI tools – in the form of assistants, or more sophisticated agents – stop being passive helpers. Garnier is confident that they can “evolve into workflow enablers” that accelerate incident response, reduce time to resolution and ensure operational decisions are grounded in reliable, contextual performance data. As a result, AI transforms from a faster way to write code into a trusted guide for maintaining resilient, observable, and high-performing systems at scale.
A Blossoming Partnership?
The takeaways here are that for AI coding tools to be a success, developers need to start treating AI not as a shortcut, but as a collaborator: one that excels with the right context, guardrails and workflows.
As the technology matures, Garnier’s parting words on this subject urge us to understand that the most effective developers won’t just use AI to generate code more quickly. Instead, they’ll integrate it throughout the entire software delivery lifecycle, including research, planning, reviewing and maintenance with greater precision. By doing this, AI shifts from a coding assistant to an engineering partner, enabling teams to navigate complexity with confidence.