Disclaimer/Edit: This post was heavily “polished” and rewritten with AI. The content is mine and true, but I’m sure you’ll detect the AI patterns in this one :).
The Problem with Traditional DBA Operations
If you’ve worked in enterprise SQL Server long enough, you know how migrations usually go.
A request comes in to migrate an environment. Someone starts gathering specs on the current servers: CPU, memory, storage, utilization, SQL configuration, clustering details, high availability requirements, service accounts, networking, dependencies. All of it gets pulled together manually. Then infrastructure gets involved. Servers get provisioned. Someone follows a runbook. SQL gets installed. Best practices get applied. Always On gets configured. Migration prep gets done. Validation happens. Documentation gets updated.
And somewhere in the middle of all that, there’s usually a heavy dependency on one or two senior engineers or DBAs who know exactly how everything is supposed to be built.
That’s been normal for a long time.
But over the past several weeks, I’ve been leading a project that has fundamentally changed how I think about database engineering operations and database administration altogether.
We’ve been building what I’d describe as an AI-assisted, pipeline-driven operating model for SQL Server migrations and environment provisioning. The goal wasn’t to create a flashy chatbot or automate one isolated task. The goal was to rethink the entire operational flow, from initial server request all the way through a fully configured dev, test, and prod SQL Server environment, and, more importantly, to make it repeatable, standardized, and dramatically less manual.
Our Breakthrough Wasn’t AI, It Was Structure
A big part of what made this possible was an internal orchestration platform built by our developer experience team.
The orchestration platform isn’t the AI model itself. It’s more like the framework the model operates inside. It gives the model structure through reusable skills, markdown-based instruction files, live build specifications, workflow context, and operational guardrails that help it make much better decisions than you’d get from raw prompting alone.
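To make the “structure” idea concrete: our platform is internal, so every detail below is invented, but a markdown-based instruction file for one reusable skill might look something like this sketch of the pattern:

```markdown
# Skill: sql-discovery

## Purpose
Gather infrastructure and SQL Server configuration details for a set of
source servers supplied by the DBA.

## Inputs
- Source SQL Server names for dev, test, and prod

## Guardrails
- Read-only: never run anything that modifies server state.
- If a value cannot be collected, flag it for the DBA instead of guessing.

## Output
A structured inventory the sizing workflow can consume.
```

That distinction matters because, in my experience, AI by itself is not reliable enough for operational engineering. It needs structure. It needs guardrails. It needs context. Once you give it those things, it becomes incredibly powerful.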
What made this project successful wasn’t “AI magically writing scripts.” It was creating a system where AI could operate inside well-defined engineering boundaries, with clear expectations, reusable workflows, and operational guardrails that made the outputs consistent and trustworthy.
Step One: AI-Assisted Infrastructure Requests
One of the first workflows we built was an AI-assisted provisioning workflow designed to streamline the server request and sizing process.
A DBA invokes the workflow and provides the source SQL Server names. From there, the workflow kicks off discovery scripts that gather infrastructure and SQL configuration details across development, test, and production. It analyzes resource allocations and utilization, looks at workload characteristics, and then generates a properly formatted infrastructure request email with recommended sizing based on what it found.
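To give a sense of what that discovery step involves, here’s a minimal sketch, assuming WinRM access and the SqlServer PowerShell module; the server names, queried settings, and output shape are all illustrative, not our actual scripts:

```powershell
# Minimal discovery sketch: collect CPU/memory plus a few SQL settings per server.
# Assumes the SqlServer module is installed (Install-Module SqlServer).
Import-Module SqlServer

$servers = 'SQLDEV01', 'SQLTEST01', 'SQLPROD01'   # hypothetical source names

$inventory = foreach ($server in $servers) {
    # Hardware facts via CIM over WinRM
    $cs = Get-CimInstance -ComputerName $server -ClassName Win32_ComputerSystem

    # A couple of instance-level settings from sys.configurations
    $cfg = Invoke-Sqlcmd -ServerInstance $server -Query @"
SELECT name, value_in_use FROM sys.configurations
WHERE name IN ('max server memory (MB)', 'max degree of parallelism');
"@

    [pscustomobject]@{
        Server      = $server
        LogicalCpus = $cs.NumberOfLogicalProcessors
        MemoryGB    = [math]::Round($cs.TotalPhysicalMemory / 1GB)
        SqlSettings = ($cfg | ForEach-Object { "$($_.name)=$($_.value_in_use)" }) -join '; '
    }
}

$inventory | Format-Table -AutoSize
```

The real workflow gathers far more than this (utilization history, clustering details, dependencies), but the pattern is the same: collect, normalize, and hand a structured inventory to the sizing logic.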
That alone removed a lot of manual engineering and administration effort and standardized what had historically been a very manual process.
Then We Eliminated YAML Authoring Too
One of the last manual pieces in our process was filling out the deployment build specification YAML file that drives our Azure DevOps pipeline. Even though the downstream deployment was heavily automated, someone still had to manually populate the spec file with all the environment details.
So I built another AI-driven workflow that automatically generates the build specification document used to kick off the deployment pipeline.
Now a DBA can simply say something like, “I want to migrate the SQLXXX servers,” and the workflow responds with a structured prompt asking for the key details we need: service accounts, Availability Group name, listener information, cluster information, IPs, and other environment-specific values.
Once that information is provided, the system builds out the full YAML deployment specification for dev, test, and prod automatically. Server names, IP assignments, service accounts, cluster metadata, Availability Group configuration… All of it gets generated in the correct format and becomes immediately usable by our Azure DevOps pipeline.
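To make that concrete, here is roughly the shape such a spec could take for one environment; every field name and value below is invented for illustration (our real spec format is internal):

```yaml
# Hypothetical build spec shape; all field names and values are illustrative.
environment: dev
servers:
  - name: SQLXXXD01
    ip: 10.0.10.21
  - name: SQLXXXD02
    ip: 10.0.10.22
cluster:
  name: SQLXXXD-CL
  ipAddress: 10.0.10.20
availabilityGroup:
  name: AG-SQLXXX-DEV
  listener:
    name: SQLXXX-DEV-LSN
    ip: 10.0.10.25
    port: 1433
serviceAccounts:
  engine: DOMAIN\svc-sqlxxx-dev
  agent: DOMAIN\svc-sqlxxxagt-dev
```

The generated document covers test and prod in the same way, which is exactly the kind of repetitive, error-prone authoring nobody misses.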
At that point, the DBA commits the YAML file, opens a pull request, and the pipeline takes over.
That was one of those moments where I stopped and thought, okay… this is becoming something much bigger than automation.
From Pull Request to Fully Built SQL Platform
Once the build specification lands in source control, our Azure DevOps pipeline consumes it and orchestrates the full deployment lifecycle.
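As a rough illustration of that handoff, this is the standard Azure DevOps pattern for triggering a pipeline when a spec file changes (the paths here are hypothetical, not our actual repository layout):

```yaml
# Illustrative azure-pipelines.yml fragment: run when a build spec changes.
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - specs/*.yaml   # hypothetical location of generated build specs
```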
Behind the scenes, I’ve developed more than 50 PowerShell scripts, built with the help of AI and our internal orchestration platform, organized into logical deployment stages that handle everything from VM setup and cluster creation to SQL installation, post-install configuration, Always On setup, and pre-migration preparation.
That includes:
- full VM configuration
- Windows Failover Cluster setup and validation
- SQL Server installation and configuration
- post-install best practices and tuning
- Always On Availability Group implementation
- pre-migration object preparation
- environment validation and testing
Each stage is modular, testable, and designed around a specific operational responsibility. That separation has made the platform easier to troubleshoot, easier to maintain, and easier to evolve as we continue building.
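To make the stage idea concrete, here’s a simplified sketch of how such a stage runner could be organized; the script names, parameter, and runner itself are hypothetical, not our production pipeline code:

```powershell
# Hypothetical stage runner: each stage is a standalone script with a
# single operational responsibility, executed in order.
$ErrorActionPreference = 'Stop'   # any stage failure halts the whole run

$stages = @(
    '01-configure-vm.ps1'
    '02-build-failover-cluster.ps1'
    '03-install-sql.ps1'
    '04-apply-post-install-config.ps1'
    '05-configure-alwayson-ag.ps1'
    '06-prepare-migration-objects.ps1'
    '07-validate-environment.ps1'
)

foreach ($stage in $stages) {
    Write-Host "==> Running stage: $stage"
    # Each stage script reads the same build spec (parameter name is illustrative).
    & (Join-Path $PSScriptRoot $stage) -SpecPath '.\buildspec.yaml'
}
```

Keeping each stage in its own script is what makes the platform testable: a failed Always On setup can be rerun on its own without re-provisioning the VM or reinstalling SQL.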
The first time I watched the whole flow run end-to-end, from a simple migration request to a fully built YAML spec committed to source control to an automated deployment that stood up and configured an entire SQL platform, was one of those engineering moments you don’t forget.
It felt surreal.
What AI Actually Changed
There’s a lot of noise right now about AI replacing engineers.
That’s not what I’ve experienced.
What I’ve experienced is that AI dramatically accelerates engineering velocity when it operates inside the right framework.
It helped me prototype faster, iterate faster, build scripts faster, and operationalize ideas much quicker than I could have on my own.
But every meaningful piece still required engineering judgment: architecture decisions, testing, validation, troubleshooting, refinement, and careful thought about operational safety.
AI accelerated the work; engineering still made it real. That’s an important distinction.
What We’re Really Building
What excites me most is that this project is bigger than automation.
It’s about operationalizing database engineering knowledge in a repeatable way.
For years, what made senior DBAs valuable has often lived in experience: knowing what questions to ask, which best practices matter, how to sequence work, and where mistakes typically happen.
What we’re starting to do is turn that knowledge into structured workflows and systems.
That doesn’t replace engineers or DBAs.
If anything, it raises the importance of good engineering overall, because someone still has to design the architecture, validate outputs, build safeguards, and handle the edge cases.
But it does change what’s possible.
Looking Ahead
I think database engineering and database administration are moving toward the same kind of platform thinking that infrastructure engineering embraced years ago with Infrastructure as Code, CI/CD, and standardized deployment pipelines.
For a long time, database operations have lagged behind that evolution.
I don’t think that will be true much longer.
And honestly, I think we’re just scratching the surface.
What we’re building feels less like a project and more like the beginning of a very different way to run SQL Server operations moving forward.
