Cheezy World

Delivering software with discipline and joy

The future of software engineering: My Thoughts - Part 2

This is part 2 in a series where I discuss my thoughts on the Thoughtworks retreat report titled The future of software engineering. If you haven’t read part 1, I’d encourage you to do so first as it provides a lot of context.

In part 2, I will discuss the second and third categories from the report. Let’s jump right in.

2. The middle loop: a new category of work

The report states “The retreat’s strongest first-mover concept. Nobody in the industry has named this yet.” It goes on to say:


The future of software engineering: My Thoughts - Part 1

In February 2026, Thoughtworks released an article titled “The future of software engineering”. It is a great paper and I encourage everyone to read it. The article lists key themes and takeaways from attendees at a retreat they organized. Here is the Executive summary:

Executive summary

Senior engineering practitioners from major technology companies gathered for a multi-day retreat to confront the questions that matter most as AI transforms software development. The discussions covered more than twenty topics across breakout sessions, but the most significant insights didn’t emerge from one single session. Instead, they surfaced at various intersections; we found that the same concerns kept appearing in different conversations, framed by different people solving different problems.


Closing the Loop: How We Added Traceability to Stride

When AI agents review code, their findings shouldn’t vanish into the ether. That was the driving insight behind the review_report feature we shipped in Stride v1.25 and the Stride plugin v1.4 this week. Here’s the story of what we built, why it matters, and how all the pieces fit together.

The Problem

Stride’s task-reviewer SubAgent already did excellent work — it checks code changes against acceptance criteria, verifies pitfall avoidance, and validates both pattern compliance and alignment with the testing strategy. But the review output lived only in the agent’s conversation context. Once the agent completed the task, the structured findings were gone. Human reviewers opening a task in the Review or Done column had no visibility into what the AI had already checked.
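As a rough sketch of the idea (the class and field names here are my own illustration, not Stride’s actual review_report schema), persisting the reviewer’s findings amounts to turning them into a small structured record that can be attached to the task instead of dying with the conversation:

```python
from dataclasses import dataclass, field, asdict

# Hypothetical illustration only — not Stride's real schema.
@dataclass
class Finding:
    criterion: str        # what was checked, e.g. an acceptance criterion
    passed: bool          # whether the check succeeded
    note: str = ""        # optional reviewer commentary

@dataclass
class ReviewReport:
    task_id: str
    findings: list = field(default_factory=list)

    def to_dict(self) -> dict:
        """Serialize the report so it can be stored on the task card."""
        return asdict(self)

report = ReviewReport(task_id="TASK-42")
report.findings.append(Finding("Acceptance criteria met", True))
report.findings.append(Finding("Testing strategy aligned", True, "unit + integration"))

print(len(report.to_dict()["findings"]))  # 2
```

Once the findings survive as data rather than chat history, a human opening the task later can see exactly what the AI already verified.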


Stride’s New Enhanced Workflow

Over the past two weeks, the Stride Workflow has gained a number of enhancements. I wanted to take the time to detail them here, as I think they are a good example of how one can direct an AI agent to perform specific tasks in a repeatable manner.

What is Stride?

For those new here: Stride is a kanban-based task management platform built specifically for collaboration between humans and AI agents. It is very different from the typical “let the agents run through a lot of tasks and we’ll check at the end” mode of working with AI. Instead, Stride gives the human (team) multiple places to insert themselves into the workflow. Enough of that for now, as I want this post to focus on the workflow enhancements.


Teaching AI Agents to Look Before They Leap

The Stride Claude Code plugin just shipped version 1.1, and the headline feature is subagent orchestration — the ability for an AI agent to dispatch specialized sub-agents at key points in the task lifecycle. Instead of one agent doing everything sequentially, the primary agent now coordinates a team: an explorer that reads the codebase before coding starts, and a reviewer that checks the work before tests run.

This post covers what changed and how it works.
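The sequence described above — explorer before coding, reviewer before tests — can be sketched as a simple orchestration loop. This is a hypothetical illustration of the ordering only; `dispatch_subagent` and the agent names are stand-ins I made up, not the plugin’s real API:

```python
# Hypothetical sketch of the subagent orchestration order described above.
def dispatch_subagent(name: str, payload: dict) -> dict:
    # Stand-in for handing work off to a specialized sub-agent.
    return {"agent": name, "ok": True, "payload": payload}

def run_task(goal: str) -> list:
    steps = []
    # 1. An explorer reads the codebase before any code is written.
    steps.append(dispatch_subagent("explorer", {"goal": goal}))
    # 2. The primary agent implements the change (elided here).
    steps.append({"agent": "primary", "ok": True})
    # 3. A reviewer checks the work before the test suite runs.
    steps.append(dispatch_subagent("reviewer", {"diff": "<changes>"}))
    return [step["agent"] for step in steps]

print(run_task("add feature"))  # ['explorer', 'primary', 'reviewer']
```

The point of the structure, as the post frames it, is that the primary agent coordinates a team rather than doing everything sequentially itself.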