Cheezy World

Delivering software with discipline and joy

Stride's New Enhanced Workflow

Over the past two weeks, the Stride Workflow has gained a lot of enhancements. I want to detail them here because I think they are a good example of how to direct an AI agent to perform specific tasks in a repeatable manner.

What is Stride?

For those new here: Stride is a kanban-based task management platform built specifically for collaboration between humans and AI agents. It is very different from the typical “let the agents run through a lot of tasks and we’ll check at the end” mode of working with AI. Instead, Stride gives the human team multiple places to insert themselves into the workflow. Enough of that for now; I want this post to focus on the workflow enhancements.


The New Team

Traditional Agile development teams, composed of a Product Owner (PO), a Scrum Master (SM), several Developers, and one or two Testers, are incompatible with AI-driven development. The old team structure was built on the assumption that work happens at a human developer's pace and achieves human developer quality. We built all of our processes around that pace: everything from POs working with stakeholders to define and elaborate user stories, to SMs facilitating Sprint Planning and Daily Standups, to our approach to testing.


Stride Has New Skills

When you build an API designed for AI agents, you quickly discover something humbling: agents make the same mistakes over and over. Not because they’re incapable, but because they’re working from memory instead of reference material. I spent a few days experimenting and ultimately fixing this in Stride, and the results surprised me.

The Problem

Stride is a kanban-based task management platform built for AI agents. Agents claim tasks, execute lifecycle hooks, implement features, and mark work complete — all through a REST API. The system works well when agents format their requests correctly. The trouble is, they often don’t.
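As a sketch of what that lifecycle looks like from the agent's side, here is a minimal Python model of the calls an agent might make. The endpoint paths, field names, and hook name are hypothetical illustrations, not Stride's actual API:

```python
from dataclasses import dataclass, field


@dataclass
class TaskRequest:
    """One HTTP call an agent would make against a kanban task API."""
    method: str
    path: str
    body: dict = field(default_factory=dict)


def claim_task(task_id: str, agent_id: str) -> TaskRequest:
    # Hypothetical endpoint: move the task to "in progress" and record the owner.
    return TaskRequest("POST", f"/tasks/{task_id}/claim", {"agent": agent_id})


def run_hook(task_id: str, hook: str) -> TaskRequest:
    # Lifecycle hooks run as explicit API calls, so the platform
    # can verify that each step actually happened.
    return TaskRequest("POST", f"/tasks/{task_id}/hooks/{hook}")


def complete_task(task_id: str, summary: str) -> TaskRequest:
    # Mark the work done, with a summary the team can review.
    return TaskRequest("POST", f"/tasks/{task_id}/complete", {"summary": summary})


# The happy path: claim, run a hook, implement, mark complete.
steps = [
    claim_task("T-42", "agent-7"),
    run_hook("T-42", "pre-implementation"),
    complete_task("T-42", "Implemented the feature with tests"),
]
for step in steps:
    print(step.method, step.path)
```

The point of modeling each step as a distinct request is that a malformed payload fails at one specific, inspectable step, rather than somewhere inside a long opaque run.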


My AI Development Environment

I see a lot of people posting about how bad AI is at creating code. My experience, and that of everybody I work with, is the opposite: AI is producing high-quality code that is well factored and well tested. For a long time it was a mystery to me how people could have such opposite experiences.

I now have three cases where close friends made such statements and I offered to chat to understand their experience. In each case, they were using tools that were far from ideal, and the tools they had were not configured at all. When I told them about options to give the AI more instructions and to constrain and direct how it works, they were surprised.


Adding Metrics to Stride

In this post we will simply show you a video of the new metrics feature in Stride. Teams can use these metrics to understand how they are performing and find ways to improve. Four metrics are available:

  • Throughput
  • Cycle Time
  • Lead Time
  • Wait Time (our special version)

We also support export to PDF and Excel.
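For the three standard metrics, the usual kanban definitions apply: lead time runs from task creation to completion, cycle time from the start of work to completion, and throughput counts completed tasks per period. Here is a rough sketch of how they could be computed from task timestamps. The function names are mine, and I am approximating wait time as the gap between creation and start, which may differ from Stride's special version:

```python
from datetime import datetime, timedelta


def lead_time(created: datetime, done: datetime) -> timedelta:
    # Lead time: from the moment the task was created until it was done.
    return done - created


def cycle_time(started: datetime, done: datetime) -> timedelta:
    # Cycle time: from the moment work actually started until it was done.
    return done - started


def wait_time(created: datetime, started: datetime) -> timedelta:
    # A naive wait time: how long the task sat before anyone picked it up.
    # (Stride's special version may count waiting differently.)
    return started - created


def throughput(done_dates: list[datetime], window: timedelta) -> float:
    # Throughput: completed tasks per window, averaged over the observed span.
    if not done_dates:
        return 0.0
    span = max(done_dates) - min(done_dates)
    periods = max(span / window, 1.0)
    return len(done_dates) / periods


created = datetime(2025, 1, 1)
started = datetime(2025, 1, 3)
done = datetime(2025, 1, 8)
print(lead_time(created, done))    # 7 days
print(cycle_time(started, done))   # 5 days
print(wait_time(created, started)) # 2 days
```

Computed this way, wait time plus cycle time always adds up to lead time, which makes it easy to see whether a slow lead time comes from tasks sitting idle or from slow execution.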