Show HN: Fast-agent – Compose MCP enabled Agents and Workflows in minutes

github.com

29 points by evalstate 3 days ago

Hello, HN.

I've created fast-agent to make building my own products easier - and remove the friction between defining Prompts, MCP Servers and their composition. It uses a simple, declarative style that's easy to work with and source control - with inbuilt support for the patterns in the Building Effective Agents paper.
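For a flavour of the declarative style, a single-agent definition looks roughly like this. It's a simplified sketch - the decorator options shown are illustrative, so check the repo examples for the exact set:

    import asyncio
    from mcp_agent.core.fastagent import FastAgent

    fast = FastAgent("example")

    # Declarative definition: prompt, MCP servers and model in one place.
    # Simplified sketch - see the repo for the full decorator options.
    @fast.agent(
        name="researcher",
        instruction="Summarise the documents you are given.",
        servers=["filesystem"],   # MCP servers declared in fastagent.config.yaml
        model="sonnet",
    )
    async def main():
        async with fast.run() as agent:
            await agent("Summarise the files in ./docs")

    if __name__ == "__main__":
        asyncio.run(main())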

Because you can "warm up" and interact with Agents before, during or after the workflows, it's easy to diagnose and tune Agent prompts and behaviour for later runs. Being able to set these workflows up makes LLM Context Management and Tool Selection a lot easier and can vastly improve output quality for little effort.

For MCP Server developers, you can see how different models interpret tool descriptions. There's also MCP Roots support, and it comes bundled with a ChatGPT-style data-analysis tool (`fast-agent bootstrap data-analysis`) as one of the demonstrations.

One of the things I am most looking forward to is combining MCP data retrieval with Anthropic's Citations API - I think that's going to be an incredibly important feature in a lot of scenarios.

It's been forked from, and builds upon, Sarmad Qadri's mcp-agent framework, and we're collaborating to keep the projects in sync.

Anyway, I'd love to hear your thoughts and feedback on this project, and I'm eager to hear from potential users, contributors and collaborators.

saqadri 3 days ago

Creator of mcp-agent here. OP and I are going to be keeping these projects in sync and building agent apps on top of these patterns. MCP is pretty new and people are adding lots of servers for different software services. I think the next step is exposing agents (i.e. compound AI workflows) themselves as MCP servers.

In addition to simple abstractions for declaring agents, fast-agent has a really nice CLI utility, which can be useful for interacting with MCPs outside of Cursor and Claude Desktop, where they are most popular atm.

qin 3 days ago

> you can see how different models interpret tool descriptions.

How's this done? I saw the creator of MCP recommended¹ "investing heavily in tool descriptions" but it wasn't clear exactly how to

¹ — https://x.com/dsp_/status/1897599702859645345

  • evalstate 3 days ago

    The Messages API contains a special section for placing Tool information, which is added to the Context Window - and it's this information that the Model then uses to decide whether to attempt a Tool Call.

    In that case, we configure the MCP Server, and then the Host application (in this case fast-agent) uses the Anthropic or OpenAI API to populate it, and they inject it into the Context Window[1] in the format best suited to their model.
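
    Concretely, with the Anthropic SDK the tool definitions end up in the `tools` field of the Messages API call - here's a minimal sketch (the `read_file` tool is just an illustrative stand-in for what an MCP Server's tool might expand to):

      import anthropic

      client = anthropic.Anthropic()

      # Tool definitions go in a dedicated "tools" field; the API renders them
      # into the model's context window in the format that model expects.
      response = client.messages.create(
          model="claude-3-5-sonnet-latest",
          max_tokens=1024,
          tools=[{
              "name": "read_file",   # illustrative tool, not a real server's schema
              "description": "Read a UTF-8 text file and return its contents.",
              "input_schema": {
                  "type": "object",
                  "properties": {"path": {"type": "string"}},
                  "required": ["path"],
              },
          }],
          messages=[{"role": "user", "content": "What does README.md say?"}],
      )

      # If the model decides to call the tool, the response carries a tool_use block.
      for block in response.content:
          if block.type == "tool_use":
              print(block.name, block.input)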

    So for fast-agent, we can set the model when we define the agent with `model="o3-mini.medium"`, or from a command-line switch. Depending on the type of eval you are doing, you could, for example, use a Parallel workflow to see how the different models perform. Quite often, given a failing tool call, the model will attempt to recover (the @modelcontextprotocol/server-filesystem is... an interesting example).
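
    As a rough sketch, that Parallel comparison could look something like the following - decorator and parameter names here are simplified shorthand, so check the repo examples for the exact workflow API:

      # Two agents with the same instruction and servers, different models,
      # fanned out in parallel so their tool-calling behaviour can be compared.
      # Sketch only - parameter names may differ from the released API.
      @fast.agent(name="sonnet_eval", instruction="List the project files.",
                  servers=["filesystem"], model="sonnet")
      @fast.agent(name="o3_eval", instruction="List the project files.",
                  servers=["filesystem"], model="o3-mini.medium")
      @fast.parallel(name="compare", fan_out=["sonnet_eval", "o3_eval"])
      async def evals():
          async with fast.run() as agent:
              await agent.compare("List the files in the project root")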

    Another fun one is to use Opus 3 tool calling, where it emits <thinking> tags showing how and why it's making the call.

    One final point is that different combinations of tools will give different behaviours - if 2 MCP Servers have similar definitions, that will degrade performance... One of the motivations for fast-agent is precisely that it allows dividing tasks up amongst different context windows to get the sharpest performance.

    Link to the Anthropic docs below, as it's my preferred explanation. The Messages APIs grab the JSON and present it as Tool Call types - other models will simply emit JSON and let the Client handle it.

    [1] https://docs.anthropic.com/en/docs/build-with-claude/tool-us...