
Why did I create another LLM framework?

A good thing that has come from the surge in LLM popularity is the ton of new tools and frameworks available for developers.


Yet... I've just open-sourced yet another framework for building LLM apps. Why?


This blog post describes my personal journey creating RΞASON. For a blog post on RΞASON itself and its features check here.

Background

I've been tinkering with LLMs since the GPT-3 API's release in May 2020. In the early days I didn't feel the need for external libraries or frameworks: building with LLMs was mostly HTTP calls and simple prompting.


But as the field evolved, new techniques were introduced — agents with tools, advanced RAG pipelines, and function calling, to name a few — and I found myself increasingly doing undifferentiated heavy lifting.


A significant chunk of my time was consumed by tasks like parsing strings to determine which tool the LLM selected and with what parameters, handling cases where the LLM passed the wrong parameters, parsing text-based HTTP streams to extract structured output, dealing with back-pressure and cancellation in streams, and building robust observability for troubleshooting agent runs.
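To give a feel for the first of those tasks, here is a hypothetical sketch of the kind of hand-rolled parsing the post describes: extracting which tool an LLM picked, and with what arguments, from raw model text. The `TOOL:`/`ARGS:` format and the `parseToolCall` helper are my own illustrative assumptions, not RΞASON's actual code.

```typescript
// Hypothetical example of pre-framework "undifferentiated heavy lifting":
// the prompt asks the model to reply in the form
//   TOOL: <name>
//   ARGS: <json>
// and we have to parse (and distrust) whatever comes back.

interface ToolCall {
  tool: string;
  args: Record<string, unknown>;
}

function parseToolCall(raw: string): ToolCall | null {
  const toolMatch = raw.match(/TOOL:\s*(\w+)/);
  const argsMatch = raw.match(/ARGS:\s*(\{[\s\S]*\})/);
  if (!toolMatch || !argsMatch) return null; // model ignored the format

  try {
    return { tool: toolMatch[1], args: JSON.parse(argsMatch[1]) };
  } catch {
    return null; // model emitted malformed JSON
  }
}

const reply = 'TOOL: search\nARGS: {"query": "weather in Lisbon"}';
console.log(parseToolCall(reply));
```

Multiply this by every tool, every malformed reply, and every streaming variant, and the "ton of undifferentiated work" below becomes clear.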


It was a ton of undifferentiated work that felt tedious to redo every single time I started a new project. Around March I realized that a framework would help. Yet, to my surprise, existing LLM frameworks seemed to focus elsewhere: pre-made prompts, adaptors for vector databases, ready-to-use tools, and data-ingestion solutions. While these features are great for beginners, they didn't address my needs. That's when I started to think about RΞASON.


RΞASON...?

RΞASON is a minimalistic, open-source backend Typescript framework for building great LLM apps.


I had a few goals in mind when creating it:


  • Only do the undifferentiated heavy lifting. No pre-made prompts, no pre-made agents, no vector-DB integrations, etc. That's all up to the developer.
  • Push the API design to its limit in order to create the absolute best experience for a Typescript AI engineer. As you'll see below, RΞASON has a truly unique design.

RΞASON's unique design

RΞASON interoperates with your code itself. For instance, check out how you'd get structured output in RΞASON:

```typescript
import { reason } from 'tryreason'

interface Joke {
  /** Use this property to indicate the age rating of the joke */
  rating: number;

  joke: string;

  /** Use this property to explain the joke to those who did not understand it */
  explanation: string;
}

const joke = await reason<Joke>('tell me a really spicy joke')

// `joke` will be:
{
  "joke": "I'd tell you a chemistry joke but I know I wouldn't get a reaction.",
  "rating": 18,
  "explanation": "This joke is a play on words. The term 'reaction' refers to both a chemical process and a response from someone. The humor comes from the double meaning, implying that the joke might not be funny enough to elicit a response."
}
```

You create a normal TS interface describing what you want and simply pass it as a generic to the reason() function. RΞASON then uses that interface to inform the LLM what object to return. This feels like magic.

RΞASON also uses the JSDoc comments as the prompt for each property.
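Since TypeScript types are erased at runtime, a design like this presumably needs a compile-time step that turns the interface and its JSDoc comments into something the LLM can actually be shown. The sketch below is my own guess at what that transform conceptually produces, not RΞASON's real internals: the `jokeSchema` object and `buildPrompt` helper are hypothetical, using JSON Schema as the intermediate format.

```typescript
// Hypothetical sketch: what a compile-time transform might emit for
// `reason<Joke>(...)`, turning the interface into a JSON Schema where
// each JSDoc comment becomes a per-property `description`.
const jokeSchema = {
  type: "object",
  properties: {
    rating: {
      type: "number",
      description: "Use this property to indicate the age rating of the joke",
    },
    joke: { type: "string" },
    explanation: {
      type: "string",
      description:
        "Use this property to explain the joke to those who did not understand it",
    },
  },
  required: ["rating", "joke", "explanation"],
};

// The user's prompt can then be combined with the schema before being
// sent to the model:
function buildPrompt(userPrompt: string, schema: object): string {
  return `${userPrompt}\n\nRespond with JSON matching this schema:\n${JSON.stringify(schema, null, 2)}`;
}

console.log(buildPrompt("tell me a really spicy joke", jokeSchema));
```

Whatever the actual mechanism, this is why the generic-plus-JSDoc approach requires a build step rather than working at plain runtime.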


Another example of RΞASON interoperating with your code is how you create an agent.


Outcome

To be honest, I'm pretty happy with RΞASON's API design. I made some questionable choices (such as RΞASON also being an HTTP server, lol); however, the core design of using interfaces and JSDoc comments feels, to me, like magic (and in a good way!).


My number one goal was to create the absolute best experience possible for a Typescript AI engineer. And, honestly, I think I achieved it.


For LLM frameworks that actually want to grow in adoption, though, I'm not sure RΞASON's design is a good tradeoff: you immediately lose the plain-JavaScript market and gain the requirement of a compiler/transpiler.


Maybe an already-established framework like Next.js could make that tradeoff. But that's reserved for only a few players in the space.




Going back to the original question though: why did I create another LLM framework?


While I did set some goals, my main motivation was basically fun & curiosity. Yeah.