NEBULUM.ONE

Building AI Agents: Classifiers & Routers

In this tutorial we’ll explore how AI agents make decisions about what tools to use.


A Guide to Building AI Agents That Can Complete Tasks

In today’s tutorial I wanted to demystify how AI agents actually make decisions and how they know what tool or tools to use in any given case.

So at a very basic level, anytime you use an LLM for plain conversation or research, you're using a non-agentic, single-function tool. The AI knows you're expecting a text-based response to whatever question you asked: your input goes into the model, it does some computation under the hood, and it returns an answer.

But what if you wanted this AI to become agentic, a function-calling agent, meaning it could do more than just give you text responses? For instance, perhaps it could generate an image for you, send an email, take a picture, or turn up the heat. And what if we went even further — for example, what if you set up the logic so that you didn't even need to ask it to do these tasks? What if it could act autonomously based on what's happening in the world around it?
This is where task triggers come in. These are the things that happen that let an AI agent know when to act:

User-triggered tasks: You explicitly ask it to do something — “send an email” or “check the weather.”

Environment-triggered tasks: The AI monitors conditions and acts automatically — if it’s cold, turn up the heat; if it’s 8am, send your morning briefing and so on.
Event-triggered tasks: Something happens in the world that causes action — a package is delivered, so notify you; a meeting starts soon, so send a reminder.

So AI agents need to know not only what to do, but also when to do it. And this is where it gets really interesting.
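The three trigger types above can be sketched as one dispatcher that receives a trigger object and decides what to do. This is a minimal illustration in JavaScript; the trigger shapes and the actions inside each branch are my own hypothetical examples, not a prescribed API:

```javascript
// Minimal sketch of the three trigger types feeding one dispatcher.
// Trigger shapes and actions are illustrative placeholders.
function dispatch(trigger) {
  switch (trigger.type) {
    case "user": // explicit request: "send an email"
      return `handling user request: ${trigger.request}`;
    case "environment": // a monitored condition crossed a threshold
      return trigger.tempC < 18 ? "turning up the heat" : "no action";
    case "event": // something happened in the outside world
      return `notifying user: ${trigger.event}`;
    default:
      return "unknown trigger";
  }
}

console.log(dispatch({ type: "user", request: "send an email" }));
console.log(dispatch({ type: "environment", tempC: 15 }));
console.log(dispatch({ type: "event", event: "package delivered" }));
```

The point of the sketch is that all three trigger sources funnel into the same decision-making code, which is exactly where a classifier and router will later sit.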

As builders, designers, and engineers, we need to first plan what we’re trying to achieve. What’s our goal as the system architect and what triggers exist along the path towards that goal that we need to consider? Personally, I like to map this out on paper or in a flow diagram first, because it helps me visualize the logic I’m building toward. More importantly, it helps me spot the questions that I might not have the answers to yet.

For example, if we decide to create an agentic AI with a single interface where users can both ask questions and request tasks to be completed, I can start visualizing the architecture of that application.

Let's keep things really simple. Let's imagine we want our AI to be able to perform two tasks:

1. Respond to questions
2. Write and execute code to solve problems

The Classifier and The Router

Immediately, I can see we need some type of decision-making logic: a classifier and a router. When the user enters their input, they may have asked a question, or they may have requested a coding task. If they requested a task that requires tools (in our case, at least an AI model optimized for coding), we need to figure out which task they want completed and which tools we'll need to complete it.

And so this is the core challenge: how does the AI know what to do?

So this is where classification and routing come in. And keep in mind: there's no single "right" answer in terms of how to deal with this. The classification and routing method you choose depends entirely on your specific use case.

Let me walk you through the options and show how I might think about this:
To show you how to do this using a range of tools, I'll start with the simplest option: keyword matching using JavaScript. Then I'll demonstrate more advanced cases using Bubble, a no-code tool that lets you design enormously complex AI routing logic. We actually have a full Bubble course that covers using APIs, AI, backend workflows, and building agents — I've linked it below if you're interested in taking your knowledge of this topic deeper.

But for now, let’s jump into the code editor.

Keyword Matching

Okay, so the cheap and easy way to do this is through keyword matching. This will only take me a moment to show you with code, and even if you’re a no-coder or low-coder, it’s helpful to see how this works—this is how no-code tools like Bubble or Noodl operate under the hood.

So let's imagine that we have a user input. We'll call this variable “userPrompt”. This will hold the question the user asks. I'll just type in something like “Can you write code that generates a random number?”

Next, we have another variable we’ll call “keyword” that will hold the word we want to search the user prompt for. Let’s imagine here that we want to search the prompt to see if the user wants us to code anything. So we’ll enter the word “code” here.

Next, all we need to do is check to see if the word was used. There are many different ways to do this, but for simplicity’s sake, let’s just use an if statement that says: “if the userPrompt includes the word contained within the ‘keyword’ variable, then do this.”

Normally, what we would want to do here is call the function to create the code, but in our example right now, I just want to show you that we can detect whether the word exists or not. This way, you can see how the system knows if the word was found, which allows it to determine which function to call.

So I’ll just console log: “Yes, the word ‘code’ was found in the user prompt. Execute code function.” Then I’ll create an else statement that says: “No, the word ‘code’ was not found in the user prompt. Execute conversation function.”

Currently, because the keyword is set to “code” and the word “code” is found in the user prompt, our console is showing that we need to trigger the coding function. However, if I change my user prompt to something that doesn't use the word “code”, you can see that our if statement triggers the non-code (conversation) response instead.
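Assembled into runnable form, the walkthrough above looks like this. Wrapping the check in a function is my own addition (the walkthrough just uses a bare if statement); it makes the result reusable by a router later:

```javascript
// Keyword matching: check whether the user's prompt contains a trigger word.
const userPrompt = "Can you write code that generates a random number?";
const keyword = "code";

// Returns "code" if the keyword appears in the prompt, else "conversation".
function classifyByKeyword(prompt, word) {
  return prompt.toLowerCase().includes(word) ? "code" : "conversation";
}

if (classifyByKeyword(userPrompt, keyword) === "code") {
  console.log("Yes, the word 'code' was found in the user prompt. Execute code function.");
} else {
  console.log("No, the word 'code' was not found in the user prompt. Execute conversation function.");
}
```

Change `userPrompt` to something without the word "code" and the else branch fires instead.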

Now, keyword matching can be as simple or as complex as you like. You can use logical operators (AND and OR) to string together a series of words or word combinations. Alternatively, you could use regex to find keyword patterns.

The reason keyword matching is frequently used in agentic development is because it’s fast and practically free. You don’t need to make any additional API calls to AI agents to extract context from a user prompt—just some simple JavaScript under the hood to do the search for you.

However, this approach can be brittle if user prompts are unclear or ambiguous. For example, “What’s the area code of New York?” and “What’s the dress code for a job interview?” would both trigger the branching logic to hit the code agent.

API-Powered Classification AI

A more nuanced approach, therefore, is to use an API-powered classification AI. We send the user's input to the model and ask it to classify the intent: does the user want a question answered, or do they want code written?

The system prompt I’ll use here looks something like this: “You are a helpful classification assistant. Determine whether the user is requesting code generation or asking a general question. If the user requests code, output: code. Otherwise, output: conversation.”

This is dead simple—you literally just ask the AI: “Does this user want a response to a conversation, or do they want to generate code?” Essentially, this places all user queries in the appropriate bucket for routing later on.
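In code, this can look something like the sketch below, assuming an OpenAI-style chat-completions endpoint. The model name, the API-key environment variable, and the helper names are my assumptions; the system prompt is the one quoted above:

```javascript
// Sketch of API-powered classification. Endpoint format follows the
// OpenAI chat-completions API; model name and key handling are placeholders.
const SYSTEM_PROMPT =
  "You are a helpful classification assistant. Determine whether the user is " +
  "requesting code generation or asking a general question. If the user " +
  "requests code, output: code. Otherwise, output: conversation.";

// Build the request body: system prompt plus the user's raw input.
function buildClassifierRequest(userPrompt) {
  return {
    model: "gpt-4o-mini", // assumed model; any capable model works
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: userPrompt },
    ],
  };
}

// Send the request and return the classifier's label.
async function classifyWithAI(userPrompt) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(buildClassifierRequest(userPrompt)),
  });
  const data = await res.json();
  return data.choices[0].message.content.trim(); // "code" or "conversation"
}
```

The important design point is that the classifier returns one of a small, fixed set of labels, which makes the routing step trivial.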

The upside to this approach is that it uses natural language to handle nuance and false positives brilliantly, but the downside is that it requires additional API calls, which cost money and add a little latency to your application.

Also, as a side note here, I’m using an LLM template that we built at Nebulum. It’s designed for founders who want to create custom AI platforms without starting from scratch. If you’re interested, I’ve linked it below—it’s fully customizable, so you can adjust the colors, enable RAG, fine-tune AI models, build multi-agent systems, and more. The template includes full documentation to help you get started. It’s saved us a ton of time on projects, so I figured it might be useful for some of you.

Function Calling

So once we’ve used AI to classify the user prompt, we can now route it to the appropriate function. I’m going to use Bubble to do this, and while there aren’t functions in Bubble like there are in JavaScript, there are workflows, which are conceptually very similar.

Essentially, we can think of these as pillars of logic. In this first pillar, I can create the logic for when the classifier marks the user prompt as a conversation. Similarly, if the classifier marked it as code, we can trigger the coding agent. What we’ve done here is given ourselves the ability to offload the work to the optimized AI. For example, the coding agent we use might be a completely different model than the conversational model.

And not only that, but now we can really do whatever we want within these pillars. Keep in mind, we can also have as many pillars as we want. We could have a workflow dedicated to fetching real-time financial data, another for sending emails, another for summarizing today’s news, and so on.

Based on the complexity of the task and what needs to be done, these could be either custom events or backend workflows. In Bubble, for example, you could just create a folder called “agents” and then create new custom events for each task that needs to be completed. Here, for example, you can see that I have one conversational workflow and one coding workflow. This allows me to optimize each pillar for the task at hand.

And we can trigger each pillar by setting up some conditional logic. For example, notice here that I’m only triggering my conversational workflow (or function) when the classifier AI marked the user prompt as a conversation. Similarly, I’m only triggering the coding job when the classifier AI marked the user prompt as code.
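In JavaScript terms, the conditional triggering described above is just a lookup from classifier label to handler. The handler bodies here are stubs of my own; in practice each would call a model optimized for its task:

```javascript
// Router: map each classifier label to its "pillar" of logic.
// Handlers are illustrative stubs; each pillar could use a different model.
const routes = {
  code: (prompt) => `coding agent handles: ${prompt}`,
  conversation: (prompt) => `conversation agent handles: ${prompt}`,
};

function route(classification, prompt) {
  // Fall back to conversation if the classifier returns something unexpected.
  const handler = routes[classification] ?? routes.conversation;
  return handler(prompt);
}
```

Adding a new pillar (fetching financial data, sending email, and so on) is then just adding one more entry to the `routes` map.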

There's a lot more we could talk about when it comes to agentic AI, especially around setting up and sending parameters to individual tools. But that's beyond the scope of this tutorial. For now, I just wanted to show you different ways to classify and route user queries toward specific tools.

So that’s all I have for you today. If you found this tutorial helpful and want to stay up to date with advances in the AI space, be sure to subscribe to our channel. Similarly, if you want to take your knowledge of building deep tech platforms further, check out our free newsletter called The Forge. In this newsletter, we explore the bleeding edge of frontier technology and how we as builders can create meaningful tech.

Also, if you’re interested in building AI applications or agentic AI, we have an in-depth online course for do-it-yourselfers and an agency for those who want to delegate the build. Links to both can be found below.

Thanks for stopping by today.
