The Age of the Workflow Agent, and a Tutorial on n8n
My weekend excursion with a local n8n setup, and why you should try it too
Over the past few weeks, I have been diving deep into agents. Not the sci-fi kind, but something just as transformative—AI agents that work alongside us, automate complex tasks, and integrate seamlessly into our daily workflows.
The conversation in tech circles has shifted. We are no longer obsessing over which large language model is best. That war of benchmarks and tokens-per-second is fading. The real innovation is happening in how we use those models—how we build agents that make them actionable. Especially workflow agents, which can take an English prompt, understand it, and launch a multi-step automation across the tools you already use.
This is where the future is heading. And it is happening fast.
What Is a Workflow Agent?
A workflow agent is like an AI-powered assistant that does more than answer questions. It listens to a natural-language request, interprets it using a large language model, and then performs a series of automated actions. These can span APIs, webhooks, databases, messaging apps, and more.
Think of it this way. Instead of writing a rigid and nerdy command like:
```bash
grep -i "artifact" source.txt
```
You can now say:
“Find a Magic: The Gathering card that costs two mana and can destroy an artifact or a creature.”
The agent takes this sentence, interprets the intent, converts it into a search query, looks up a database or external API, and then sends the result to a messaging tool like Discord or Slack.
You are not relying on ChatGPT or Perplexity to do the whole job. You are building a custom system that delegates specific pieces of work to different services and ties it all together through a platform like n8n or Zapier.
Why This Matters
We are entering a new software paradigm. One where individuals and small teams can build personalized tools using natural language and a bit of glue code. And the most exciting part is this: the last mile is yours.
No model knows your exact workflow or the friction points in your process. But you do. Workflow agents let you automate that friction away.
Tutorial: Build Your First Workflow Agent
Let me walk you through the basic setup I used to get started. You can follow this as a weekend project, especially if you are curious about LLM automation, developer tools, or just want to take back control from the SaaS jungle.
Who This Is For and What You’ll Learn
The following guide is for builders, tinkerers, engineers, and curious technologists who are excited about large language models but want to go beyond simple prompts. If you’ve ever wished you could automate the tedious parts of your digital workflow, or dreamed of building a smart assistant that actually does things for you, this tutorial is your on-ramp.
Tools It Helps to Know
You’ll get the most value from this guide if you:
Have basic familiarity with command line tools (Terminal, PowerShell, etc.)
Know what Docker is, or are willing to copy-paste and learn along the way
Understand what APIs are and have used tools like Postman or cURL before
Have used the developer platform of Discord, Slack, or LINE
Are comfortable reading a bit of YAML or JSON
You do not need to be an AI expert or full-time developer. If you’ve ever spun up a personal project, built a Notion workspace, or written a Python script, you’re more than ready.
What You’ll Build
In this tutorial, you’ll:
Set up a lightweight Linux development environment on your Windows machine
Self-host n8n, a visual workflow builder with a large community of users who share templates and workflows for use cases you may not have considered
Connect it to a large language model via Google AI Studio
Expose your local agent to the public internet using Ngrok
Create a workflow that accepts a natural-language query, interprets it using an LLM, looks up relevant information, and sends the result to Slack or Discord
By the end, you’ll have a fully working workflow agent. It listens to your instructions, understands your intent, takes action, and replies in your favorite chat tool. And most importantly, you’ll understand how to control it, extend it, and keep costs in check. Some tutorials on YouTube have a less complicated setup, but this one is meant to be robust and extensible, giving you greater control and lower maintenance costs in the long run.
This is not just about automation. It’s about reclaiming your time and building tools that actually work for you.
Let’s get started.
1. Prepare Your Development Environment
Start with your machine setup. If you are using Windows, follow this path to gain maximum flexibility for NVIDIA GPU acceleration and self-hosted tools.
Step 1: Install WSL2 (Windows Subsystem for Linux)
This gives you a native Linux experience on Windows, with the ability to later run models locally and tap into your GPU.
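On current Windows 10 and 11 builds, this is a one-liner run from an elevated PowerShell prompt; by default it also installs Ubuntu, which covers Step 2:

```bash
# Run in an elevated PowerShell prompt. On current Windows builds this
# enables WSL2 and installs the default Ubuntu distribution in one step.
wsl --install
# Reboot when prompted, then set up your Linux username and password.
```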
Step 2: Install Ubuntu via Microsoft Store
Installing Ubuntu as a virtual machine lets you later take advantage of native GPU passthrough, especially if you plan to run LLMs locally through tools like Ollama or LM Studio.
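Either way, you can confirm that the distribution is registered and running under WSL 2 rather than WSL 1:

```bash
# Run in PowerShell. The VERSION column should read 2.
wsl -l -v
```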
Pro tip: Avoid assuming Conda (Anaconda) is using your GPU efficiently. If you're serious about local inference, use tools built for CUDA and hardware acceleration.
2. Set Up Your Workflow Platform
We will use n8n, a powerful open-source automation tool that supports thousands of integrations.
Step 3: Use Docker Compose to Self-Host n8n
You can run n8n locally for free, which is great for experimentation and hobby projects. Here's a simple Docker Compose snippet to get you started:
```yaml
version: "3"
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    volumes:
      - ~/.n8n:/home/node/.n8n
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=yourpassword
```
Run `docker-compose up` and you will have n8n running at `http://localhost:5678`.
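As a quick sanity check before opening the browser, you can probe the port from your WSL shell; any HTTP response means the container is up:

```bash
# HEAD request against the local n8n instance. A redirect or an auth
# challenge still counts as success here.
curl -I http://localhost:5678
```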
3. Add LLM Power, with Cost Awareness
Step 4: Use Google AI Studio for Free LLM Access
Google AI Studio offers a generous free tier for the Gemini API, and new Google Cloud accounts come with $300 in free credits if you outgrow it. It’s a great starting point for building LLM-powered workflows. Once you’ve set up a Gemini model, you’ll be issued an API key; this key authenticates your requests and must be kept secure.
Tip: Never hardcode your API key directly into a workflow or script. Use environment variables or secrets management tools where possible.
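As an illustration, here is a minimal sketch of calling the Gemini API from the shell with the key kept in an environment variable. The variable name and the model name are assumptions; substitute whichever Gemini model your AI Studio account currently offers:

```bash
# Minimal sketch: keep the key out of your scripts and workflows.
# GEMINI_API_KEY is an assumed name; ideally export it from ~/.bashrc
# rather than typing it inline.
export GEMINI_API_KEY="paste-your-key-here"

# The model name below is an assumption; check AI Studio for what is current.
curl -s "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent" \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: ${GEMINI_API_KEY}" \
  -d '{"contents": [{"parts": [{"text": "Say hello in five words."}]}]}'
```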
Step 5: Monitor Usage and Billing Dashboards
Whether you are using Google AI Studio, Groq, or another provider like OpenAI or Anthropic, be sure to check your usage dashboard regularly—ideally every day during development. These dashboards often include charts showing request volume, costs, latency, and error rates.
This is especially important if you are working with loops, triggers, or agents that run on a schedule. You do not want to accidentally hammer the API with hundreds of unnecessary calls and burn through your credits (or worse, hit a paid tier).
Here’s what to watch for:
Daily request volume
Unexpected spikes in usage
Token or context length growth
Latency and error rates
Some platforms also support alerts if costs exceed a threshold—enable these if available.
4. Make It Accessible
Step 6: Install Ngrok for Public URL Access
Since you are running n8n locally, webhooks from external services like Slack or Discord cannot reach it unless you expose it.
Install Ngrok and run:
```bash
ngrok http 5678
```
This gives you a public-facing URL that bridges your local environment to the outside world.
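Once the tunnel is up, any Webhook node you create in n8n becomes reachable through the forwarding URL ngrok prints. Here is a hedged example; the subdomain and the webhook path are placeholders for whatever your tunnel and Webhook node actually use:

```bash
# Placeholder URL: substitute the https forwarding address from ngrok's
# console and the path configured on your n8n Webhook node.
curl -X POST "https://abc123.ngrok-free.app/webhook/mtg-query" \
  -H "Content-Type: application/json" \
  -d '{"question": "Find a two-mana card that destroys an artifact or a creature"}'
```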
5. Complete the Loop with Messaging Integration
Step 7: Connect the Boxes with Arrows
Before we push results to Slack or Discord, let’s talk about how you actually build a workflow in n8n. Once n8n is running, you’ll see a visual canvas where you can drag and drop nodes to create your automation. Each node represents a step: receiving input, calling an API, or sending a message. You start with a trigger node (manual, schedule, webhook, etc.), then connect it to logic nodes like HTTP Request, Function, or Set. You click the small circles on each node to draw lines and link them together. This no-code interface is remarkably flexible: you can preview outputs at each step, map data visually, and insert conditional logic or loops. Think of it as building a smart flowchart, except every box actually runs code behind the scenes.

In our running example, we ask a question about a Magic: The Gathering card in the n8n console, get the answer from Google’s Gemini, and push the response to Discord. This means we are relying on the Google LLM for both translating the question and finding the card.
A next step, not covered in this tutorial, would be to add another node: have the LLM transform the original English question into a proper API query only, then issue an API call to Scryfall to find the matching Magic cards. The reason is that an LLM’s knowledge of the real world is not always up to date. When new Magic cards are released, the information may not make it into Google’s model until much later. If you want the workflow to find newer cards and newer data, assign the lookup task to another system rather than to the LLM itself, as in the sketch below.
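Here is roughly what that direct lookup could look like. The query string is an illustrative guess at Scryfall search syntax for our example card, not output from a real workflow:

```bash
# --get plus --data-urlencode lets curl handle URL encoding of the query.
# cmc:2 filters to two-mana cards; o:"..." matches oracle text.
curl --get "https://api.scryfall.com/cards/search" \
  --data-urlencode 'q=cmc:2 (o:"destroy target artifact" or o:"destroy target creature")'
```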
Step 8: Push Results to Slack or Discord
Set up a Slack bot or Discord webhook to send the result of your workflow. For example, after processing the LLM query and retrieving search results, you can post the final response to your own chat channel.
Use the built-in nodes in n8n for Slack, Discord, or even WhatsApp via Twilio.
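If you want to see the raw mechanics first, a Discord incoming webhook is just an HTTP POST. The sketch below uses a placeholder webhook URL, and the hand-written answer happens to be a real card that fits our example query:

```bash
# Placeholder URL: create an incoming webhook under your Discord server's
# Integrations settings and paste the real URL here.
curl -X POST "https://discord.com/api/webhooks/<id>/<token>" \
  -H "Content-Type: application/json" \
  -d '{"content": "Abrade (1R, instant): deals 3 damage to target creature, or destroys target artifact."}'
```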
6. Cost Control and Shutdown Tips
When you’re done building or testing for the day, make sure to:
Deactivate your workflows in n8n
Click the toggle to set your flow to inactive. This prevents automatic triggers or loops from firing when you are not watching.
Stop your Docker container
If you're self-hosting n8n via Docker Compose, shut it down with:
```bash
docker-compose down
```
This ensures you're not consuming compute resources or running reverse proxies like Ngrok in the background.
Disable or rotate unused API keys
If you’re no longer using a service, disable the key via the provider dashboard to avoid misuse or unexpected traffic.
Final Thoughts
You now have the building blocks of a true agent-powered workflow. You write the logic, the LLM handles the interpretation and translation, and tools like n8n glue everything together.
This is not about replacing people. It is about reclaiming time. About automating the boring parts so we can spend more time on what matters. And for builders, this is a moment of leverage.
If you have been waiting to experiment with LLMs, stop waiting. This setup costs you almost nothing but time and curiosity. You are no longer a passive user of AI. You are now a creator.