Deploy Hermes Agent Locally on Windows with WSL + Ollama and Connect Telegram

A practical local setup flow for Hermes Agent on Windows: install WSL and Ubuntu, add Ollama and Gemma 4, then complete a basic Telegram connection.

If you want to run Hermes Agent on Windows with as little friction as possible, a practical path is:

  • keep Windows as the host system
  • run Ubuntu inside WSL
  • use Ollama to serve the local model
  • let Hermes Agent connect directly to the local Ollama endpoint

This approach keeps the environment relatively clean, lets you run most commands in a Linux-style workflow, and avoids preparing a separate Linux machine.

Overall flow

You can split the setup into four steps:

  1. Enable WSL and install Ubuntu
  2. Install Python, Node.js, Git, and other basics inside Ubuntu
  3. Install Ollama and pull a local model
  4. Install Hermes Agent, then connect Telegram

If your goal is simply to get Hermes Agent running, you will already be close by the end of step 3.

1. Install WSL and Ubuntu

Run this in PowerShell with administrator privileges:

wsl --install

After the installation finishes, restart the PC, then continue with Ubuntu:

wsl --install -d Ubuntu

After that, open Ubuntu in WSL. Most of the remaining commands are run there.
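Before continuing, it can help to confirm that you really are in the Ubuntu/WSL shell and not a native or remote Linux session. The snippet below is an illustrative sanity check (not part of the official setup): WSL kernels include "microsoft" in their version string.

```shell
# Sanity check: WSL kernels report "microsoft" in /proc/version,
# so this tells you whether the current shell is running inside WSL.
if grep -qi microsoft /proc/version 2>/dev/null; then
  echo "This shell is running inside WSL"
else
  echo "This shell is NOT running inside WSL"
fi
```

If it prints the second line, you are probably still in PowerShell or on another Linux box, and the rest of the commands belong in the Ubuntu terminal instead.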

2. Update Ubuntu and install the base environment

Update the system first:

sudo apt update
sudo apt upgrade -y

Then install Python, extraction tools, Node.js, and Git.

Install Python

sudo apt install python3-pip python3-venv -y

Install zstd

sudo apt install -y zstd

Install Node.js

curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt install -y nodejs

Install Git

sudo apt update
sudo apt install -y git

You can quickly verify the installation with:

node -v
npm -v
git --version

3. Install Ollama and pull Gemma 4

Install Ollama:

curl -fsSL https://ollama.com/install.sh | sh

If you want a local model for Hermes Agent, starting with Gemma 4 is reasonable.

For example:

ollama run gemma4:e4b

If your machine is weaker, you can also try:

ollama run gemma4:e2b

Larger variants include:

ollama run gemma4:26b
ollama run gemma4:31b

For most normal Windows + WSL setups, gemma4:e4b is usually the more practical starting point.
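If you are unsure which variant to start with, the rule of thumb above can be sketched as a small helper. This is a hypothetical convenience function, and the RAM thresholds are illustrative guesses rather than official requirements:

```shell
# Hypothetical helper: pick a Gemma variant from total RAM in GiB.
# The thresholds are rough illustrative guesses, not official requirements.
pick_model() {
  ram_gib=$1
  if   [ "$ram_gib" -ge 32 ]; then echo "gemma4:26b"
  elif [ "$ram_gib" -ge 16 ]; then echo "gemma4:e4b"
  else                             echo "gemma4:e2b"
  fi
}

pick_model 16   # prints gemma4:e4b
pick_model 8    # prints gemma4:e2b
```

Whatever you choose, remember the tag you pulled; you will enter the same model name into Hermes Agent in the next step.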

4. Install and configure Hermes Agent

Install it with:

curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash

After installation, point it to the local Ollama endpoint:

http://127.0.0.1:11434

Use the local model name you actually installed, for example:

gemma4:e4b
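To confirm that Hermes Agent will actually be able to reach that endpoint, you can probe Ollama's `/api/tags` route, the standard Ollama HTTP endpoint that lists installed models. This is just a quick diagnostic sketch:

```shell
# Probe the default local Ollama endpoint; /api/tags lists installed models.
# Adjust OLLAMA_URL if you changed the port during setup.
OLLAMA_URL="http://127.0.0.1:11434"
if curl -fsS --max-time 3 "$OLLAMA_URL/api/tags" >/dev/null 2>&1; then
  echo "Ollama endpoint is reachable"
else
  echo "Ollama endpoint is not reachable"
fi
```

If the endpoint is not reachable, start Ollama (for example with `ollama serve`) before continuing with the Hermes Agent setup.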

If the installer asks you to refresh the shell, run:

source ~/.bashrc

Common Hermes Agent commands

These are the commands you will use most often:

Start

hermes

Re-enter setup

hermes setup

Configure the chat gateway

hermes setup gateway

Update

hermes update

Basic Telegram connection steps

If you want Hermes Agent to send and receive messages through Telegram, the core step is still:

hermes setup gateway

Then prepare the two Telegram-side items you need:

  • create a bot with BotFather
  • get your User ID with @userinfobot

Once you have those two items, enter them in the Hermes Agent gateway setup.
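Before pasting the token into the gateway setup, it can help to eyeball its shape. BotFather tokens follow a `<numeric id>:<secret>` pattern; the checker below is a hypothetical convenience for catching copy-paste mistakes, not part of Hermes Agent or the Telegram API:

```shell
# Hypothetical sanity check for a BotFather token's general shape:
# a numeric bot id, a colon, then a non-empty secret.
check_token_shape() {
  case "$1" in
    [0-9]*:?*) echo "looks like a bot token" ;;
    *)         echo "does not look like a bot token" ;;
  esac
}

check_token_shape "123456789:AAFakeExampleTokenFakeExampleToken0"   # looks like a bot token
check_token_shape "hello"                                           # does not look like a bot token
```

This only checks the rough format; the real test is whether Telegram accepts the token once the gateway is configured.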

Who this setup fits

This workflow is a good fit if:

  • Windows is your main desktop system
  • you do not want to maintain a separate Linux host
  • you want to get a local Agent running first, then expand to chat platforms
  • you prefer local models instead of depending on cloud APIs

If you mainly want to experience a local Agent rather than build a full production deployment immediately, this path is already practical enough.

A few things to keep in mind

  • WSL is still a compatibility layer, so in extreme cases it may not behave exactly like native Linux
  • whether a large model runs smoothly still depends on your RAM, VRAM, and CPU / GPU
  • gemma4:e4b is a realistic starting point, but actual experience still depends on the machine
  • Hermes Agent platform integration is an extension step; getting the local model path working first, then adding Telegram, is usually more stable

Conclusion

If you want to deploy Hermes Agent locally on Windows with as little friction as possible, the smoother order is:

WSL -> Ubuntu -> Ollama -> Gemma 4 -> Hermes Agent -> Telegram

Get the local model running first, then add the gateway integration. That usually gives you a much higher success rate. For most users, this is easier to troubleshoot than piling on every component at the beginning, and it also leaves room for later expansion.

Original reference

This post is rewritten and organized based on:

记录并分享 ("Record and Share")