Introduction

You don’t need a $600 Mac Mini sitting under your desk to run a powerful AI agent.

All you need is a $5 Linode instance, Ubuntu, and the right stack:

  • OpenClaw (AI agent framework)
  • Ollama (LLM runtime)
  • Kimi 2.5 (Cloud model)
  • Telegram integration for remote control

In this guide, you’ll deploy a fully functional AI agent in the cloud that you can interact with from anywhere, securely and persistently, without depending on OpenAI APIs.

Architecture Overview

Telegram App
  → Telegram Bot API
  → OpenClaw Gateway (Linode)
  → Ollama (localhost:11434)
  → Kimi-K2.5 (Cloud LLM)
  → Response back to Telegram
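Under the hood, the round trip boils down to one HTTP call from the gateway to Ollama. Here is a minimal sketch of that call using Ollama's /api/chat endpoint; the exact request OpenClaw constructs internally may differ:

```shell
# The kind of chat payload Ollama's /api/chat endpoint expects.
# (Sketch only: OpenClaw builds its own requests internally.)
payload='{
  "model": "kimi-k2.5:cloud",
  "messages": [{"role": "user", "content": "Hello from Telegram"}],
  "stream": false
}'

# Sanity-check the payload locally before sending it anywhere:
echo "$payload" | grep -o '"model": "[^"]*"'

# On the server, the gateway effectively does the equivalent of:
#   curl -s http://localhost:11434/api/chat -d "$payload"
```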

Step 1: Create and Prepare Your Linode Server

Deploy Linode

  1. Go to Linode Dashboard
  2. Create → Linode
  3. Choose Ubuntu 22.04 LTS
  4. Select region
  5. Add SSH key
  6. Deploy

Initial Server Setup

SSH into your server:

ssh root@YOUR_SERVER_IP

Update packages:

apt update && apt upgrade -y

Install essential utilities:

apt install -y curl wget git build-essential

(Optional but recommended) Create a non-root user:

adduser aiuser
usermod -aG sudo aiuser
su - aiuser

Step 2: Install Ollama

Ollama will handle the LLM execution.

curl -fsSL https://ollama.com/install.sh | sh

Verify installation:

ollama --version

Step 3: Run Kimi 2.5 via Ollama Cloud

Launch the model:

ollama run kimi-k2.5:cloud

If you see:

could not connect to ollama server

Start Ollama:

ollama serve

Run in background:

nohup ollama serve > ollama.log 2>&1 &

Verify API:

curl http://localhost:11434/api/tags

You should see kimi-k2.5:cloud listed in the JSON output.
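If you want to check for the model programmatically, say from a health-check script, you can parse the /api/tags response. A sketch, assuming the `{"models":[{"name": ...}]}` response shape that current Ollama versions return; the sample JSON below stands in for a live call:

```shell
# Sample /api/tags response (assumed shape); in production you would use:
#   tags=$(curl -s http://localhost:11434/api/tags)
tags='{"models":[{"name":"kimi-k2.5:cloud","size":0}]}'

# Fail loudly if the model is missing:
if echo "$tags" | grep -q '"name":"kimi-k2.5:cloud"'; then
  echo "model available"
else
  echo "model missing" >&2
  exit 1
fi
```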

Step 4: Install OpenClaw

Install using the official script:

curl -fsSL https://openclaw.ai/install.sh | bash

Verify:

openclaw --version

Step 5: Onboard OpenClaw

Run onboarding wizard:

openclaw onboard --install-daemon

This will:

  • Install the OpenClaw Gateway
  • Configure authentication
  • Set up daemon mode
  • Allow channel configuration

Check gateway status:

openclaw gateway status

Open dashboard locally:

openclaw dashboard

Default URL:

http://127.0.0.1:18789
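Since the dashboard binds to loopback on the server, it is not reachable directly from your browser. One way to open it from your local machine is an SSH tunnel (a sketch; substitute your own user and server IP):

```shell
# Forward local port 18789 to the dashboard's loopback port on the server.
ssh -N -L 18789:127.0.0.1:18789 aiuser@YOUR_SERVER_IP
# While the tunnel is up, open http://127.0.0.1:18789 locally.
```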

Step 6: Configure OpenClaw to Use Ollama (Kimi 2.5)

Set Ollama as the model provider by editing your OpenClaw configuration (usually under ~/.openclaw/):

[model]
provider = "ollama"
model = "kimi-k2.5:cloud"
endpoint = "http://localhost:11434"

Restart gateway:

openclaw gateway restart

Your AI agent is now powered by Kimi 2.5.
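A quick way to confirm the wiring end to end is to prompt the model directly on the server (assumes the model from Step 3 is available and Ollama is running):

```shell
# One-shot prompt; prints a short reply if Ollama and the model are healthy.
ollama run kimi-k2.5:cloud "Reply with the single word OK"
```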

Step 7: Add Telegram Integration

OpenClaw supports Telegram as one of the communication channels, allowing you to control your AI agent remotely.

Instead of manually configuring tokens and chat IDs, follow the official Telegram setup guide provided by OpenClaw:

👉 Official Telegram Setup Documentation:
https://docs.openclaw.ai/channels/telegram

That guide walks through:

  • Creating a Telegram bot
  • Generating a bot token
  • Configuring the OpenClaw Telegram channel
  • Enabling secure message routing

Once configured, restart the gateway:

openclaw gateway restart

Now you can send messages to your Telegram bot and interact directly with your AI agent.

Step 8: Run Ollama as a Systemd Service (Production Mode)

Create service file:

sudo nano /etc/systemd/system/ollama.service

Paste:

[Unit]
Description=Ollama Service
After=network.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=aiuser
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target

Enable service:

sudo systemctl daemon-reload
sudo systemctl enable ollama
sudo systemctl start ollama
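You can then confirm the service took over from the ad-hoc nohup process started in Step 3 (stop that process first):

```shell
# Confirm systemd is managing Ollama and the API answers.
sudo systemctl is-active ollama
sudo journalctl -u ollama --no-pager -n 20   # recent logs
curl -s http://localhost:11434/api/tags >/dev/null && echo "API up"
```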

Security Best Practices

  • Enable the UFW firewall:
    ufw allow ssh
    ufw enable
  • Do NOT expose port 11434 publicly
  • Keep Ollama bound to localhost
  • Rotate Telegram tokens if needed
  • Regularly update your server
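To verify the "do not expose port 11434" rule is actually holding, check which address Ollama is bound to. A sketch using ss from iproute2, run on the server:

```shell
# Ollama's default bind is 127.0.0.1:11434. Anything listening on
# 0.0.0.0:11434 (or a public address) is exposed to the internet.
listening=$(ss -tln 2>/dev/null | awk '$4 ~ /:11434$/ && $4 !~ /^127\.0\.0\.1:/' | wc -l)
if [ "$listening" -eq 0 ]; then
  echo "ok: port 11434 is not exposed on a public interface"
else
  echo "warning: port 11434 is listening on a non-loopback address" >&2
fi
```

Note this check only inspects IPv4 bindings; an IPv6 loopback bind such as [::1]:11434 would trigger the warning even though it is harmless.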

Why This Stack Is Powerful

With this setup, you get:

  • A cloud-hosted AI agent
  • No paid LLM dependency
  • Telegram remote control
  • Persistent daemonized runtime
  • Scalable infrastructure
  • Extremely low hosting cost

All running on a simple Linode instance.

You don’t need expensive hardware.
You don’t need a Mac Mini.
You just need a small Linux server and the right architecture.

Conclusion

Running OpenClaw on a $5 Linode instance with Kimi 2.5 via Ollama Cloud gives you a production-ready AI agent environment that is:

  • Cost-efficient
  • Fully controllable
  • Cloud-accessible
  • Telegram-enabled
  • Scalable

If you’re building autonomous AI systems, trading bots, research agents, or DevOps assistants, this architecture provides a strong, modern foundation.

Written By
Fareeth John

I’m an Enterprise Architect at Akamai Technologies with over 15 years of experience in mobile app development across iOS, Android, Flutter, and cross-platform frameworks. I’ve built and launched 45+ apps on the App Store and Play Store, working with technologies like AR/VR, OTT, and IoT.

My core strengths include solution architecture, backend integration, cloud computing, CDN, CI/CD, and mobile security, including Frida-based pentesting and vulnerability analysis.

In the AI/ML space, I’ve worked on recommendation systems, NLP, LLM fine-tuning, and RAG-based applications. I’m currently focused on Agentic AI frameworks like LangGraph, LangChain, MCP, and multi-agent LLMs to automate tasks.
