
OpenClaw on Mac Mini: The Ultimate Always-On Setup

nacre.sh Team · May 2, 2026 · 9 min read

Set up OpenClaw on a Mac Mini for a reliable always-on AI agent. Covers installation, auto-start configuration, local LLM options, and power management.

openclaw mac mini setup, openclaw macos, always on ai agent, local ai agent

The OpenClaw Mac Mini setup has become one of the most popular configurations in the OpenClaw community. The Mac Mini M4 offers exceptional efficiency (around 8–12W idle, 20–30W under load), enough unified memory to run local LLMs via Ollama, and Apple Silicon's Neural Engine for accelerated inference. Combined with macOS's reliable sleep/wake management, it makes an ideal always-on personal AI agent server that costs only a few dollars a month in electricity.

Why Mac Mini for OpenClaw?

  • Low idle power: 8–12W vs 30–60W for an equivalent Intel/AMD mini PC
  • Unified memory: 16GB or 24GB of fast unified memory, ideal for local LLMs
  • Silent operation: No active cooling needed at light loads
  • macOS stability: Low crash rates, excellent sleep/wake behavior
  • Neural Engine: Hardware acceleration for certain AI inference tasks

Hardware Recommendation

The M4 Mac Mini with 16GB unified memory ($699) is the sweet spot for OpenClaw with cloud LLMs. If you also want to run local models like Llama 3.1 8B alongside the agent, the 24GB model is recommended. Note that 70B-class models need roughly 35–40GB even at 4-bit quantization, so they're out of reach on either configuration.
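A quick rule of thumb for whether a model fits in memory: the weights alone take roughly (parameters × bits per weight) ÷ 8 bytes, before KV-cache and runtime overhead. A minimal sketch:

```python
def weight_gb(params_billions: float, bits: int) -> float:
    """Approximate weight memory in GB: params * bits / 8 bytes (no KV cache or overhead)."""
    return params_billions * bits / 8

for params, bits in [(8, 4), (8, 16), (70, 4)]:
    print(f"{params}B model @ {bits}-bit ≈ {weight_gb(params, bits):.0f} GB of weights")
# 8B @ 4-bit ≈ 4 GB, 8B @ 16-bit ≈ 16 GB, 70B @ 4-bit ≈ 35 GB
```

So an 8B model at 4-bit fits comfortably in 16GB alongside the OS, while a 70B model exceeds 24GB even when quantized.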

Step 1: Install Homebrew and Python

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install python@3.12 git

Step 2: Install OpenClaw

git clone https://github.com/openclaw/openclaw.git
cd openclaw
python3.12 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python -m openclaw onboard

Step 3: Configure Auto-Start with launchd

Create a launchd plist to start OpenClaw automatically at login:

cat > ~/Library/LaunchAgents/sh.openclaw.agent.plist << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>sh.openclaw.agent</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/YOUR_USERNAME/openclaw/venv/bin/python</string>
        <string>-m</string>
        <string>openclaw</string>
        <string>start</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>WorkingDirectory</key>
    <string>/Users/YOUR_USERNAME/openclaw</string>
    <key>StandardOutPath</key>
    <string>/Users/YOUR_USERNAME/openclaw/openclaw.log</string>
    <key>StandardErrorPath</key>
    <string>/Users/YOUR_USERNAME/openclaw/openclaw.err.log</string>
</dict>
</plist>
EOF
launchctl load ~/Library/LaunchAgents/sh.openclaw.agent.plist

Replace YOUR_USERNAME with your macOS username.
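If you'd rather generate the plist than hand-edit the heredoc, Python's plistlib can write the same structure and spare you the username find-and-replace. A sketch (the write line is commented out so you can review the output first):

```python
import pathlib
import plistlib

home = pathlib.Path.home()
agent = {
    "Label": "sh.openclaw.agent",
    "ProgramArguments": [
        str(home / "openclaw/venv/bin/python"), "-m", "openclaw", "start",
    ],
    "RunAtLoad": True,
    "KeepAlive": True,
    "WorkingDirectory": str(home / "openclaw"),
}
plist_path = home / "Library/LaunchAgents/sh.openclaw.agent.plist"
# plist_path.write_bytes(plistlib.dumps(agent))  # uncomment to install
print(plistlib.dumps(agent).decode())
```

After writing the file, load it with the same `launchctl load` command as above.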

Step 4: Configure Power Management

Prevent the Mac Mini from sleeping when idle:

sudo pmset -a sleep 0 disksleep 0

Enable automatic restart after power failure:

sudo pmset -a autorestart 1

Step 5: (Optional) Add Ollama for Local LLMs

brew install ollama
brew services start ollama   # keeps Ollama running in the background across reboots
ollama pull llama3.1:8b

Then in OpenClaw's settings, point to http://localhost:11434 as your Ollama endpoint.
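To confirm the endpoint is reachable, you can hit Ollama's /api/chat route directly. The sketch below builds the request but leaves the network call commented out, so it's safe to run even without the server up:

```python
import json
import urllib.request

OLLAMA = "http://localhost:11434"

payload = {
    "model": "llama3.1:8b",
    "messages": [{"role": "user", "content": "Say hello in five words."}],
    "stream": False,
}
req = urllib.request.Request(
    f"{OLLAMA}/api/chat",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# resp = json.load(urllib.request.urlopen(req))  # uncomment with Ollama running
# print(resp["message"]["content"])
print(req.full_url, len(req.data), "bytes")
```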

Frequently Asked Questions

What's the annual electricity cost of running a Mac Mini 24/7?

At an average of 10W idle and $0.15/kWh, a Mac Mini running 24/7 costs approximately $13/year in electricity — far cheaper than a cloud VPS.
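The arithmetic behind that estimate, using the figures above:

```python
watts = 10             # average idle draw
rate = 0.15            # $ per kWh
kwh_per_year = watts / 1000 * 24 * 365   # convert W to kW, scale to a year
annual_cost = kwh_per_year * rate
print(f"{kwh_per_year:.1f} kWh/year -> ${annual_cost:.2f}/year")
# 87.6 kWh/year -> $13.14/year
```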

Should I use nacre.sh instead of a Mac Mini?

If you travel frequently, nacre.sh ($12/month) is more reliable since your Mac Mini's availability depends on your home internet and power. If you're usually home and want local LLM support, the Mac Mini is excellent.

Can I run multiple OpenClaw instances on one Mac Mini?

Yes. Run each on a different port and in a separate virtual environment. A 16GB Mac Mini can comfortably handle 2–3 instances running cloud LLMs.

nacre.sh

Run OpenClaw without the server headaches

Dedicated instance, automatic TLS, nightly backups, and 290+ LLM integrations. Live in under 90 seconds from $12/month.

Deploy your agent →
