# Deploy and Host GHL MCP on Railway

GHL MCP is a Model Context Protocol server that enables AI assistants like Claude and ChatGPT to interact with GoHighLevel's CRM, calendar, and automation features through a standardized tool interface.
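In MCP terms, each GoHighLevel capability is exposed as a tool with a name, a description, and a JSON Schema for its inputs. As a hedged sketch (the field names follow the MCP tool schema; this particular descriptor is illustrative, not copied from the server's source):

```javascript
// Illustrative MCP tool descriptor for a GHL contact search.
// Shape follows the MCP tool schema: name, description, inputSchema.
const searchContactsTool = {
  name: 'search_contacts',
  description: 'Search GoHighLevel contacts by name, email, or phone.',
  inputSchema: {
    type: 'object',
    properties: {
      query: { type: 'string', description: 'Free-text search term' },
      limit: { type: 'number', description: 'Maximum results to return' },
    },
    required: ['query'],
  },
};

console.log(searchContactsTool.name);
```

An AI client discovers descriptors like this one and then invokes the tool by name with arguments matching the schema.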

## About Hosting GHL MCP

Hosting GHL MCP on Railway provides a production-ready bridge between AI platforms (like ElevenLabs Agents, Claude Desktop) and GoHighLevel's API. The server exposes SSE endpoints for real-time communication and REST endpoints for tool execution, enabling AI agents to manage contacts, appointments, conversations, and automations. Railway handles the infrastructure complexity while providing automatic SSL, scaling, and monitoring, making it ideal for agencies running AI-powered booking systems and customer service automation.
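Under the hood, each SSE message is a plain-text frame terminated by a blank line. A minimal sketch of the framing (the helper name `sseFrame` is hypothetical, not part of the server's code):

```javascript
// Serialize a payload as a Server-Sent Events data frame: "data: <json>\n\n".
// `sseFrame` is an illustrative helper, not from the GHL MCP codebase.
function sseFrame(payload) {
  return `data: ${JSON.stringify(payload)}\n\n`;
}

const frame = sseFrame({ type: 'capabilities', tools: ['search_contacts'] });
console.log(JSON.stringify(frame));
```

The trailing double newline marks the end of one event, which is what lets clients parse a continuous SSE stream incrementally.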

## Common Use Cases

- **AI Phone Agents**: Enable ElevenLabs voice agents to book, reschedule, and cancel appointments directly in GoHighLevel
- **Automated Customer Support**: Connect chatbots to access customer history, create tickets, and send follow-up messages
- **Smart Booking Systems**: Build AI assistants that check calendar availability, manage conflicts, and optimize scheduling
- **Lead Management Automation**: Auto-qualify leads, update contact records, and trigger workflows based on AI conversations
- **Multi-channel Communication**: Send SMS/email confirmations and manage conversations across channels via AI

## Dependencies for GHL MCP Hosting

- **Node.js 18+**: Runtime environment for the server
- **GoHighLevel API Key**: Authentication for GHL services
- **Location ID**: Your GHL sub-account identifier
- **Express Server**: Web framework for HTTP/SSE endpoints
- **TypeScript**: For type-safe development
- **Axios**: HTTP client for GHL API calls
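As a sketch of how these dependencies fit together, an Axios request to GoHighLevel typically carries the API key as a Bearer token plus a `Version` header. The helper below is illustrative, and the header names follow GoHighLevel's v2 API conventions, so verify them against the API documentation:

```javascript
// Build an axios-style request config for a GoHighLevel v2 API call.
// Illustrative only -- confirm header names against the GHL API docs.
function ghlRequestConfig(path, apiKey, locationId) {
  return {
    baseURL: process.env.GHL_BASE_URL || 'https://services.leadconnectorhq.com',
    url: path,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      Version: '2021-07-28', // GHL v2 API version header (assumed)
      'Content-Type': 'application/json',
    },
    params: { locationId },
  };
}
```

A call such as `axios.request(ghlRequestConfig('/contacts/', apiKey, locationId))` would then hit the contacts endpoint scoped to your sub-account.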

### Deployment Dependencies

- [GoHighLevel API Documentation](https://highlevel.stoplight.io/docs/integrations/getting-started)
- [Railway Platform Account](https://railway.app)
- [MCP Protocol Specification](https://modelcontextprotocol.io)
- [ElevenLabs Agents Documentation](https://elevenlabs.io/docs/agents-platform/customization/tools)
- [Original GHL MCP Repository](https://github.com/mastanley13/GoHighLevel-MCP)

### Implementation Details

**Environment Variables Required:**
```bash
GHL_API_KEY=your-api-key
GHL_BASE_URL=https://services.leadconnectorhq.com
GHL_LOCATION_ID=your-location-id
PORT=3000
NODE_ENV=production
```

**SSE Endpoint Structure:**

```javascript
// Capability announcement for AI platforms
app.get('/sse', (req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive'
  });
  
  // Send available GHL tools
  res.write(`data: ${JSON.stringify({
    type: 'capabilities',
    tools: ['search_contacts', 'create_appointment', 'get_free_slots']
  })}\n\n`);
});
```

**Railway Configuration:**

```
Build Command: npm install && npm run build
Start Command: npm start
Health Check Path: /
Public Networking: HTTP
```
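Since a misconfigured deploy otherwise fails only when the first API call is made, it helps to check the required environment variables at startup. A minimal sketch (illustrative, not taken from the template; the variable list matches the one above):

```javascript
// Names of the environment variables the server cannot run without.
const REQUIRED_VARS = ['GHL_API_KEY', 'GHL_BASE_URL', 'GHL_LOCATION_ID'];

// Return the names of required variables absent from `env`.
function missingEnvVars(env) {
  return REQUIRED_VARS.filter((name) => !env[name]);
}

// At startup you would pass process.env and exit non-zero if anything is missing.
const missing = missingEnvVars({ GHL_API_KEY: 'demo-key', PORT: '3000' });
console.log(missing); // [ 'GHL_BASE_URL', 'GHL_LOCATION_ID' ]
```

Failing fast like this surfaces configuration problems in Railway's deploy logs instead of as opaque 401 errors later.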

## Why Deploy GHL MCP on Railway?

Railway is a single platform for deploying your entire infrastructure stack. Railway hosts your infrastructure so you don't have to deal with configuration, while still letting you scale it vertically and horizontally.

By deploying GHL MCP on Railway, you are one step closer to supporting a complete full-stack application with minimal burden. Host your servers, databases, AI agents, and more on Railway.

