Deploy Function Gemma

Lightweight function calling model based on Gemma 3

Deploy and Host FunctionGemma AI on Railway

FunctionGemma is Google's specialized function calling model that translates natural language commands into executable API actions. With just 270M parameters, it delivers near-instant latency for on-device agentic workflows, achieving 85% accuracy on mobile action tasks after fine-tuning. Running on as little as 550MB of RAM, it's perfect for privacy-focused, local-first applications that need reliable function calling without sending data to external services.

About Hosting FunctionGemma

Hosting FunctionGemma provides access to a lightweight, specialized model designed as a foundation for building custom function calling agents. This deployment handles natural language to API translation, tool selection, and structured function call generation. With its compact size and efficient architecture, it enables developers to create fast, private agents that execute commands locally, from smart home controls to mobile system actions, while maintaining complete data privacy.

Common Use Cases

  • Smart Home Automation: Build voice-controlled home automation systems that execute commands locally without cloud dependencies
  • Mobile Device Agents: Create on-device assistants that control system functions like calendar events, reminders, and settings
  • API Orchestration: Develop intelligent systems that route requests between local and remote functions based on complexity
  • Edge Computing Applications: Deploy lightweight agents on IoT devices and edge hardware for instant command processing
  • Custom Workflow Automation: Build specialized task automation tools fine-tuned for specific business processes
  • Privacy-First AI: Run function calling agents on your own infrastructure with total data privacy and offline capability

Dependencies for FunctionGemma Hosting

  • Ollama Runtime: Serves the FunctionGemma model through standardized API endpoints
  • Authentication Proxy: Secures access to your function calling service
  • Function Orchestration: Handles tool definition management and structured call generation

Implementation Details

This template is a derivative of the Ollama API template, pre-configured to serve FunctionGemma.

Usage example:

POST /api/chat

Headers: 
    Authorization: Bearer your-api-key  
    Content-Type: application/json

Body: {
  "model": "functiongemma",
  "messages": [
    {
      "role": "developer",
      "content": "You are a model that can do function calling"
    },
    {
      "role": "user",
      "content": "Turn on the living room lights"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "control_lights",
        "description": "Control smart home lighting",
        "parameters": {
          "type": "object",
          "properties": {
            "room": {"type": "string"},
            "action": {"type": "string"}
          },
          "required": ["room", "action"]
        }
      }
    }
  ]
}
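
As a rough sketch, the same request can be sent from Python with the requests library. The base URL and API key below are placeholders for your Railway service domain and the key configured in the authentication proxy, and "stream": false is added here as an assumption so the Ollama endpoint returns a single JSON response rather than a stream.

import json
import requests

# Placeholder values: replace with your Railway service URL and the API key
# configured in the template's authentication proxy.
BASE_URL = "https://your-service.up.railway.app"
API_KEY = "your-api-key"

payload = {
    "model": "functiongemma",
    "stream": False,  # request a single JSON response instead of a stream
    "messages": [
        {"role": "developer", "content": "You are a model that can do function calling"},
        {"role": "user", "content": "Turn on the living room lights"},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "control_lights",
                "description": "Control smart home lighting",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "room": {"type": "string"},
                        "action": {"type": "string"},
                    },
                    "required": ["room", "action"],
                },
            },
        }
    ],
}

response = requests.post(
    f"{BASE_URL}/api/chat",
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
response.raise_for_status()

# Ollama returns any generated tool calls under message.tool_calls.
message = response.json().get("message", {})
print(json.dumps(message.get("tool_calls", []), indent=2))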

The model uses a decoder architecture optimized for function calling rather than general conversation. It's designed to be fine-tuned for specific use cases to achieve production-ready accuracy. FunctionGemma supports single-turn and parallel function calling out of the box, generating structured function calls that can be parsed and executed by your application.
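A minimal dispatcher sketch is shown below for the parsing-and-execution step. It assumes the message.tool_calls shape returned by Ollama's /api/chat endpoint, where arguments arrive as an already-parsed object; the control_lights handler and the sample payload are illustrative only, not part of the template.

# Map function names returned by the model to local handlers and execute them.

def control_lights(room: str, action: str) -> str:
    # Replace with a real call to your smart home API.
    return f"Lights in {room}: {action}"

HANDLERS = {"control_lights": control_lights}

# Sample of a tool call as it appears in message.tool_calls.
tool_calls = [
    {"function": {"name": "control_lights", "arguments": {"room": "living room", "action": "on"}}}
]

for call in tool_calls:
    fn = call["function"]
    handler = HANDLERS.get(fn["name"])
    if handler is None:
        print(f"Unknown tool requested: {fn['name']}")
        continue
    print(handler(**fn["arguments"]))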

Why Deploy FunctionGemma on Railway?

Railway is a singular platform to deploy your infrastructure stack. Railway will host your infrastructure so you don't have to deal with configuration, while allowing you to vertically and horizontally scale it.

By deploying FunctionGemma on Railway, you are one step closer to supporting a complete full-stack application with minimal burden. Host your servers, databases, AI agents, and more on Railway.

