Hosting

Become a Hoster

Share your GPU and earn credits: every request your server handles earns credits you can spend on other models in the network.

Requirements

One-command setup

The fastest way to get started: one command downloads llama.cpp, selects a model suited to your GPU, registers with NeuralGate, and starts earning:

curl -fsSL https://api.computeshare.servequake.com/install.sh | bash

The installer:

Manual registration

If you already have a server running, register it directly:

curl -X POST https://api.computeshare.servequake.com/hosters/register \
  -H "Content-Type: application/json" \
  -d '{
    "name": "My GPU Server",
    "email": "me@example.com",
    "endpoint_url": "https://my-server.example.com",
    "api_key": "my-bearer-token",
    "invite_code": "BETA2026",
    "models": [
      {
        "model_id": "llama-3-8b",
        "model_alias": "Llama 3 8B",
        "price_per_input_token": 100,
        "price_per_output_token": 300,
        "context_window": 8192,
        "max_tokens": 2048
      }
    ]
  }'
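For scripted setups, the same registration call can be issued from Python. Below is a minimal sketch using only the standard library; the endpoint, header, and field names are taken from the curl example above, and `build_registration` is a hypothetical helper, not an official SDK:

```python
import json
import urllib.request

REGISTER_URL = "https://api.computeshare.servequake.com/hosters/register"

def build_registration(name, email, endpoint_url, api_key, invite_code, models):
    """Assemble the JSON body expected by the /hosters/register endpoint."""
    return {
        "name": name,
        "email": email,
        "endpoint_url": endpoint_url,
        "api_key": api_key,
        "invite_code": invite_code,
        "models": models,
    }

payload = build_registration(
    name="My GPU Server",
    email="me@example.com",
    endpoint_url="https://my-server.example.com",
    api_key="my-bearer-token",
    invite_code="BETA2026",
    models=[{
        "model_id": "llama-3-8b",
        "model_alias": "Llama 3 8B",
        "price_per_input_token": 100,
        "price_per_output_token": 300,
        "context_window": 8192,
        "max_tokens": 2048,
    }],
)

req = urllib.request.Request(
    REGISTER_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```

The send is left commented out so you can inspect the payload before submitting it.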

Supported server software

Software                      Compatible        Notes
llama.cpp server              ✅                Recommended. Supports GGUF models.
vLLM                          ✅                Best for large HuggingFace models.
Ollama                        ✅ (with bridge)  Requires LiteLLM bridge for OpenAI compat.
Any OpenAI-compatible server  ✅                Must expose /health and /v1/chat/completions.
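Before registering, it can help to confirm your endpoint exposes both required paths. Here is a small sketch that builds the URLs to probe from a base endpoint URL; the function name is illustrative, not part of any NeuralGate tooling:

```python
from urllib.parse import urljoin

# The two paths every OpenAI-compatible server must expose, per the table above.
REQUIRED_PATHS = ("/health", "/v1/chat/completions")

def probe_urls(endpoint_url: str) -> list[str]:
    """Return the full URLs to check for a given endpoint_url."""
    base = endpoint_url.rstrip("/") + "/"
    return [urljoin(base, path.lstrip("/")) for path in REQUIRED_PATHS]
```

For example, `probe_urls("https://my-server.example.com")` yields the health and chat-completions URLs, with or without a trailing slash on the base.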

Blocked endpoints

The following cannot be registered (ToS violation):

Verification

After registration, NeuralGate runs two checks:

  1. Health check — GET /health must return HTTP 200
  2. Inference test — sends a test prompt, expects a valid response

Once both pass, your server goes live and starts receiving traffic.
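The pass/fail logic of those two checks can be sketched as follows. This is only a sketch, not NeuralGate's actual implementation: it assumes the inference test returns a standard OpenAI chat-completions response, and the function names are illustrative:

```python
def health_ok(status_code: int) -> bool:
    """Health check passes only on HTTP 200 (not 2xx in general)."""
    return status_code == 200

def inference_ok(response: dict) -> bool:
    """Inference test passes when the reply contains non-empty content."""
    try:
        content = response["choices"][0]["message"]["content"]
    except (KeyError, IndexError, TypeError):
        return False
    return bool(content and content.strip())

def verify(status_code: int, response: dict) -> bool:
    # Both checks must pass before the server goes live.
    return health_ok(status_code) and inference_ok(response)
```

A server returning 503 from /health, or an empty completion, would fail verification under this logic.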