# Using gpt4free as an LLM Server for Bots (Clawbot/OpenClaw)

## Overview
This skill covers running gpt4free as a local LLM server with an OpenAI-compatible REST API, custom model routing (config.yaml), and integration with bots like Clawbot or OpenClaw.
## Best Practices

- Start the API server with `python -m g4f --port 8080` (or use `g4f api --debug --port 8080`)
- Use the `/v1` endpoint for OpenAI-compatible requests (e.g., POST to `http://localhost:8080/v1/chat/completions`)
- Define custom model routes in `config.yaml` to aggregate and fall back across providers
- Place `config.yaml` in your cookies directory (e.g., `~/.g4f/cookies/config.yaml`)
- For Clawbot/OpenClaw, patch their config to point to your gpt4free server (see `patch-openclaw.py`)
- Test with `g4f client "Hello" --model openclaw` or the Python client
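Because the server speaks the standard OpenAI chat-completions schema, any plain HTTP client works. A minimal stdlib sketch of building such a request (the `openclaw` model name and base URL come from the setup above; the helper `build_chat_request` is my own):

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completion request for a local g4f server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:8080/v1", "openclaw", "Hello")
# urllib.request.urlopen(req)  # uncomment once the server is running on port 8080
```

The same request shape is what Clawbot/OpenClaw send once their base URL is patched, which is why the patch alone is enough for integration.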
## Common Pitfalls

- Not starting the server before connecting bots
- Incorrect `config.yaml` path or YAML syntax errors
- Missing required Python dependencies (install with `pip install -r requirements.txt`)
- Not exposing the correct port (default: 8080)
- Forgetting to patch bot configs to use your local endpoint
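The first and fourth pitfalls (server not started, wrong port) are cheap to catch programmatically. An illustrative sketch, assuming nothing beyond the stdlib (the function name `server_is_up` is my own):

```python
import socket

def server_is_up(host: str = "localhost", port: int = 8080, timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        # create_connection raises OSError if the port is closed or unreachable.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not server_is_up():
    print("gpt4free server not reachable; start it with: python -m g4f --port 8080")
```

Running a guard like this before launching a bot turns a silent connection failure into an actionable message.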
## Workflow Steps

- Install and set up gpt4free (see the README)
- Start the API server: `python -m g4f --port 8080`
- (Optional) Create or edit `config.yaml` for custom model routing:

  ```yaml
  models:
    - name: "openclaw"
      providers:
        - provider: "GeminiCLI"
          model: "gemini-3-flash-preview"
          condition: "quota.models.gemini-3-flash-preview.remainingFraction > 0 and error_count < 3"
        - provider: "Antigravity"
          model: "gemini-3-flash"
        - provider: "PollinationsAI"
          model: "openai"
  ```

- Patch your bot config (e.g., OpenClaw) to use `http://localhost:8080/v1` as the base URL (see `scripts/patch-openclaw.py`)
- Start your bot and verify it connects to gpt4free
- Monitor logs and test with the Python client or CLI
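To make the routing semantics of a `config.yaml` like the one above concrete, here is an illustrative pure-Python sketch of first-match fallback across providers. This is not g4f's implementation: the condition string is simplified to a boolean callable over runtime state, and the `Route`/`pick_route` names are my own.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Route:
    provider: str
    model: str
    # Simplified stand-in for g4f's condition strings: a predicate on runtime state.
    condition: Callable[[dict], bool] = lambda state: True

def pick_route(routes: list[Route], state: dict) -> Optional[Route]:
    """Return the first route whose condition passes, mimicking fallback order."""
    for route in routes:
        if route.condition(state):
            return route
    return None

# Mirrors the "openclaw" route list from the YAML example above.
routes = [
    Route("GeminiCLI", "gemini-3-flash-preview",
          lambda s: s.get("remaining_fraction", 0) > 0 and s.get("error_count", 0) < 3),
    Route("Antigravity", "gemini-3-flash"),
    Route("PollinationsAI", "openai"),
]
```

With quota available, the first provider wins; once quota is exhausted or errors accumulate, requests fall through to the next entry in order, which is the aggregate/fallback behavior the custom model name exposes to bots.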