FastAgentChat is a professionally architected backend service for a next-generation AI Agent Platform. It enables developers and users to build, manage, and interact with specialized AI agents through both text and voice interfaces.
The system is built on a modular, asynchronous architecture using FastAPI and SQLAlchemy 2.0. This design ensures high performance, maintainability, and scalability.
- FastAPI (Web Framework): High-performance, asynchronous web framework leveraging standard Python type hints for effortless data validation and documentation.
- SQLAlchemy (ORM): Advanced ORM utilizing the latest 2.0 syntax, with a clean separation between database models and API schemas.
- Alembic (Migration System): Version control for the database schema, allowing for seamless structural updates.
- OpenAI Integration: Deep integration with cutting-edge AI models:
  - GPT-4o: For intelligent, context-aware conversations.
  - Whisper (STT): For high-fidelity speech-to-text transcription.
  - OpenAI TTS: For natural-sounding text-to-speech generation.
- Pydantic (Data Validation): Robust schema validation for incoming and outgoing data, ensuring system stability.
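As a sketch of what Pydantic validation looks like in this kind of stack (the `AgentCreate` class and its field names are illustrative assumptions, not the project's actual schemas):

```python
# Hypothetical request schema for agent creation; the class and field names
# are assumptions for illustration, not the project's real models.
from pydantic import BaseModel, Field, ValidationError


class AgentCreate(BaseModel):
    name: str = Field(min_length=1, max_length=100)
    system_prompt: str = Field(min_length=1)
    voice: str = "alloy"  # assumed default voice profile


# A valid payload parses cleanly.
agent = AgentCreate(name="Support Bot", system_prompt="You are helpful.")
print(agent.voice)

# An invalid payload is rejected before it can ever reach the database.
try:
    AgentCreate(name="", system_prompt="x")
    rejected = False
except ValidationError:
    rejected = True
print("rejected:", rejected)
```

Because FastAPI derives both request validation and the OpenAPI docs from these models, a constraint added here is enforced and documented in one place.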
The database is structured to support complex agent configurations and conversation tracking:
- Agents: Stores agent identities, custom system prompts, and voice profiles.
- Conversations: Manages session history (extensible for multi-user chat).
- Messages: Maintains a full audit log of interactions (text and audio).
Logic for third-party API interactions is isolated in the services/ directory. This decouples the AI implementation from the API delivery layer, allowing for easy transitions between different AI providers in the future.
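One way to sketch that decoupling (the `ChatProvider` protocol and its method signature are hypothetical, not the project's actual interface):

```python
# Routers depend on a narrow interface, so swapping OpenAI for another
# vendor only touches the services/ layer. Names here are illustrative.
from typing import List, Protocol


class ChatProvider(Protocol):
    def complete(self, system_prompt: str, history: List[dict]) -> str: ...


class EchoProvider:
    """Stand-in provider used here in place of a real OpenAI client."""

    def complete(self, system_prompt: str, history: List[dict]) -> str:
        return f"echo: {history[-1]['content']}"


def run_chat(provider: ChatProvider, system_prompt: str, history: List[dict]) -> str:
    # The API delivery layer calls this function and never sees the vendor SDK.
    return provider.complete(system_prompt, history)


reply = run_chat(EchoProvider(), "You are helpful.", [{"role": "user", "content": "hi"}])
print(reply)
```

Any class with a matching `complete` method satisfies the protocol, so a provider swap is a one-line change at the call site.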
- Async File Streams: Uses `aiofiles` and FastAPI's `UploadFile` for non-blocking file handling.
- Static Asset Serving: Automatically mounts a `media/` directory for hosting generated voice assets.
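The project pairs `UploadFile` with `aiofiles`; as a stdlib-only sketch of the same non-blocking idea, the blocking disk write below is pushed onto the event loop's thread pool so other coroutines keep running while bytes hit disk:

```python
# Stdlib-only illustration of non-blocking file handling; the real service
# would use aiofiles, and media_dir stands in for the mounted media/ folder.
import asyncio
import tempfile
from pathlib import Path


async def save_upload(data: bytes, dest: Path) -> int:
    loop = asyncio.get_running_loop()
    # Path.write_bytes blocks, so run it in the default thread-pool executor.
    return await loop.run_in_executor(None, dest.write_bytes, data)


async def main() -> int:
    media_dir = Path(tempfile.mkdtemp())  # stand-in for media/
    return await save_upload(b"fake-audio-bytes", media_dir / "reply.mp3")


written = asyncio.run(main())
print(written)  # byte count written
```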
- Automated Testing: Comprehensive test suite using `pytest` and `httpx`.
- OpenAI Mocking: Tests are designed to run independently of the OpenAI API using mock objects, ensuring stable CI/CD pipelines.
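A minimal sketch of the mocking approach using `unittest.mock` (the `generate_reply` helper is hypothetical, defined inline so the example is self-contained):

```python
# Stubbing the OpenAI client so tests never touch the network.
from unittest.mock import MagicMock


def generate_reply(client, prompt: str) -> str:
    # Hypothetical service function mirroring the openai>=1.0 chat call shape.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# In a pytest suite this would live in a fixture or monkeypatch; a bare
# MagicMock configured with a canned response works the same way.
fake_client = MagicMock()
fake_client.chat.completions.create.return_value.choices = [
    MagicMock(message=MagicMock(content="mocked reply"))
]

assert generate_reply(fake_client, "hello") == "mocked reply"
fake_client.chat.completions.create.assert_called_once()
print("mock test passed")
```

Because the service receives its client as a dependency, swapping in the mock requires no changes to production code.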
- Python 3.8+
- OpenAI API Key
- Navigate to the project root.
- Install dependencies: `pip install -r requirements.txt`
- Ensure your `.env` contains:

  ```
  DATABASE_URL=sqlite:///./fast_agent_chat.db
  OPENAI_API_KEY=sk-...
  ```
Launch the development server:

```bash
uvicorn app.main:app --reload
```

Documentation is auto-generated at http://localhost:8000/docs.
| Method | Endpoint | Description |
|---|---|---|
| POST | `/agents/` | Register a new AI Agent with a custom prompt. |
| GET | `/agents/` | List all registered agents. |
| GET | `/agents/{id}` | Retrieve detailed agent configuration. |
| PATCH | `/agents/{id}` | Update agent name, prompt, or voice. |
| DELETE | `/agents/{id}` | Remove an agent from the platform. |
| Method | Endpoint | Description |
|---|---|---|
| POST | `/agents/{id}/sessions` | Start a new chat session for an agent. |
| GET | `/agents/{id}/sessions` | List all chat sessions for an agent. |
| GET | `/agents/{id}/sessions/{sid}` | Retrieve session details and message history. |
| Method | Endpoint | Description |
|---|---|---|
| POST | `/agents/{id}/sessions/{sid}/chat` | Text chat within a specific session; includes the last 10 messages as history context. |
| POST | `/agents/{id}/sessions/{sid}/voice-chat` | Voice chat within a session (STT -> LLM -> TTS pipeline). |
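The voice endpoint's STT -> LLM -> TTS pipeline can be sketched as three composable stages; the stub functions below stand in for the real Whisper, GPT-4o, and TTS calls and return canned values:

```python
# Illustrative pipeline shape only; each stub replaces a real OpenAI call.
def transcribe(audio: bytes) -> str:            # Whisper stand-in
    return "what is the weather?"


def complete(text: str, history: list) -> str:  # GPT-4o stand-in
    return f"answering: {text}"


def synthesize(text: str) -> bytes:             # OpenAI TTS stand-in
    return text.encode("utf-8")


def voice_chat(audio_in: bytes, history: list) -> bytes:
    transcript = transcribe(audio_in)           # speech -> text
    reply = complete(transcript, history)       # text -> reply
    return synthesize(reply)                    # reply -> speech


audio_out = voice_chat(b"...", [])
print(audio_out.decode())
```

Keeping the stages as separate functions is what lets the real service swap or mock any one of them independently.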
- Modular Project Structure: Clean separation of models, schemas, routers, and services.
- Environment Management: Secure configuration using `.env` files and `pydantic-settings`.
- Database Migrations: Integrated Alembic for structural versioning.
- Scalable Media Support: Dedicated static file serving for AI voice responses.
- Async Everywhere: Leverages Python's `async`/`await` for optimal I/O performance.
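A toy illustration of the payoff for I/O-bound work, using simulated 50 ms waits in place of real API calls:

```python
# Two simulated "API calls" overlap on the event loop instead of running
# back to back, so total wall time stays near one call's latency.
import asyncio
import time


async def fake_io(name: str) -> str:
    await asyncio.sleep(0.05)  # stands in for a network round trip
    return name


async def main() -> list:
    return await asyncio.gather(fake_io("stt"), fake_io("tts"))


start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results, f"{elapsed:.2f}s")  # both finish in roughly 0.05s, not 0.10s
```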