Integrates with Claude to enable intelligent querying of documentation data, transforming crawled technical documentation into an actionable resource that LLMs can directly interact with.
Turn Weeks of Documentation Research into Hours of Productive Development
Perfect For • Features • Why DevDocs • Getting Started • Scripts • Compare to FireCrawl • Discord • DevDocs Roadmap
Skip weeks of reading documentation and dealing with technical debt. Implement ANY technology faster by letting DevDocs handle the heavy lifting of documentation understanding.
Pull entire contents of websites with Smart Discovery of Child URLs up to level 5. Perfect for both internal and external website documentation with intelligent crawling.
Leverage internal documentation with built-in MCP servers and Claude integration for intelligent data querying. Transform your team's knowledge base into an actionable resource.
DevDocs + VSCode(cline) + Your Idea = Ship products fast with ANY technology. No more getting stuck in documentation hell when building your next big thing.
Documentation is everywhere and LLMs are OUTDATED in their knowledge. Reading it, understanding it, and implementing it takes weeks of research and development even for senior engineers. We cut down that time to hours.
DevDocs brings documentation to you. Point it at any technical documentation URL and it discovers the related pages, extracts clean content, and turns it into a resource your LLM can use.

🔥 We want anyone in the world to have the ability to build amazing products quickly using the most cutting-edge LLM technology.
Feature | DevDocs | Firecrawl |
---|---|---|
Free Tier | Unlimited pages | None |
Starting Price | Free Forever | $16/month |
Enterprise Plan | Custom | $333/month |
Crawl Speed | 1000/min | 20/min |
Depth Levels | Up to 5 | Limited |
Team Seats | Unlimited | 1-5 seats |
Export Formats | MD, JSON, LLM-ready MCP servers | Limited formats |
API Access | Coming Soon | Limited |
Model Context Protocol Integration | ✅ | ❌ |
Support | Priority Available via Discord | Standard only |
Self-hosted (free use) | ✅ | ❌ |
DevDocs is designed to be easy to use with Docker, requiring minimal setup for new users.
For Mac/Linux users:
```bash
# Clone the repository
git clone https://github.com/cyberagiinc/DevDocs.git

# Navigate to the project directory
cd DevDocs

# Configure environment variables
# Copy the template file to .env
cp .env.template .env
# Ensure NEXT_PUBLIC_BACKEND_URL in .env is set correctly (e.g., http://localhost:24125)
# This allows the frontend (running in your browser) to communicate with the backend service.

# Start all services using Docker
./docker-start.sh
```
For Windows users (experimental, not yet fully tested):
```bat
:: Prerequisites: Install WSL 2 and Docker Desktop
:: Docker Desktop for Windows requires WSL 2. Please ensure you have WSL 2 installed and running first.
:: 1. Install WSL 2: Follow the official Microsoft guide: https://learn.microsoft.com/en-us/windows/wsl/install
:: 2. Install Docker Desktop for Windows: Download and install from the official Docker website. Docker Desktop includes Docker Compose.

:: Clone the repository
git clone https://github.com/cyberagiinc/DevDocs.git

:: Navigate to the project directory
cd DevDocs

:: Configure environment variables
:: Copy the template file to .env
copy .env.template .env
:: Ensure NEXT_PUBLIC_BACKEND_URL in .env is set correctly (e.g., http://localhost:24125)
:: This allows the frontend (running in your browser) to communicate with the backend service.

:: Start all services using Docker
docker-start.bat
```
Note for Windows Users
If you encounter permission issues, you may need to run the script as administrator or manually set permissions on the logs, storage, and crawl_results directories. The script uses the `icacls` command to set permissions, which might require elevated privileges on some Windows systems.

Manually Setting Permissions on Windows:

If you need to manually set permissions, you can do so using either the Windows GUI or the command line:
Using Windows Explorer:
- Right-click on each directory (logs, storage, crawl_results)
- Select "Properties"
- Go to the "Security" tab
- Click "Edit" to change permissions
- Click "Add" to add users/groups
- Type "Everyone" and click "Check Names"
- Click "OK"
- Select "Everyone" in the list
- Check "Full control" under "Allow"
- Click "Apply" and "OK"
Using Command Prompt (as Administrator):
```bat
icacls logs /grant Everyone:F /T
icacls storage /grant Everyone:F /T
icacls crawl_results /grant Everyone:F /T
```
Note about docker-compose.yml on Windows
If you encounter issues with the docker-compose.yml file (such as a "Top-level object must be a mapping" error), the `docker-start.bat` script automatically fixes this by ensuring the file has the correct format and encoding. This fix is applied every time you run the script, so you don't need to modify the file manually.
This single command builds and starts all DevDocs services (frontend, backend, MCP server, and Crawl4AI) in Docker containers.

Once the services are running, open the DevDocs frontend in your browser to start crawling and querying documentation.

When using Docker, logs can be accessed with the `docker logs` command:
```bash
# View logs from a specific container
docker logs devdocs-frontend
docker logs devdocs-backend
docker logs devdocs-mcp
docker logs devdocs-crawl4ai

# Follow logs in real-time
docker logs -f devdocs-backend
```
To stop all services, press `Ctrl+C` in the terminal where the docker-start script is running.
DevDocs includes various utility scripts to help with development, testing, and maintenance. Here's a quick reference:
- `start.sh` / `start.bat` / `start.ps1` - Start all services (frontend, backend, MCP) for local development.
- `docker-start.sh` / `docker-start.bat` - Start all services using Docker containers.
- `check_mcp_health.sh` - Verify the MCP server's health and configuration status.
- `restart_and_test_mcp.sh` - Restart Docker containers with updated MCP configuration and test connectivity.
- `check_crawl4ai.sh` - Check the status and health of the Crawl4AI service.
- `debug_crawl4ai.sh` - Run Crawl4AI in debug mode with verbose logging for troubleshooting.
- `test_crawl4ai.py` - Run tests against the Crawl4AI service to verify functionality.
- `test_from_container.sh` - Test the Crawl4AI service from within a Docker container.
- `view_result.sh` - Display crawl results in a formatted view.
- `find_empty_folders.sh` - Identify empty directories in the project structure.
- `analyze_empty_folders.sh` - Analyze empty folders and categorize them by risk level.
- `verify_reorganization.sh` - Verify that code reorganization was successful.

These scripts are organized in the following directories:

- `scripts/general/`: General utility scripts
- `scripts/docker/`: Docker-specific scripts
- `scripts/mcp/`: MCP server management scripts
- `scripts/test/`: Testing and verification scripts

DevDocs is more than a tool; it's your documentation companion.
Open the "Modes" Interface
Name
Research_MCP
).Role Definition Prompt
Prompt
Expertise and Personality: Expertise: Developer documentation retrieval, technical synthesis, and documentation search. Personality: Systematic, detail-oriented, and precise. Provide well-structured answers with clear references to documentation sections.
Behavioral Mandate: Always use the Table Of Contents and Section Access tools when addressing any query regarding the MCP documentation. Maintain clarity, accuracy, and traceability in your responses.
Custom Instructions Prompt:
1. Table Of Contents Tool: Returns a full or filtered list of documentation topics.
2. Section Access Tool: Retrieves the detailed content of specific documentation sections.
General Process:

1. Query Interpretation: Parse the user's query to extract key topics, keywords, and context. Identify the likely relevant sections (e.g., API configurations, error handling) from the query.
2. Discovery via Table Of Contents: Use the Table Of Contents tool to search the documentation index for relevant sections. Filter or scan titles and metadata for matching keywords.
3. Drill-Down Using Section Access: For each identified relevant document or section, use the Section Access tool to retrieve its content. If multiple parts are needed, request all related sections to ensure comprehensive coverage.
4. Synthesis and Response Formation: Combine the retrieved content into a coherent and complete answer. Reference section identifiers or document paths for traceability. Validate that every aspect of the query has been addressed.
5. Error Handling: If no matching sections are found, adjust the search parameters and retry. Clearly report if the query remains ambiguous or if no relevant documentation is available.
Mandatory Tool Usage:
Enforcement: Every time a query is received that requires information from the MCP server docs, the agent MUST first query the Table Of Contents tool to list potential relevant topics, then use the Section Access tool to retrieve the necessary detailed content.
Search & Retrieve Workflow:
Interpret and Isolate: Identify the key terms and data points from the user's query.
Index Lookup: Immediately query the Table Of Contents tool to obtain a list of relevant documentation sections.
Targeted Retrieval: For each promising section, use the Section Access tool to get complete content.
Information Synthesis: Merge the retrieved content, ensuring all necessary details are included and clearly referenced.
Fallback and Clarification: If initial searches yield insufficient data, adjust the query parameters and retrieve additional sections as needed.
Custom Instruction Loading: Additional custom instructions specific to Research_MCP mode may be loaded from the .clinerules-research-mcp file in your workspace. These may include further refinements or constraints based on evolving documentation structures or query types.
Final Output Construction: The final answer should be organized, directly address the query, and include clear pointers (e.g., section names or identifiers) back to the MCP documentation. Ensure minimal redundancy while covering all necessary details.
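For reference, here is a minimal sketch of what this index-then-drill-down workflow looks like as a programmatic MCP client call, using the official `mcp` Python SDK. The launch command and the tool names `get_table_of_contents` and `get_section` are placeholders for illustration, not DevDocs' actual identifiers.

```python
# Minimal sketch of the Table-Of-Contents-then-Section-Access workflow over MCP.
# Assumptions: the DevDocs MCP server is reachable over stdio, and it exposes two
# hypothetical tools named "get_table_of_contents" and "get_section".
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def research(query: str) -> None:
    # The command/args below are placeholders for however your MCP server is launched.
    params = StdioServerParameters(command="docker", args=["exec", "-i", "devdocs-mcp", "node", "server.js"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Step 1: index lookup via the Table Of Contents tool.
            toc = await session.call_tool("get_table_of_contents", arguments={"filter": query})

            # Step 2: targeted retrieval of a promising section.
            # (Real section identifiers would come from parsing the ToC result above.)
            section = await session.call_tool("get_section", arguments={"section_id": "example-section"})
            print(toc, section)


asyncio.run(research("authentication"))
```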
"DevDocs turned our 3-week implementation timeline into 2 days. It's not just a crawler, it's a development accelerator." - Senior Engineer at Fortune 100 Company
"Launched my SaaS in half the time by using DevDocs to understand and implement new technologies quickly." - Successful Indie Hacker
This roadmap outlines the upcoming enhancements and features planned for DevDocs, our advanced web crawling platform powered by Crawl4AI. Each item is designed to leverage Crawl4AI's capabilities to their fullest, ensuring a robust, efficient, and user-friendly web crawling experience.
- Use `wait_for_images=True` to ensure all images are fully loaded before extraction.
- Use `scan_full_page=True` to force the crawler to scroll through the entire page, triggering lazy-loaded content.
- Use `scroll_delay` to add delays between scroll steps, allowing content to load properly.
- Use `wait_for` parameters to wait for specific DOM elements indicative of content loading completion.
- Use `use_persistent_context=True` to maintain session data across tasks, reducing the need for repeated logins and setups.
- Support API token authentication (`CRAWL4AI_API_TOKEN`) to secure API endpoints.
- Build for multiple architectures (`x86_64`, `ARM`) to support a wide range of systems.
- Use `MemoryAdaptiveDispatcher` to dynamically adjust concurrency based on system memory availability.
- Generate PDFs (`pdf=True`) and extract content from them.
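As a rough illustration of how several of these options fit together in Crawl4AI's Python API (a sketch only, not DevDocs' actual crawler configuration; parameter support may vary between Crawl4AI versions, and the URL and profile directory below are placeholders):

```python
# Sketch of a Crawl4AI run combining several of the options mentioned above.
import asyncio

from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig


async def main() -> None:
    browser_cfg = BrowserConfig(
        use_persistent_context=True,        # keep session data (cookies, logins) across tasks
        user_data_dir="./browser_profile",  # assumed location for the persistent profile
    )
    run_cfg = CrawlerRunConfig(
        wait_for_images=True,   # wait until images have loaded before extraction
        scan_full_page=True,    # scroll the whole page to trigger lazy-loaded content
        scroll_delay=0.5,       # pause (seconds) between scroll steps
        wait_for="css:footer",  # wait for a DOM element that signals the page is ready
        pdf=True,               # also capture a PDF rendering of the page
    )
    async with AsyncWebCrawler(config=browser_cfg) as crawler:
        result = await crawler.arun("https://docs.example.com", config=run_cfg)
        print(result.markdown[:500])


asyncio.run(main())
```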
Made with ❤️ by CyberAGI Inc in 🇺🇸
Make Software Development Better Again. Contribute to DevDocs!