Unlock 75+ LLM Agent Blueprints in One Repo (2025): From Quick Bots to Production-Ready Crews

Clone 75+ free, runnable AI agent tutorials and learn to prototype, scale, and ship voice, RAG, and multi-agent systems—fast, in 2025.

1. Why this repo is on every AI VP’s reading list

  • Breadth. 75+ runnable examples cover the entire agent/RAG stack: single-agent toys, production-grade multi-agent teams, voice pipelines, MCP (Model Context Protocol) integrations, and every popular LLM provider.
  • Depth. Each folder is a mini-tutorial with a requirements.txt, a fully commented main.py, and a step-by-step README—ideal for architecture reviews or onboarding new hires.
  • Community proof. ~35k GitHub stars (and counting) signal active maintenance and real-world adoption.

2. Cloning & first run (5 min)

git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
cd awesome-llm-apps/starter_ai_agents/ai_data_analysis_agent
python -m venv .venv && source .venv/bin/activate      # optional isolation
pip install -r requirements.txt
python main.py --csv /path/to/marketing_funnel.csv

Inside main.py you’ll see a ReAct-style loop: the LLM writes pandas code, the sandbox executes it, results go back to the LLM for reflection. Swap in Snowflake or DuckDB by replacing the run_code() helper.
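
To feel the pattern without cloning anything, here is a minimal sketch of such a loop. The run_code() helper, the model name, and the prompt wording are illustrative stand-ins rather than the repo's actual code, and it skips the sandboxing the tutorial provides.

import contextlib
import io

import pandas as pd
from openai import OpenAI

client = OpenAI()
df = pd.read_csv("marketing_funnel.csv")

def run_code(code: str) -> str:
    """Execute model-written pandas code and capture printed output (no sandboxing in this sketch)."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {"pd": pd, "df": df})
    return buf.getvalue()

ask = lambda prompt: client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

question = "Which channel had the lowest CAC in Q1 2025?"
code = ask(f"Columns: {list(df.columns)}. Write pandas code that prints the answer to: {question}. "
           "Return only raw Python, no markdown fences.")
observation = run_code(code)                                   # act: execute the generated code
print(ask(f"Question: {question}\nCode output:\n{observation}\nAnswer in one sentence."))  # reflect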

3. Tour of the jungle — what lives where

What you’ll learn, folder by folder (two-sentence pitches):

  • starter_ai_agents/ – Single-agent patterns: a prompt-only blog-to-podcast TTS app, a one-file Meme Generator that puppets a headless browser, and a Gemini multimodal demo for doc + image reasoning.
  • advanced_ai_agents/ – Tool-using planners: the Deep Research Agent chains web search → notebook auto-summaries; the System Architect Agent outputs PlantUML and tests.
  • autonomous_game_agents/ – Pygame, Chess, and Tic-Tac-Toe bots showcasing event-loop reflection and self-play evaluation.
  • multi_agent_teams/ – CrewAI orchestration with dynamic role assignment—for instance, Competitor Intel spawns “Researcher”, “Analyst”, and “Writer” workers.
  • voice_ai_agents/ – Real-time streaming with OpenAI TTS ↔ Whisper; the Customer-Support Voice Agent shows slot-filling plus sentiment escalation.
  • mcp_ai_agents/ – Browser, GitHub, Notion, and Travel MCP demos—agents can read/patch external state through the Model Context Protocol.
  • rag/ – From “Basic Chain” to Autonomous RAG with self-reflection, plus CRAG (Corrective RAG) that re-issues queries when confidence drops.

(Folder names are verbatim; open any README for wiring diagrams and API-key setup.)

4. Deep-dive playbook

Below are cut-down excerpts you can paste into a scratch repo to feel each pattern.

4.1 Starter: AI Data Analysis Agent

import pandas as pd
from langchain_openai import ChatOpenAI
from langchain_experimental.agents import create_pandas_dataframe_agent

df = pd.read_csv("marketing_funnel.csv")
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# The helper wires up its own Python REPL tool; recent versions require an explicit opt-in
agent = create_pandas_dataframe_agent(
    llm, df,
    allow_dangerous_code=True,
    verbose=True,
)

question = "Which channel had the lowest CAC in Q1 2025?"
print(agent.invoke(question)["output"])

Why it matters: zero-shot spreadsheet analysis becomes tractable: the generated pandas code runs through the agent's Python tool (add your own sandboxing before production use), and the result feeds back to the model for validation.

4.2 Advanced: Deep Research Agent

Key architecture:

  1. Task Decomposer → splits a query into sub-topics.
  2. Retriever Tool (SerpAPI + scholarly) → fetches sources.
  3. Writer → consolidates using a source-grounded prompt.
  4. Critic → runs an “alarm” chain; if hallucination probability > 25%, loop back.

The control flow, as a Mermaid diagram:

flowchart TD
    Q(User) --> Decompose
    Decompose -->|sub-q| Search
    Search --> Draft
    Draft --> Critic
    Critic -->|ok| Answer
    Critic -->|revise| Search

Swap out any node—e.g., plug VectorDB into Retriever for private corp data.
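
For example, here is a minimal sketch of that swap, assuming an existing Chroma collection; the collection name corp_docs, the persist directory, and the k value are placeholders.

from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

# Hypothetical private corpus standing in for the web-search Retriever node
corp_docs = Chroma(
    collection_name="corp_docs",
    embedding_function=OpenAIEmbeddings(),
    persist_directory="./chroma",
)
retriever = corp_docs.as_retriever(search_kwargs={"k": 5})

def search_node(sub_question: str) -> str:
    """Drop-in replacement for the Search node in the flowchart above."""
    docs = retriever.invoke(sub_question)
    return "\n\n".join(d.page_content for d in docs)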

4.3 Multi-agent: Competitor Intelligence Team

from crewai import Agent, Crew, Task
from tools import GoogleSearch, PDFReader   # the tutorial's local tool wrappers

# role/goal/backstory strings are illustrative; CrewAI requires all three
researcher = Agent(role="Researcher", goal="Track competitor funding news",
                   backstory="Market-intelligence specialist", tools=[GoogleSearch()])
analyst = Agent(role="Analyst", goal="Turn filings into comparable metrics",
                backstory="Financial analyst", tools=[PDFReader()])
writer = Agent(role="Writer", goal="Summarize findings for executives",
               backstory="Technical writer")

tasks = [
    Task(description="Gather recent funding news on ACME Corp",
         expected_output="Bullet list of funding events with sources", agent=researcher),
    Task(description="Extract valuation multiples & growth metrics",
         expected_output="Table of multiples and growth rates", agent=analyst),
    Task(description="Draft a 1-page brief with charts",
         expected_output="One-page brief in markdown", agent=writer),
]

Crew(agents=[researcher, analyst, writer], tasks=tasks).kickoff()

Pro tip: orchestrate via event bus (Redis pub/sub). Your VP of Product can subscribe to milestone events without reading every token.
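
A minimal sketch of that wiring with redis-py; the channel name and payload fields are placeholders you would adapt to your own crew callbacks.

import json
import time

import redis

bus = redis.Redis(host="localhost", port=6379)

def on_milestone(task_name: str, status: str) -> None:
    """Publish one milestone event; stakeholders subscribe to the channel instead of reading raw logs."""
    bus.publish("crew:milestones", json.dumps({
        "task": task_name,
        "status": status,        # e.g. "started" or "done"
        "ts": time.time(),
    }))

# e.g. call on_milestone("Gather recent funding news on ACME Corp", "done") from a task callback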

4.4 RAG pattern: Self-Correcting RAG (CRAG)

# Minimal corrective-RAG loop: retrieve, grade the context, re-query if confidence is low
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
retriever = my_chroma.as_retriever(search_kwargs={"k": 4})   # my_chroma: an existing Chroma store

def corrective_answer(question: str, max_retries: int = 2) -> str:
    query = question
    for _ in range(max_retries + 1):
        docs = retriever.invoke(query)
        context = "\n\n".join(d.page_content for d in docs)
        # Grade retrieval quality with the LLM itself
        grade = llm.invoke(
            f"Context:\n{context}\n\nQuestion: {question}\n"
            "Does the context contain enough information to answer? Reply yes or no."
        ).content.lower()
        if "yes" in grade:
            return llm.invoke(
                f"Answer using only this context:\n{context}\n\nQuestion: {question}"
            ).content
        # Low confidence: rewrite the query (the corrective step) and retrieve again
        query = llm.invoke(f"Rewrite this search query to retrieve better documents: {query}").content
    return "No confident answer found after corrective retries."

When the grading step judges the retrieved context insufficient, the loop rewrites the query and retrieves again before answering; the corrective behaviour is a few lines of explicit glue rather than a single framework flag.

5. Shipping to prod

Concern-by-concern cheat sheet:

  • Local vs. cloud – All demos run on CPU; add --model qwen:4b-int4 for Ollama, or point the same code at OpenAI endpoints.
  • Observability – Wrap agent calls in LangSmith or Helicone; the repo includes ready-made JSON logging hooks (a minimal wrapper sketch follows this list).
  • Security – Use the MCP Browser agent to enforce domain allow-lists; for RAG, enable strict source quoting to reduce prompt-injection risk.
  • Scaling – For multi-agent crews, put a simple priority queue (Redis/ZMQ) in front and deploy workers as Kubernetes jobs.
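
The repo's own hooks aside, a minimal JSON logging wrapper for any agent exposing an invoke() method might look like this; the field names are illustrative.

import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent")

def logged_invoke(agent, question: str):
    """Run the agent and emit one structured JSON log line per call."""
    start = time.time()
    result = agent.invoke(question)
    log.info(json.dumps({
        "question": question,
        "latency_s": round(time.time() - start, 2),
        "output": str(result)[:500],   # truncate to keep log lines readable
    }))
    return result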

6. Extending the repo (your next PR)

  1. Pick a blank slot under the right folder.
  2. Fork, branch, copy the template_project/ scaffold.
  3. Update README.md—keep the Getting Started section consistent.
  4. Test with pytest -q (yes, each tutorial has smoke tests); a minimal example follows this list.
  5. Open a PR with a 30-sec Loom demo link; maintainers merge fast.
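
If your new tutorial needs its own smoke test, a minimal sketch (assuming the folder ships a main.py exposing a main() entry point, which is an assumption about your project layout) could be:

# tests/test_smoke.py: hypothetical smoke test for a new tutorial folder
import importlib

def test_main_importable():
    """The tutorial's entry module should import cleanly and expose main()."""
    module = importlib.import_module("main")
    assert hasattr(module, "main")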

Happy hacking!

Cohorte Team

June 12, 2025