Everyone has seen chatbots in action; some are impressive, others are annoying. But what if you could build one that's genuinely smart, well-organized, and easy to integrate with your own application? In this article, we'll use two powerful tools to build a chatbot from scratch:
- LangGraph, which makes it easy to manage structured, multi-step workflows with LLMs.
- Django, a scalable and clean web framework, which we'll use to expose the chatbot as an API.
We'll begin with a quick setup: cloning the GitHub repository and installing dependencies with Pipenv. Then we'll define the chatbot's logic with LangGraph, build a Django-powered API around it, and wire up a basic frontend to talk to it.
Whether you want to learn how LangGraph works with a real-world backend or you just want to stand up a simple chatbot, you're in the right place.
Quickstart: Clone & Set Up the Project
Let's start by cloning the project and setting up the environment. Make sure you have Python 3.12 and pipenv installed on your system. If not, you can install pipenv with:
pip install pipenv
Now, clone the repository and move into the project folder:
git clone https://github.com/Badribn0612/chatbot_django_langgraph.git
cd chatbot_django_langgraph
Now let's install all the requirements using pipenv:
pipenv install
Note: If you get an error saying you don't have Python 3.12 on your system, use the following commands:
pipenv --python path/to/python
pipenv install
To find the path of your Python interpreter, you can use the following command:
which python (Linux and Windows)
which python3 (Mac)
To activate the environment, use the following command:
pipenv shell
Now that our requirements are installed, let's set up the environment variables. Use the following command to create a .env file:
touch .env
Add your API keys to the .env file
# Google Gemini AI
GOOGLE_API_KEY=your_google_api_key_here
# Groq
GROQ_API_KEY=your_groq_api_key_here
# Tavily Search
TAVILY_API_KEY=your_tavily_api_key_here
Generate a Google API key from Google AI Studio, a Groq API key from the Groq Console, and get your Tavily key from the Tavily Home page.
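To quickly verify that the keys are picked up, you can load the file with python-dotenv. This is a sketch; it assumes python-dotenv is among the installed dependencies.
from dotenv import load_dotenv
import os

load_dotenv()  # reads the .env file in the current directory
assert os.getenv("GOOGLE_API_KEY"), "GOOGLE_API_KEY missing from .env"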
Now that the setup is done, run the following commands (make sure your environment is activated):
python manage.py migrate
python manage.py runserver
This should start the server:

Click the http://127.0.0.1:8000/ link to open the running application.
Designing the Chatbot Logic with LangGraph
Now, let's dive into designing the chatbot logic. You might be wondering: why LangGraph? I picked LangGraph because it gives you the flexibility to build complex workflows tailored to your use case. Think of it as stitching together multiple functions into a flow that actually makes sense for your application. Let's walk through the core logic below; the complete code is available on GitHub.
1. State Definition
from typing import Annotated, TypedDict
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]
This state schema drives the chatbot. Its main job is to keep track of the message history: if your graph loops, each pass receives the input together with the history of messages, and the add_messages reducer appends the LLM's response to the previous history.
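To make that concrete, here's a tiny standalone check of the add_messages reducer (a sketch; role/content tuples are one of the input forms it accepts):
from langgraph.graph.message import add_messages

# The reducer merges the new message(s) into the existing history.
history = add_messages([("user", "Hi there")], [("assistant", "Hello! How can I help?")])
print([m.content for m in history])  # ['Hi there', 'Hello! How can I help?']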
2. Initialize LangGraph
from langgraph.graph import StateGraph, START

graph_builder = StateGraph(State)
The line above initializes the graph. This StateGraph instance is responsible for maintaining the flow of the chatbot (the conversation flow).
3. Chat Model with Fallbacks
from langchain.chat_models import init_chat_model

llm_with_fallbacks = init_chat_model("google_genai:gemini-2.0-flash").with_fallbacks(
    [init_chat_model("groq:llama-3.3-70b-versatile")]
)
This makes Gemini 2.0 Flash the primary LLM and Llama 3.3 70B the fallback. If Google's servers are overloaded or the API hits rate limits, requests automatically fall back to Llama 3.3 70B.
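Since the fallback chain behaves like a single chat model, a quick smoke test is straightforward (a sketch, assuming the API keys from your .env file are loaded):
# Gemini answers unless it errors out, in which case Llama takes over.
reply = llm_with_fallbacks.invoke("Say hello in five words.")
print(reply.content)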
4. Tool Integration
from langchain_tavily import TavilySearch

tool = TavilySearch(max_results=2)
llm_with_tools = llm_with_fallbacks.bind_tools([tool])
We're also adding a search tool to the LLM. It comes into play when the LLM decides it doesn't have the knowledge to answer a query: it searches for information using the tool, retrieves the relevant results, and answers the query based on them.
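You can see the binding in action by asking for fresh information; instead of a final answer, the model's reply then carries structured tool calls (a sketch; the exact tool name in the output may differ):
ai_msg = llm_with_tools.invoke("What is the weather in Paris right now?")
# If the model chose to search, tool_calls lists the requested invocations.
print(ai_msg.tool_calls)  # e.g. [{'name': 'tavily_search', 'args': {'query': ...}, ...}]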
5. Chatbot Node Logic
def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
This is the function responsible for invoking the LLM and getting the response, and it's exactly what I was talking about: with LangGraph, you can build a graph made up of several functions like this one. You can branch, merge, and even run functions (called nodes in LangGraph) in parallel. And yes, I almost forgot: you can even create loops within the graph. That's the kind of flexibility LangGraph brings to the table.
6. ToolNode and Conditional Flow
from langgraph.prebuilt import ToolNode, tools_condition

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges("chatbot", tools_condition)
graph_builder.add_edge("tools", "chatbot")
We register both nodes on the graph and give the tool its own node, so that whenever the chatbot decides it needs to search, the graph can simply route to the tool node and fetch the relevant information. The tools_condition edge handles that routing automatically: it sends the flow to the tools node only when the LLM's response contains tool calls.
7. Graph Entry and Exit
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile()

from IPython.display import Image, display
display(Image(graph.get_graph().draw_mermaid_png()))

The preceding code renders the graph as a Mermaid diagram, giving you a visual of the conversation flow.
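The Django view in the next section imports a get_chatbot_response helper from chatbot.py. Here's a minimal sketch of that helper, assuming it simply runs the compiled graph over the incoming messages (the repository's version may differ):
def get_chatbot_response(messages: list) -> dict:
    # role/content dicts are valid graph input thanks to the add_messages reducer.
    return graph.invoke({"messages": messages})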
This LangGraph setup lets you build a structured chatbot that can handle conversations, call tools like web search when needed, and fall back to an alternative model if one fails. It's modular and easy to extend. Now that the LangGraph part is done, let's jump into creating an API for our chatbot with Django.
Building the API with Django
If you're new to Django, you can use this guide to learn how to create an app. For this project, we have set up:
- Project: djangoproj
- App: djangoapp
Step 1: App Configuration
In djangoapp/apps.py, we define the app config so that Django can recognize it:
from django.apps import AppConfig

class DjangoappConfig(AppConfig):
    default_auto_field = "django.db.models.BigAutoField"
    name = "djangoapp"
Now register the app inside djangoproj/settings.py:
INSTALLED_APPS = [
    # default Django apps...
    "djangoapp",
]
Step 2: Creating the Chatbot API
In djangoapp/views.py, we define a simple API endpoint that handles POST requests:
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
import json

from chatbot import get_chatbot_response

@csrf_exempt
def chatbot_api(request):
    if request.method == "POST":
        try:
            data = json.loads(request.body)
            messages = data.get("messages", [])
            user_query = data.get("query", "")
            messages.append({"role": "user", "content": user_query})
            response = get_chatbot_response(messages)
            serialized_messages = [serialize_message(msg) for msg in response["messages"]]
            return JsonResponse({"messages": serialized_messages})
        except Exception as e:
            return JsonResponse({"error": str(e)}, status=500)
    return JsonResponse({"error": "POST request required"}, status=400)
- This view accepts user input, passes it to the LangGraph-powered chatbot, and returns the response.
- @csrf_exempt is used for testing/demo purposes to allow external POST requests.
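The view also relies on a serialize_message helper that isn't shown here. A minimal sketch, assuming LangChain message objects that expose type and content attributes (the repository's version may differ):
def serialize_message(msg):
    # Map a LangChain message object to a JSON-safe dict.
    return {"role": getattr(msg, "type", "unknown"), "content": msg.content}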
Step 3: Hooking the API to URLs
In djangoproj/urls.py, wire up the view to an endpoint:
from django.urls import path
from djangoapp.views import chatbot_api, chat_interface

urlpatterns = [
    path('', chat_interface, name="chat_interface"),
    path('api/chatbot/', chatbot_api, name="chatbot_api"),
]
Now, sending a POST request to /api/chatbot/ will trigger the chatbot and return a JSON response.
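With the dev server running, you can exercise the endpoint from Python. This is a sketch: it assumes the requests library is installed and uses the payload shape the view above expects.
import requests

resp = requests.post(
    "http://127.0.0.1:8000/api/chatbot/",
    json={"messages": [], "query": "What is LangGraph?"},
)
print(resp.json()["messages"][-1]["content"])  # the assistant's latest reply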
Step 4: Serving a Basic Chat UI
To show a simple interface, add this to djangoapp/views.py:
from django.shortcuts import render

def chat_interface(request):
    return render(request, 'index.html')
This view renders index.html, a basic chat interface.
In djangoproj/settings.py, tell Django where to look for templates:
TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [BASE_DIR / "templates"],
        # ...
    },
]
We've used Django to turn our LangGraph chatbot into a functional API with just a few lines of code, and we've even included a basic user interface for interacting with it. Clean, modular, and simple to extend, this setup works well for both demos and real-world projects.
Below is the working demo of the chatbot:
Features You Can Build on Top
Here are some of the features you can build on top of this application:
- Set up system prompts and agent personas to guide behavior and responses (see the sketch after this list).
- Create multiple specialized agents plus a routing agent that delegates tasks based on user input.
- Plug in RAG tools to bring in your own data and enrich the responses.
- Store conversation history in a database (like PostgreSQL), linked to user sessions for continuity and analytics.
- Implement smart message windowing or summarization to handle token limits gracefully.
- Use prompt templates or tools like Guardrails AI or NeMo Guardrails for output validation and safety filtering.
- Add support for handling images or files, using capable models like Gemini 2.5 Pro or GPT-4.1.
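As a taste of the first item, here's one minimal way to give the chatbot node a persona. This is a sketch: the persona text is made up, and it simply replaces the chatbot node function defined earlier.
from langchain_core.messages import SystemMessage

PERSONA = SystemMessage(content="You are a concise, friendly support assistant.")  # hypothetical persona

def chatbot(state: State):
    # Prepend the system prompt every turn so the model stays in character.
    return {"messages": [llm_with_tools.invoke([PERSONA] + state["messages"])]}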
Conclusion
And that's a wrap! We just built a fully functional chatbot from scratch using LangGraph and Django, complete with a clean API, tool integration, fallback LLMs, and more. The best part? It's modular and super easy to extend. Whether you're looking to build a smart assistant for your own product, experiment with multi-agent systems, or just get your hands dirty with LangGraph, this setup gives you a solid starting point. There's a lot more you can explore, from adding image inputs to plugging in your own knowledge base. So go ahead: tweak it, break it, build on top of it. The possibilities are wide open. Let me know what you build.
Frequently Asked Questions
Q. What tech stack does this chatbot use?
A. The chatbot uses LangGraph for logic orchestration, Django for the API, Pipenv for dependency management, and integrates LLMs like Gemini and Llama 3, plus the Tavily Search tool.
Q. Which LLM does the chatbot rely on, and what happens if it fails?
A. It uses Gemini 2.0 Flash as the primary model and automatically falls back to Llama 3.3 70B if Gemini fails or hits rate limits.
Q. What role does LangGraph play?
A. LangGraph structures the chatbot's conversation flow using nodes and edges, allowing for loops, conditions, tool use, and LLM fallbacks.
Q. How do I run the project locally?
A. Set the environment variables, run python manage.py migrate, then python manage.py runserver, and visit http://127.0.0.1:8000/.
Q. What can I build on top of this setup?
A. You can add agent personas, database-backed chat history, RAG, message summarization, output validation, and multimodal input support.