Step-by-Step Guide to Build and Deploy an LLM-Powered Chat with Memory in Streamlit

In this post, I’ll show you step by step how to build and deploy a chat powered by an LLM (Gemini) in Streamlit, and how to monitor API usage on the Google Cloud Console. Streamlit is a Python framework that makes it super easy to turn your Python scripts into interactive web apps, with almost no front-end work.

Recently, I built a project, bordAI, a chat assistant powered by an LLM and integrated with tools I developed to support embroidery projects. After that, I decided to start this series of posts to share tips I’ve learned along the way.

Here’s a quick summary of the post:

1 to 6 — Project Setup

7 to 13 — Building the Chat

14 to 15 — Deploy and Monitor the App


1. Create a New GitHub Repository

Go to GitHub and create a new repository.


2. Clone the Repository Locally

→ Run this command in your terminal to clone it:

git clone <your-repository-url>

3. Set Up a Virtual Environment (optional)

A virtual environment is like a separate space on your computer where you can install a specific version of Python and libraries without affecting the rest of your system. This is useful because different projects might need different versions of the same libraries.

→ To create a virtual environment:

pyenv virtualenv 3.9.14 chat-streamlit-tutorial

→ To activate it:

pyenv activate chat-streamlit-tutorial
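If you prefer not to use pyenv, Python’s built-in venv module is an alternative (a hypothetical equivalent setup; adjust the Python version and paths to your system):

python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate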

4. Project Structure

A project structure is just a way to organize all the files and folders for your project. Ours will look like this:

chat-streamlit-tutorial/
│
├── .env
├── .gitignore
├── app.py
├── functions.py
├── requirements.txt
└── README.md
  • .env → file where you store your API key (not pushed to GitHub)
  • .gitignore → file where you list the files or folders for git to ignore
  • app.py → main Streamlit app
  • functions.py → custom functions to keep the code organized
  • requirements.txt → list of libraries your project needs
  • README.md → file that explains what your project is about

→ Run this inside your project folder to create these files:

touch .env .gitignore app.py functions.py requirements.txt

→ Inside the .gitignore file, add:

.env
__pycache__/

→ Add this to requirements.txt:

streamlit
google-generativeai
python-dotenv

→ Install the dependencies:

pip install -r requirements.txt

5. Get an API Key

An API key is like a password that tells a service you have permission to use it. In this project, we’ll use the Gemini API because it has a free tier, so you can play around with it without spending money.

Don’t set up billing if you just want to use the free tier. It should say “Free” under “Plan”, just like here:

Image by the author

We’ll use gemini-2.0-flash in this project. It offers a free tier, as you can see in the table below:

Screenshot by the author from https://aistudio.google.com/plan_information
  • 15 RPM = 15 requests per minute
  • 1,000,000 TPM = 1,000,000 tokens per minute
  • 1,500 RPD = 1,500 requests per day

Note: these limits are accurate as of April 2025 and may change over time.

Just a heads up: if you’re using the free tier, Google may use your prompts to improve its products, including through human review, so it’s not recommended to send sensitive information. If you want to read more about this, check this link.


6. Store Your API Key

We’ll store our API key inside a .env file. A .env file is a simple text file where you keep secret information, so you don’t write it directly in your code. We don’t want it going to GitHub, so we have to add it to our .gitignore file, which determines which files git should ignore when you push your changes to the repository. I already mentioned this in part 4, “Project Structure”, but just in case you missed it, I’m repeating it here.

This step is really important, don’t skip it!
→ Add this to .gitignore:

.env
__pycache__/

→ Add the API key to .env:

API_KEY="your-api-key"

If you’re running locally, .env works fine. However, if you’re deploying to Streamlit later, you’ll have to use st.secrets. Here I’ve included code that works in both scenarios.

→ Add this function to your functions.py:

import streamlit as st
import os
from dotenv import load_dotenv

def get_secret(key):
    """
    Get a secret from Streamlit or fallback to .env for native improvement.

    This enables the app to run each on Streamlit Cloud and regionally.
    """
    attempt:
        return st.secrets and techniques[key]
    besides Exception:
        load_dotenv()
        return os.getenv(key)

→ Add this to your app.py:

import streamlit as st
import google.generativeai as genai
from functions import get_secret

api_key = get_secret("API_KEY")

7. Choose the Model

I chose gemini-2.0-flash for this project because I think it’s a great model with a generous free tier. However, you can explore other models that also offer free tiers and pick your preferred one.

Screenshot by the author from https://aistudio.google.com/plan_information
  • Pro: models designed for high-quality outputs, including reasoning and creativity. Typically used for complex tasks, problem-solving, and content generation. They’re fully multimodal, meaning they can process text, image, video, and audio for both input and output.
  • Flash: models designed for speed and cost efficiency. They can give lower-quality answers than Pro on complex tasks. Typically used for chatbots, assistants, and real-time applications like automatic word completion. They’re multimodal for input; output is currently text only, with other options in development.
  • Lite: even faster and cheaper than Flash, but with some reduced capabilities, such as being multimodal only for input, with text-only output. Its main appeal is that it’s more economical than Flash, making it great for generating large amounts of text within cost constraints.

This link has plenty of details about the models and their differences.
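If you want to check which models your API key can access, the google-generativeai SDK has a list_models helper. A minimal sketch, assuming your key is stored as in part 6:

import google.generativeai as genai
from functions import get_secret

genai.configure(api_key=get_secret("API_KEY"))

# Print every model that supports the generate_content method
for m in genai.list_models():
    if "generateContent" in m.supported_generation_methods:
        print(m.name)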

Here we set up the model. Just replace “gemini-2.0-flash” with the model you’ve chosen.

→ Add this to your app.py:

genai.configure(api_key=api_key)
model = genai.GenerativeModel("gemini-2.0-flash")

8. Build the Chat

First, let’s go over the key concepts we’ll use:

  • st.session_state: this works like a memory for your app. Streamlit reruns your script from top to bottom every time something changes (when you send a message or click a button), so normally all the variables would be reset. st.session_state lets Streamlit remember values between reruns. However, if you refresh the web page, you’ll lose the session state. See the small demo after this list.
  • st.chat_message(name, avatar): creates a chat bubble for a message in the interface. The first parameter is the name of the message author, which can be “user”, “human”, “assistant”, “ai”, or any str. If you use user/human or assistant/ai, you get default user and bot icon avatars. You can change these if you want; check the documentation for more details.
  • st.chat_input(placeholder): displays an input box at the bottom for the user to type messages. It has many parameters, so I recommend checking the documentation.
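Here’s a tiny standalone demo of st.session_state (hypothetical, not part of our app): the counter survives the reruns triggered by the button clicks, but a page refresh starts a new session and resets it.

import streamlit as st

# Initialize the value once per session
if "count" not in st.session_state:
    st.session_state.count = 0

# Each click reruns the script, but the count is remembered
if st.button("Increment"):
    st.session_state.count += 1

st.write("Count:", st.session_state.count)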

First, I’ll explain each part of the code separately, and afterwards I’ll show you the whole code together.

This initial step initializes your session_state, the app’s “memory”, to keep all the messages within one session.

if "chat_history" not in st.session_state:
    st.session_state.chat_history = []

Next, we’ll set the first default message. This is optional, but I like to add it. You could include some initial instructions if that suits your context. Whenever Streamlit runs the page and st.session_state.chat_history is empty, it appends this message to the history with the role “assistant”.

if not st.session_state.chat_history:
    st.session_state.chat_history.append(("assistant", "Hello! How can I enable you?"))

In my app bordAI, I added this initial message giving context and instructions:

Image by the author

For the user part, the first line creates the input box. If user_message contains content, we write it to the interface and then append it to chat_history:

user_message = st.chat_input("Type your message...")

if user_message:
    st.chat_message("user").write(user_message)
    st.session_state.chat_history.append(("user", user_message))

Now let’s add the assistant part:

  • system_prompt is the prompt sent to the model. You could just send the user_message instead of full_input (look at the code below), but the output would be less precise. A prompt provides context and instructions about how you want the model to behave, not just what you want it to answer. A good prompt makes the model’s response more accurate, consistent, and aligned with your goals. In addition, without telling the model how it should behave, it’s vulnerable to prompt injection.

Prompt injection is when someone tries to manipulate the model’s prompt in order to alter its behavior. One way to mitigate this is to structure prompts clearly and delimit the user’s message within triple quotes.

We’ll start with a simple, vague system_prompt, and in the next section we’ll improve it to compare the difference.

  • full_input: here we organize the input, delimiting the user message with triple quotes ("""). This doesn’t prevent all prompt injections, but it’s one way to create better and more reliable interactions.
  • response: sends a request to the API, storing the output in response.
  • assistant_reply: extracts the text from the response.

Finally, we use st.chat_message() combined with write() to display the assistant’s reply and append it to st.session_state.chat_history, just like we did with the user.

if user_message:
    st.chat_message("user").write(user_message)
    st.session_state.chat_history.append(("user", user_message))

    system_prompt = f"""
    You are an assistant.
    Be nice and kind in all your responses.
    """
    full_input = f'{system_prompt}\n\nUser message:\n"""{user_message}"""'

    response = model.generate_content(full_input)
    assistant_reply = response.text

    st.chat_message("assistant").write(assistant_reply)
    st.session_state.chat_history.append(("assistant", assistant_reply))

Now let’s see everything together!

→ Add this to your app.py:

import streamlit as st
import google.generativeai as genai
from functions import get_secret

api_key = get_secret("API_KEY")
genai.configure(api_key=api_key)
model = genai.GenerativeModel("gemini-2.0-flash")

if "chat_history" not in st.session_state:
    st.session_state.chat_history = []

if not st.session_state.chat_history:
    st.session_state.chat_history.append(("assistant", "Hi! How can I help you?"))

user_message = st.chat_input("Type your message...")

if user_message:
    st.chat_message("user").write(user_message)
    st.session_state.chat_history.append(("user", user_message))

    system_prompt = f"""
    You are an assistant.
    Be nice and kind in all your responses.
    """
    full_input = f'{system_prompt}\n\nUser message:\n"""{user_message}"""'

    response = model.generate_content(full_input)
    assistant_reply = response.text

    st.chat_message("assistant").write(assistant_reply)
    st.session_state.chat_history.append(("assistant", assistant_reply))

To run and test your app locally, first navigate to the project folder, then run the following commands.

→ Run in your terminal:

cd chat-streamlit-tutorial
streamlit run app.py

Yay! You now have a chat running in Streamlit!


9. Prompt Engineering

Prompt engineering is the practice of writing instructions to get the best possible output from an AI model.

There are many techniques for prompt engineering. Here are five tips:

  1. Write clear and specific instructions.
  2. Define a role, expected behavior, and rules for the assistant.
  3. Give the right amount of context.
  4. Use delimiters to indicate user input (as I explained in part 8).
  5. Ask for the output in a specified format.

These tips can be applied to the system_prompt or when you’re writing a prompt to interact with the chat assistant.
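For instance, tip 5, asking for a specified format, could look like this. This is an illustrative prompt of my own, not the one we’ll use below:

# Hypothetical system prompt that pins down the output format (tip 5)
format_prompt = """
You are a programming tutor.
Answer in at most three short paragraphs.
End every answer with a bullet list titled "Key takeaways".
"""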

Our current system prompt is:

system_prompt = f"""
You're an assistant.
Be good and sort in all of your responses.
"""

It’s super vague and provides no guidance to the model:

  • No clear direction for the assistant or what kind of help it should provide
  • No specification of the role or the topic of the assistance
  • No guidelines for structuring the output
  • No context on whether it should be technical or casual
  • Lack of boundaries

We can improve our prompt based on the tips above. Here’s an example.

→ Replace the system_prompt in app.py:

system_prompt = f"""
You're a pleasant and a programming tutor.
At all times clarify ideas in a easy and clear method, utilizing examples when attainable.
If the consumer asks one thing unrelated to programming, politely carry the dialog again to programming matters.
"""
full_input = f"{system_prompt}nnUser message:n"""{user_message}""""

If we ask the old prompt “What is Python?”, it just gives a generic short answer:

Image by the author

With the new prompt, it provides a more detailed response with examples:

Image by the author
Image by the author

Try changing the system_prompt yourself to see the difference in the model outputs and craft the best prompt for your context!


10. Choose Generate Content Parameters

There are many parameters you can configure when generating content. Here I’ll demonstrate how temperature and maxOutputTokens work. Check the documentation for more details.

  • temperature: controls the randomness of the output, ranging from 0 to 2. The default is 1. Lower values produce more deterministic outputs, while higher values produce more creative ones.
  • maxOutputTokens: the maximum number of tokens that can be generated in the output. A token is roughly 4 characters. See the token-counting snippet below.
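If you’d rather measure than rely on the 4-characters rule of thumb, the SDK exposes a count_tokens method on the model. A small sketch (the prompt string is just an example):

# Count the tokens a prompt would consume before sending it
token_info = model.count_tokens("Explain Python decorators with an example.")
print(token_info.total_tokens)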

To change the temperature dynamically and test it, you can create a sidebar slider to control this parameter.

→ Add this to app.py:

temperature = st.sidebar.slider(
    label="Select the temperature",
    min_value=0.0,
    max_value=2.0,
    value=1.0
)

→ Change the response variable to:

response = model.generate_content(
    full_input,
    generation_config={
        "temperature": temperature,
        "max_output_tokens": 1000
    }
)

The sidebar will look like this:

Image by the author

Try adjusting the temperature to see how the output changes!


11. Display Chat History

This step ensures you keep track of all the messages exchanged in the chat, so you can see the chat history. Without it, you’d only see the latest messages from the assistant and the user each time you send something.

This code accesses everything appended to chat_history and displays it in the interface.

→ Add this before the if user_message block in app.py:

for role, message in st.session_state.chat_history:
    st.chat_message(role).write(message)

Now all the messages within one session stay visible in the interface:

Image by the author

Note: I tried asking a non-programming question, and the assistant steered the conversation back to programming. Our prompt is working!


12. Chat with Memory

Despite having messages stored in chat_history, our model isn’t aware of the context of our conversation. It’s stateless: each request is independent.

Image by the author

To solve this, we have to pass all this context within our prompt so the model can reference the messages exchanged earlier.

We create context, a list containing all the messages exchanged up to that moment, appending the most recent user message at the end so it doesn’t get lost in the context.

system_prompt = f"""
You're a pleasant and educated programming tutor.
At all times clarify ideas in a easy and clear method, utilizing examples when attainable.
If the consumer asks one thing unrelated to programming, politely carry the dialog again to programming matters.
"""
full_input = f"{system_prompt}nnUser message:n"""{user_message}""""

context = [
    *[
        {"role": role, "parts": [{"text": msg}]} for function, msg in st.session_state.chat_history
    ],
    {"function": "consumer", "elements": [{"text": full_input}]}
]

response = model.generate_content(
    context,
    generation_config={
        "temperature": temperature,
        "max_output_tokens": 1000
    }
)

Now, I told the assistant that I was working on a project to analyze weather data. Then I asked what the theme of my project was, and it correctly answered “weather data analysis”, since it now has the context of the previous messages.

Image by the author

If your context gets too long, you can consider trimming or summarizing it to save costs, since the more tokens you send to the API, the more you’ll pay. See the sketch below.
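A minimal sketch of that idea, here just keeping the most recent messages; MAX_TURNS is a made-up cap you’d tune for your app, and a fancier version could ask the model to summarize the older messages instead of dropping them:

MAX_TURNS = 20  # hypothetical cap on how many past messages to send

def trim_history(history, max_turns=MAX_TURNS):
    """Keep only the most recent messages so the prompt stays small."""
    return history[-max_turns:]

# Build the context from the trimmed history instead:
#   ... for role, msg in trim_history(st.session_state.chat_history)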


13. Create a Reset Button (optional)

I like adding a reset button in case something goes wrong or the user just wants to clear the conversation.

You just need to create a function that sets chat_history to an empty list. If you created other session state keys, you should reset them here as False or empty, too.

→ Add this to functions.py:

def reset_chat():
    """
    Reset the Streamlit chat session state.
    """
    st.session_state.chat_history = []
    st.session_state.example = False  # Add other session state keys here if needed

→ And if you want it in the sidebar, add this to app.py:

from functions import get_secret, reset_chat

if st.sidebar.button("Reset chat"):
    reset_chat()

It will look like this:

Image by the author

Everything together:

import streamlit as st
import google.generativeai as genai
from functions import get_secret, reset_chat

api_key = get_secret("API_KEY")
genai.configure(api_key=api_key)
model = genai.GenerativeModel("gemini-2.0-flash")

temperature = st.sidebar.slider(
    label="Select the temperature",
    min_value=0.0,
    max_value=2.0,
    value=1.0
)

if st.sidebar.button("Reset chat"):
    reset_chat()

if "chat_history" not in st.session_state:
    st.session_state.chat_history = []

if not st.session_state.chat_history:
    st.session_state.chat_history.append(("assistant", "Hi! How can I help you?"))

for role, message in st.session_state.chat_history:
    st.chat_message(role).write(message)

user_message = st.chat_input("Type your message...")

if user_message:
    st.chat_message("user").write(user_message)
    st.session_state.chat_history.append(("user", user_message))

    system_prompt = f"""
    You are a friendly and knowledgeable programming tutor.
    Always explain concepts in a simple and clear way, using examples when possible.
    If the user asks something unrelated to programming, politely bring the conversation back to programming topics.
    """
    full_input = f'{system_prompt}\n\nUser message:\n"""{user_message}"""'

    context = [
        *[
            # Gemini's API only accepts the roles "user" and "model",
            # so map our stored "assistant" entries to "model"
            {"role": "model" if role == "assistant" else "user", "parts": [{"text": msg}]}
            for role, msg in st.session_state.chat_history
        ],
        {"role": "user", "parts": [{"text": full_input}]}
    ]

    response = model.generate_content(
        context,
        generation_config={
            "temperature": temperature,
            "max_output_tokens": 1000
        }
    )
    assistant_reply = response.text

    st.chat_message("assistant").write(assistant_reply)
    st.session_state.chat_history.append(("assistant", assistant_reply))

14. Deploy

If your repository is public, you can deploy with Streamlit for free.

MAKE SURE YOU DO NOT HAVE API KEYS ON YOUR PUBLIC REPOSITORY.

First, save and push your code to the repository.

→ Run in your terminal:

git add .
git commit -m "tutorial chat streamlit"
git push origin main

Pushing directly to main isn’t best practice, but since this is just a simple tutorial, we’ll do it for convenience.

  1. Go to your Streamlit app running locally.
  2. Click “Deploy” at the top right.
  3. In Streamlit Community Cloud, click “Deploy now”.
  4. Fill out the information.
Image by the author

5. Click “Advanced settings” and write API_KEY="your-api-key", just like you did in the .env file.

6. Click “Deploy”.

All done! If you’d like, check out my app here! 🎉


15. Monitor API Usage on the Google Cloud Console

The last part of this post shows you how to monitor API usage on the Google Cloud Console. This is important if you deploy your app publicly, so you don’t get any surprises.

  1. Access the Google Cloud Console.
  2. Go to “APIs and services”.
  3. Click “Generative Language API”.
Image by the author
  • Requests: how many times your API was called. In our case, the API is called each time we run model.generate_content(context).
  • Error (%): the percentage of requests that failed. 4xx errors are usually the client’s/requester’s fault; for instance, 400 means bad input, and 429 means you’re hitting the API too often. 5xx errors are usually the server’s fault and are less frequent; Google often retries internally or recommends retrying after a few seconds, e.g. 500 for Internal Server Error and 503 for Service Unavailable. See the retry sketch after this list.
  • Latency, median (ms): how long (in milliseconds) your service takes to respond at the 50th percentile, meaning half the requests are faster and half are slower. It’s a good general measure of your service’s speed, answering the question, “How fast is it typically?”.
  • Latency, 95% (ms): the response time at the 95th percentile, meaning 95% of requests are faster than this and only 5% are slower. It helps you see how your system behaves under heavy load or in slower cases, answering the question, “How bad is it getting for some users?”.
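For those transient 429/5xx errors, a simple client-side retry with exponential backoff can help. A minimal sketch; the retry count and delays are illustrative, not official recommendations:

import time

def generate_with_retry(model, contents, retries=3, base_delay=2.0):
    """Call generate_content, retrying transient failures with backoff."""
    for attempt in range(retries):
        try:
            return model.generate_content(contents)
        except Exception:
            # In practice, narrow this to the SDK's rate-limit/server errors
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))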

A quick example of the difference between median latency and p95 latency.
Imagine your service usually responds in 200 ms:

  • Median latency = 200 ms (good!)
  • p95 latency = 220 ms (also good)

Now under heavy load:

  • Median latency = 220 ms (still looks OK)
  • p95 latency = 1200 ms (not good)

The p95 metric shows that 5% of your users are waiting more than 1.2 seconds, a much worse experience. If we looked only at the median, we’d think everything was fine, but p95 reveals hidden problems.

Continuing on the “Metrics” page, you’ll find graphs and, at the bottom, the methods called by the API. Also, under “Quotas & System Limits”, you can monitor API usage against the free tier limit.

Image by the author

Click “Show usage chart” to compare usage day by day.

Image by the author

I hope you enjoyed this tutorial.

You can find all the code for this project on my GitHub.

I’d love to hear your thoughts! Let me know in the comments what you think.

Follow me on: