
I’m a huge fan of interactive visualizations. As a computer vision engineer, I deal almost daily with image processing related tasks and more often than not I’m iterating on a problem where I need visual feedback to make decisions. Let’s consider a very simple image processing pipeline with a single step that has some parameters to transform an image:

How do you know which parameters to adjust? Does the pipeline even work as expected? Without visualizing your output, you might miss out on some key insights and make suboptimal choices.
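To make this concrete, here is a minimal sketch of such a one-step pipeline. The Gaussian blur step, its two parameters, and the input.jpg path are illustrative assumptions on my part, not part of the project we build below.

import cv2

def process(image, ksize: int = 15, sigma: float = 0.0):
    # A single pipeline step with two tunable parameters: the kernel size and
    # sigma are exactly the kind of knobs you want visual feedback on while iterating.
    return cv2.GaussianBlur(image, ksize=(ksize, ksize), sigmaX=sigma)

result = process(cv2.imread("input.jpg"), ksize=15, sigma=2.0)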
Sometimes simply displaying the output image and/or some calculated metrics can be enough to iterate on the parameters. But I’ve found myself in many situations where a tool would be immensely helpful to iterate quickly and interactively on my pipeline. So in this article I’ll show you how to work with the simple built-in interactive elements from OpenCV, as well as how to build more modern user interfaces for Computer Vision projects using customtkinter.
Prerequisites
If you want to follow along, I recommend you set up your local environment with uv and install the following packages:
uv add numpy opencv-python pillow customtkinter
Goal
Before we dive into the code of the project, let’s quickly outline what we want to build. The application should use the webcam feed and allow the user to select different types of filters that will be applied to the stream. The processed image should be shown in real-time in the window. A rough sketch of a possible UI would look as follows:

OpenCV – GUI
Let’s start with a simple loop that fetches frames from your webcam and displays them in an OpenCV window.
import cv2

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    cv2.imshow("Video Feed", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Keyboard Input
The easiest way to add interactivity here is by adding keyboard inputs. For example, we can cycle through different filters with the number keys.
...

filter_type = "normal"

while True:
    ...

    if filter_type == "grayscale":
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    elif filter_type == "normal":
        pass

    ...

    if key == ord('1'):
        filter_type = "normal"
    if key == ord('2'):
        filter_type = "grayscale"

    ...
Now you can switch between the normal image and the grayscale version by pressing the number keys 1 and 2. Let’s also quickly add a caption to the image so we can actually see the name of the filter we’re applying.
We have to be careful here: if you take a look at the shape of the frame after the filter, you’ll notice that the dimensionality of the frame array has changed. Remember that OpenCV image arrays are ordered HWC (height, width, color) with color as BGR (blue, green, red), so the 640×480 image from my webcam has shape (480, 640, 3).
print(filter_type, frame.shape)
# normal (480, 640, 3)
# grayscale (480, 640)
Because the grayscale operation outputs a single channel image, the color dimension is dropped. If we now want to draw on top of this image, we either need to specify a single channel color for the grayscale image or we convert that image back to the original BGR format. The second option is a bit cleaner because we can unify the annotation of the image.
if filter_type == "grayscale":
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
elif filter_type == "normal":
    pass

if len(frame.shape) == 2:  # Convert grayscale to BGR
    frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
Caption
I want to add a black border at the bottom of the image, on top of which the name of the filter will be shown. We can make use of the copyMakeBorder function to pad the image with a border color at the bottom. Then we can add the text on top of this border.
# Add a black border at the bottom of the frame
border_height = 50
border_color = (0, 0, 0)
frame = cv2.copyMakeBorder(frame, 0, border_height, 0, 0, cv2.BORDER_CONSTANT, value=border_color)

# Show the filter name
cv2.putText(
    frame,
    filter_type,
    (frame.shape[1] // 2 - 50, frame.shape[0] - border_height // 2 + 10),
    cv2.FONT_HERSHEY_SIMPLEX,
    1,
    (255, 255, 255),
    2,
    cv2.LINE_AA,
)
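The x offset of -50 in the call above is only a rough centering hack. If you want the caption centered exactly, cv2.getTextSize can measure the rendered text first; a small sketch, not part of the original snippet:

# Measure the text so it can be centered exactly instead of using a fixed offset
(text_width, text_height), _ = cv2.getTextSize(filter_type, cv2.FONT_HERSHEY_SIMPLEX, 1, 2)
text_x = (frame.shape[1] - text_width) // 2
text_y = frame.shape[0] - border_height // 2 + text_height // 2

You would then pass (text_x, text_y) as the origin to cv2.putText.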
This is how the output should look, and you can switch between the normal and grayscale mode; the frames will be captioned accordingly.

Sliders
Instead of using the keyboard as input method, OpenCV also offers a basic trackbar slider UI element. The trackbar needs to be initialized at the beginning of the script. We need to reference the same window that we will be displaying our images in later, so I’ll create a variable for the name of the window. Using this name, we can create the trackbar and let it be a selector for the index in the list of filters.
filter_types = ["normal", "grayscale"]

win_name = "Webcam Stream"
cv2.namedWindow(win_name)
tb_filter = "Filter"

# def createTrackbar(trackbarName: str, windowName: str, value: int, count: int, onChange: _typing.Callable[[int], None]) -> None: ...
cv2.createTrackbar(
    tb_filter,
    win_name,
    0,
    len(filter_types) - 1,
    lambda _: None,
)
Notice how we use an empty lambda for the onChange callback; we’ll fetch the value manually in the loop. Everything else stays the same.
while True:
    ...

    # Get the selected filter type
    filter_id = cv2.getTrackbarPos(tb_filter, win_name)
    filter_type = filter_types[filter_id]

    ...
And voilà, we have a trackbar to select our filter.
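As a side note, instead of polling getTrackbarPos every iteration you could also react inside the onChange callback itself. This is a variation of my own, not the approach used in the rest of this article:

selected = {"filter_id": 0}

def on_filter_change(pos: int) -> None:
    # Called by OpenCV whenever the trackbar position changes
    selected["filter_id"] = pos

cv2.createTrackbar(tb_filter, win_name, 0, len(filter_types) - 1, on_filter_change)

Polling keeps the loop self-contained, which is why I stick with the empty lambda here.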

We can now also add more filters simply by extending our list and implementing each processing step.
filter_types = [
    "normal",
    "grayscale",
    "blur",
    "threshold",
    "canny",
    "sobel",
    "laplacian",
]

...

if filter_type == "grayscale":
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
elif filter_type == "blur":
    frame = cv2.GaussianBlur(frame, ksize=(15, 15), sigmaX=0)
elif filter_type == "threshold":
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, frame = cv2.threshold(gray, thresh=127, maxval=255, type=cv2.THRESH_BINARY)
elif filter_type == "canny":
    frame = cv2.Canny(frame, threshold1=100, threshold2=200)
elif filter_type == "sobel":
    frame = cv2.Sobel(frame, ddepth=cv2.CV_64F, dx=1, dy=0, ksize=5)
elif filter_type == "laplacian":
    frame = cv2.Laplacian(frame, ddepth=cv2.CV_64F)
elif filter_type == "normal":
    pass

if frame.dtype != np.uint8:
    # Scale the frame to uint8 if necessary
    cv2.normalize(frame, frame, 0, 255, cv2.NORM_MINMAX)
    frame = frame.astype(np.uint8)

Modern GUI with CustomTkinter
Now I don’t know about you, but the current user interface doesn’t look very modern to me. Don’t get me wrong, there is some beauty in the style of the interface, but I prefer cleaner, more modern designs. Plus we’re already at the limit of what OpenCV offers out of the box in terms of UI elements. Yep, no buttons, text fields, dropdowns, checkboxes or radio buttons, and no custom layouts. So let’s see how we can transform the look and user experience of this basic application into a fresh and clean one.

To get started, we first need to create a class for our app. We create two frames: the first one contains our filter selection on the left side and the second wraps the image display. For now, let’s start with a simple placeholder text. Unfortunately there’s no out of the box OpenCV component from customtkinter directly, so we will need to quickly build our own in the next few steps. But let’s first finish the basic UI layout.
import customtkinter


class App(customtkinter.CTk):
    def __init__(self) -> None:
        super().__init__()

        self.title("Webcam Stream")
        self.geometry("800x600")

        self.filter_var = customtkinter.IntVar(value=0)

        # Frame for filters
        self.filters_frame = customtkinter.CTkFrame(self)
        self.filters_frame.pack(side="left", fill="both", expand=False, padx=10, pady=10)

        # Frame for image display
        self.image_frame = customtkinter.CTkFrame(self)
        self.image_frame.pack(side="right", fill="both", expand=True, padx=10, pady=10)

        self.image_display = customtkinter.CTkLabel(self.image_frame, text="Loading...")
        self.image_display.pack(fill="both", expand=True, padx=10, pady=10)


app = App()
app.mainloop()
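Optionally, customtkinter also lets you set a global appearance mode and color theme before instantiating the app. This is not required for the rest of the example, just a styling option:

# Optional global styling, set before creating the App instance
customtkinter.set_appearance_mode("dark")        # "light", "dark" or "system"
customtkinter.set_default_color_theme("blue")    # built-in themes: "blue", "green", "dark-blue"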

Filter Radio Buttons
Now that the skeleton is built, we can start filling in our components. For the left side, I will be using the same list of filter_types to populate a group of radio buttons to select the filter.
# Create radio buttons for each filter type
self.filter_var = customtkinter.IntVar(value=0)
for filter_id, filter_type in enumerate(filter_types):
    rb_filter = customtkinter.CTkRadioButton(
        self.filters_frame,
        text=filter_type.capitalize(),
        variable=self.filter_var,
        value=filter_id,
    )
    rb_filter.pack(padx=10, pady=10)
    if filter_id == 0:
        rb_filter.select()

Image Display Component
Now we can get started on the interesting part: how to get our OpenCV frames to show up in the image component. Because there’s no built-in component, let’s create our own based on the CTkLabel. This allows us to display a loading text while the webcam stream is starting up.
...

class CTkImageDisplay(customtkinter.CTkLabel):
    """
    A reusable ctk widget to display opencv images.
    """

    def __init__(
        self,
        master: Any,
    ) -> None:
        self._textvariable = customtkinter.StringVar(master, "Loading...")
        super().__init__(
            master,
            textvariable=self._textvariable,
            image=None,
        )

...

class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...

        self.image_display = CTkImageDisplay(self.image_frame)
        self.image_display.pack(fill="both", expand=True, padx=10, pady=10)
So far nothing has changed, we simply swapped out the existing label with our custom class implementation. In our CTkImageDisplay class we can define a function to show an image in the component, let’s call it set_frame.
import cv2
import numpy.typing as npt
from PIL import Image


class CTkImageDisplay(customtkinter.CTkLabel):
    ...

    def set_frame(self, frame: npt.NDArray) -> None:
        """
        Set the frame to be displayed in the widget.

        Args:
            frame: The new frame to display, in opencv format (BGR).
        """
        target_width, target_height = frame.shape[1], frame.shape[0]

        # Convert the frame to PIL Image format
        frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frame_pil = Image.fromarray(frame_rgb, "RGB")

        ctk_image = customtkinter.CTkImage(
            light_image=frame_pil,
            dark_image=frame_pil,
            size=(target_width, target_height),
        )

        self.configure(image=ctk_image, text="")
        self._textvariable.set("")
Let’s digest this. First we need to know how big our image component will be; we can extract that information from the shape property of our image array. To display the image in tkinter, we need a Pillow Image type, we cannot directly use the OpenCV array. To convert an OpenCV array to Pillow, we first need to convert the color space from BGR to RGB and then we can use the Image.fromarray function to create the Pillow Image object. Next we can create a CTkImage, where we use the same image no matter the theme and set the size according to our frame. Finally we can use the configure method to set the image in our frame. At the end, we also reset the text variable to remove the “Loading…” text, even though it would theoretically be hidden behind the image.
To quickly test this, we can set the first image of our webcam in the constructor. (We’ll see in a second why this isn’t such a good idea.)
class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...

        cap = cv2.VideoCapture(0)
        _, frame0 = cap.read()
        self.image_display.set_frame(frame0)
If you run this, you’ll notice that the window takes a bit longer to pop up, but after a short delay you should see a static image from your webcam.
NOTE: If you don’t have a webcam ready, you can also just use a local video file by passing the file path to the cv2.VideoCapture constructor call.
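For example, with a placeholder file name:

# Read frames from a local video file instead of the webcam (placeholder path)
cap = cv2.VideoCapture("path/to/video.mp4")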

Now this isn’t very exciting, since the frame doesn’t update yet. So let’s see what happens if we try to do that naively.
class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...

        cap = cv2.VideoCapture(0)
        while True:
            ret, frame = cap.read()
            if not ret:
                break

            self.image_display.set_frame(frame)
Almost the same as before, except now we run the frame loop as we did in the previous chapter with the OpenCV GUI. If you run this, you will see… exactly nothing. The window never shows up, since we’re creating an infinite loop in the constructor of the app! This is also the reason why the program only showed up after a delay in the previous example: the opening of the webcam stream is a blocking operation, and the event loop for the window cannot run, so it doesn’t show up yet.
So let’s fix this with a slightly better implementation that allows the GUI event loop to run while we also update the frame every now and then. We can use the after method of tkinter to schedule a function call while yielding the process during the wait time.
class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...

        self.cap = cv2.VideoCapture(0)
        self.after(10, self.update_frame)

    def update_frame(self) -> None:
        """
        Update the displayed frame.
        """
        ret, frame = self.cap.read()
        if not ret:
            return

        self.image_display.set_frame(frame)
        self.after(10, self.update_frame)
We still set up the webcam stream in the constructor, so we haven’t solved that problem yet. But at least we can see a continuous stream of frames in our image component.

Applying Filters
Now that the frame loop is running, we can re-implement our filters from the beginning and apply them to our webcam stream. In the update_frame function, we can check the current filter variable and apply the corresponding filter function.
def update_frame(self) -> None:
    ...

    # Get the selected filter type
    filter_id = self.filter_var.get()
    filter_type = filter_types[filter_id]

    if filter_type == "grayscale":
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    elif filter_type == "blur":
        frame = cv2.GaussianBlur(frame, ksize=(15, 15), sigmaX=0)
    elif filter_type == "threshold":
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, frame = cv2.threshold(gray, thresh=127, maxval=255, type=cv2.THRESH_BINARY)
    elif filter_type == "canny":
        frame = cv2.Canny(frame, threshold1=100, threshold2=200)
    elif filter_type == "sobel":
        frame = cv2.Sobel(frame, ddepth=cv2.CV_64F, dx=1, dy=0, ksize=5)
    elif filter_type == "laplacian":
        frame = cv2.Laplacian(frame, ddepth=cv2.CV_64F)
    elif filter_type == "normal":
        pass

    if frame.dtype != np.uint8:
        # Scale the frame to uint8 if necessary
        cv2.normalize(frame, frame, 0, 255, cv2.NORM_MINMAX)
        frame = frame.astype(np.uint8)

    if len(frame.shape) == 2:  # Convert grayscale to BGR
        frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)

    self.image_display.set_frame(frame)
    self.after(10, self.update_frame)
And now we’re back to the full functionality of the application: you can select any filter on the left side and it will be applied in real-time to the webcam feed!

Multithreading and Synchronization
Although the application runs as is, there are some problems with the current way we run our frame loop. Currently everything runs in a single thread, the main GUI thread. This means that, first of all, we don’t immediately see the window pop up because our webcam initialization blocks the main thread. Now imagine we did some heavier image processing, maybe running the images through a neural network: you wouldn’t want your user interface to be blocked while the network is running inference. This would lead to a very unresponsive user experience when clicking the UI elements!

A better way to handle this in our application is to separate the image processing from the user interface. In general, it is almost always a good idea to separate your GUI logic from any kind of non-trivial processing. So in our case, we’ll run a separate thread that’s responsible for the image loop. It will read the frames from the webcam stream and apply the filters.

NOTE: Python threads are not “real” threads in the sense that they don’t have the capability to run on different logical CPU cores and hence won’t really run in parallel. In Python multithreading, the context switches between the threads, but due to the GIL, the global interpreter lock, a single Python process can only run one physical thread at a time. If you want “real” parallel processing, you will need to use multiprocessing. Since our workload here is not CPU bound but actually I/O bound, multithreading suffices.
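For completeness, if your processing were CPU bound, the same producer pattern could live in a separate process instead of a thread. This is a rough sketch of that alternative using multiprocessing on my part, not what we build below; the GUI process would consume from frame_queue just like the threaded version does later.

import multiprocessing as mp

import cv2

def capture_loop(frame_queue) -> None:
    # Runs in its own process, so heavy CPU work here cannot block the GUI process
    cap = cv2.VideoCapture(0)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        frame_queue.put(frame)

if __name__ == "__main__":
    frame_queue = mp.Queue(maxsize=1)
    worker = mp.Process(target=capture_loop, args=(frame_queue,), daemon=True)
    worker.start()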
class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...

        self.webcam_thread = threading.Thread(target=self.run_webcam_loop, daemon=True)
        self.webcam_thread.start()

    def run_webcam_loop(self) -> None:
        """
        Run the webcam loop in a separate thread.
        """
        self.cap = cv2.VideoCapture(0)
        if not self.cap.isOpened():
            return

        while True:
            ret, frame = self.cap.read()
            if not ret:
                break

            # Filters
            ...

            self.image_display.set_frame(frame)
If you run this, you’ll now see that our window opens up immediately and we even see our loading text while the webcam stream is opening up. However, as soon as the stream starts, the frames start to flicker. Depending on several factors, you might experience different visual artifacts or errors at this stage.
Warning: flashing image

Now why is this happening? The problem is that we’re concurrently trying to update the new frame while the internal refresh loop of the user interface might be using the information of the array to draw it on the screen. They’re both competing for the same frame array.
It’s generally not a good idea to directly update UI elements from a different thread; in some frameworks this might even be prevented and will raise exceptions. In Tkinter we can do it, but we’ll get weird results. We need some kind of synchronization between our threads. That’s where the Queue comes into play.

You’re probably familiar with queues from the grocery store or theme parks. The concept of the queue here is very similar: the first element that goes into the queue also leaves first (First In First Out).
In this case, we actually just want a queue with a single element, a single-slot queue. The queue implementation in Python is thread-safe, meaning we can put and get objects from the queue from different threads. Perfect for our use case: the processing thread will put the image arrays into the queue and the GUI thread will try to get an element, but not block if the queue is empty.
class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...

        self.queue = queue.Queue(maxsize=1)

        self.webcam_thread = threading.Thread(target=self.run_webcam_loop, daemon=True)
        self.webcam_thread.start()

        self.frame_loop_dt_ms = 16  # ~60 FPS
        self.after(self.frame_loop_dt_ms, self._update_frame)

    def _update_frame(self) -> None:
        """
        Update the frame in the image display widget.
        """
        try:
            frame = self.queue.get_nowait()
            self.image_display.set_frame(frame)
        except queue.Empty:
            pass

        self.after(self.frame_loop_dt_ms, self._update_frame)

    def run_webcam_loop(self) -> None:
        ...

        while True:
            ...

            self.queue.put(frame)
Notice how we move the direct call to the set_frame function from the webcam loop, which runs in its own thread, to the _update_frame function that’s running on the main thread, repeatedly scheduled in 16ms intervals.
Here it’s important to use the get_nowait function in the main thread; if we used the get function instead, we would block there. This call does not block, but raises a queue.Empty exception if there’s no element to fetch, so we have to catch it and ignore it. In the webcam loop, we can use the blocking put function because it doesn’t matter that we block the run_webcam_loop, there’s nothing else that needs to run there.
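If you instead wanted the producer to never wait, for example because grabbing frames at the full camera rate matters more than displaying every single one, you could drop frames with put_nowait. This is a small variation on the code above, not what the final demo uses:

# Variation: skip the frame if the GUI hasn't consumed the previous one yet
try:
    self.queue.put_nowait(frame)
except queue.Full:
    pass  # the GUI thread is still busy with the last frame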

And now everything is working as expected, no more flashing frames!
Conclusion
Combining a UI framework like Tkinter with OpenCV allows us to build modern looking applications with an interactive graphical user interface. Because the UI runs in the main thread, we run the image processing in a separate thread and synchronize the data between the threads using a single-slot queue. You can find a cleaned up version of this demo with a more modular structure in the repository below. Let me know if you build something interesting with this approach. Take care!
Check out the full source code in the GitHub repo:
https://github.com/trflorian/ctk-opencv