Enhanced Media Experience with AI-Powered Commercial Detection and Replacement

Hello mentors,
Could you please review my intro video? After your feedback, I will make it public.
Video Link

Week 0-1 Blog Link - Dataset Collection and Feature Extraction


Hi @FredEckert
I have received these items for the project:

  • BeagleBone AI-64
  • BeagleBone AI-64 UART cable
  • 1x HDMI to USB
  • 1x HDMI to CSI
  • 2x HDMI to HDMI
  • 1x 6pin FTDI cable
  • 1x FTDI UART adapter
  • 1x Logic Analyzer
  • 1x Active miniDP to HDMI

Hope this helps,
Thanks

Hi @Aryan_Nanda, Thank you for this information.

Hi @lorforlinux,

Would it be possible to share the manufacturer and part numbers for these items? If I decide to purchase, I want to get the correct item(s) to ensure compatibility.

EDIT: I really only need the info for:

  • HDMI to USB
  • HDMI to CSI

Thanks,
Fred Eckert


I have received these:

  • HDMI to USB video capture card Link
  • HDMI to CSI converter Link

Could you please verify this, @lorforlinux?

@Aryan_Nanda @FredEckert Yes, those links are correct. The piBox HDMI to USB capture card also works well. The HDMI to CSI converter captures I2S audio in LPCM format, but I have not tested it with the BeagleBone AI-64. Both support 1080p at 30 fps, though the HDMI to CSI converter is much more expensive than the HDMI to USB one.


Week 2-3 Blog Link - Dataset Preprocessing

I’m new to blogging, so any suggestions are welcome!

Any hopes of a BeagleY-AI real-time implementation?

For BeagleY-AI, we can first optimize the current pipeline without changing its high-level structure:

  • TI already provides a pre-trained, quantized InceptionV3 model for TIDL.
    This model could be used for feature extraction instead of running InceptionV3 in TensorFlow/Keras on the CPU.

  • The classifier can also be moved to TFLite with TIDL delegation to avoid FP32 CPU inference.
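To illustrate why moving off FP32 CPU inference helps, here is a minimal sketch of the affine (scale/zero-point) int8 quantization scheme that TFLite uses for quantized models. The scale and zero-point values below are made-up examples for illustration, not taken from any actual model.

```python
# Affine int8 quantization as used by TFLite: q = round(x / scale) + zero_point.
# The scale and zero_point values here are illustrative assumptions.

def quantize(x, scale, zero_point):
    """Map a float to an int8 value, clamping to the representable range."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """Recover an approximate float from the int8 value."""
    return scale * (q - zero_point)

scale, zero_point = 0.05, 0
x = 1.23
q = quantize(x, scale, zero_point)       # int8 representation
x_hat = dequantize(q, scale, zero_point) # approximates x to within ~scale/2
```

Running the whole graph in int8 like this is what lets TIDL keep the model on the accelerator instead of falling back to FP32 math on the CPU.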

On the video side, the OpenCV-based pipeline can be migrated to GStreamer:

  • v4l2src or rtspsrc for input

  • appsink for passing frames to inference

  • appsrc for pushing frames back

  • kmssink for display to reduce copies and avoid X11

I have used these GStreamer elements with the Edge AI SDK on BeagleY-AI, but I am not sure whether the same plugin compatibility and performance are available on the Debian image.
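As a concrete sketch of that migration, the two halves could be expressed as parse-launch descriptions. The device path, caps, and element properties below are assumptions; the exact values depend on the board and capture device.

```python
# Hedged sketch: pipeline descriptions one might hand to Gst.parse_launch().
# Device path, caps, and appsink/appsrc properties are illustrative assumptions.

CAPTURE_PIPELINE = (
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,width=1920,height=1080,framerate=30/1 ! "
    "videoconvert ! "
    "appsink name=frames emit-signals=true max-buffers=2 drop=true"
)

DISPLAY_PIPELINE = (
    "appsrc name=out format=time is-live=true ! "
    "videoconvert ! "
    "kmssink"  # direct to display, no X11 involved
)

def build_pipelines():
    """Return the (capture, display) pipeline descriptions as strings."""
    return CAPTURE_PIPELINE, DISPLAY_PIPELINE
```

With PyGObject, these strings would be passed to Gst.parse_launch(), with frames pulled from the appsink and pushed back through the appsrc.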

The chunk-based processing can still be kept, with frame capture and display handled through GStreamer instead of OpenCV.
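The chunk-based flow itself is independent of the capture backend. A minimal sketch of the idea, where the chunk size and the classifier are placeholders for the real feature-extraction and classification stages:

```python
# Group an incoming frame stream into fixed-size chunks and classify each chunk.
# CHUNK_SIZE and classify_chunk are placeholders, not the project's actual values.

CHUNK_SIZE = 30  # illustrative: ~1 second of video at 30 fps

def chunks(frames, size=CHUNK_SIZE):
    """Yield consecutive fixed-size chunks of the frame stream (last may be short)."""
    for i in range(0, len(frames), size):
        yield frames[i:i + size]

def classify_chunk(chunk):
    """Stub standing in for feature extraction + the trained classifier."""
    return "commercial" if sum(chunk) % 2 else "content"

frames = list(range(75))  # stand-in for decoded frames
labels = [classify_chunk(c) for c in chunks(frames)]
```

Whether frames arrive from OpenCV or from a GStreamer appsink, only the producer of `frames` changes; the chunking and classification stages stay the same.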
