BeagleMind

There’s no need to include code in your proposal. Instead, add design details, support your design with diagrams, and explain why you chose a specific approach over the alternatives.

You can also plan to deliver results in two iterations.

1st iteration → Focus on the BeagleBoard docs site only, using a purely RAG-based approach (see the retrieval sketch after this block).
You can discuss with the community whether they would like an AI assistant embedded in the docs site itself. Example: the assistant on Espressif’s ESP Chip Errata documentation (e.g., the ESP32-C6 Series SoC Errata page).
This will take you 3-4 weeks.
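
For reference, a purely RAG-based flow usually means: chunk and embed the docs once, retrieve the top-k chunks for each question, and pass them to the LLM as context. Here is a minimal sketch; the embedding model, the doc chunks, and the final prompt are all hypothetical placeholders, not a prescribed implementation:

```python
# Minimal RAG retrieval sketch. The model name, doc chunks, and prompt
# are placeholders (assumptions), not part of the actual proposal.
from sentence_transformers import SentenceTransformer
import numpy as np

# 1. Embed the docs once (in practice: chunked pages from docs.beagleboard.org).
embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunks = [
    "BeagleBone AI-64 features a TDA4VM SoC ...",
    "To flash an image, hold the BOOT button ...",
]
doc_vecs = embedder.encode(chunks, normalize_embeddings=True)

# 2. At query time, embed the question and retrieve the closest chunks.
def retrieve(question: str, k: int = 2) -> list[str]:
    q_vec = embedder.encode([question], normalize_embeddings=True)
    scores = doc_vecs @ q_vec.T  # cosine similarity (vectors are normalized)
    top = np.argsort(scores.ravel())[::-1][:k]
    return [chunks[i] for i in top]

# 3. Stuff the retrieved context into the LLM prompt (LLM call omitted).
context = "\n".join(retrieve("How do I flash an image?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
```

The 1st iteration could stop at roughly this shape: no fine-tuning, just retrieval quality and prompt design over the docs site.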

2nd iteration → Implement a fine-tuned model with an in-memory database that includes more repositories, as mentioned by Jason here. However, we won’t be able to leverage the EdgeAI capabilities of the BeagleBoard for this model, as the MMA accelerators and DSPs only support certain layer types, and to my knowledge, transformer layers are not among them.
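
Building on the retrieval sketch above, the “in-memory database” part could be as simple as a FAISS flat index held in RAM over chunks drawn from several repos. A hedged sketch; the repo names and chunk contents are placeholders I made up for illustration:

```python
# Hypothetical in-memory vector index over multiple repos using FAISS.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")
corpus = {
    "docs.beagleboard.org": ["...chunked docs pages..."],
    "beagleboard/linux":    ["...READMEs, device tree notes..."],
}
chunks = [c for repo_chunks in corpus.values() for c in repo_chunks]
vecs = embedder.encode(chunks, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(vecs.shape[1])  # inner product == cosine here
index.add(vecs)  # entire index lives in RAM

q = embedder.encode(["How do I enable a cape overlay?"],
                    normalize_embeddings=True).astype("float32")
scores, ids = index.search(q, 3)  # ids map back into `chunks`
```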

You can try running the model on the CPU only, but you’ll need to evaluate whether time to first token, latency, and throughput are good enough for local execution. If not, hosting the model externally would be the better option.
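
For that CPU-only evaluation, you can measure time to first token (TTFT) and throughput directly with a streaming call. A rough sketch, assuming llama-cpp-python and a local GGUF model path (both assumptions on my part, not part of the proposal):

```python
# Rough CPU benchmark for time-to-first-token (TTFT) and throughput.
# "model.gguf" is a placeholder path to whatever quantized model you test.
import time
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_ctx=2048)  # CPU-only by default

start = time.perf_counter()
first_token_at = None
n_tokens = 0
for chunk in llm("Explain device tree overlays.", max_tokens=128, stream=True):
    if first_token_at is None:
        first_token_at = time.perf_counter()  # first streamed token arrived
    n_tokens += 1
elapsed = time.perf_counter() - start

print(f"TTFT: {first_token_at - start:.2f}s")
print(f"Throughput: {n_tokens / elapsed:.1f} tokens/s")
```

Running this on the target BeagleBoard with a few candidate quantizations should make the local-vs-hosted decision concrete rather than speculative.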

In the end, what you propose is up to you, but it should be supported with reasoning for why you chose it over the other possible approaches.


Hello @Aryan_Nanda, just a small update: I added the GIFs to the repo, providing a clearer perspective on how the CLI should work. Also, yes, I took the points you mentioned into consideration and enhanced my proposal based on them. I also tried to suggest a wider range of possibilities for the idea, how to technically implement it, and how we can eventually consume the hosted LLM. I hope you like it. Thank you for your time, well appreciated!
