Stories Worth Sharing: 5-Day Gen AI Intensive Course with Google
For our Kaggle Capstone, we challenged ourselves to build something real: a chatbot that could answer FAQs for our own company using GenAI tools. The goal? Create a working prototype, learn as much as possible, and explore how powerful and accessible today’s AI really is.

Why This Project?
Every business deals with repetitive questions. A static FAQ page helps, but it’s not always user-friendly. We wanted something better — a conversational assistant that could understand and respond naturally.
The Tech Behind the Bot
Our project aimed to create exactly that. Here’s a breakdown of what the system does:
- Knowledge Base: A list of 40 company-specific Q&A pairs.
- Embeddings: We used Google’s embedding-001 model to convert these into vector representations, the backbone for finding meaning in text.
- User Input: A chat interface (built with ipywidgets) lets users ask questions.
- Semantic Search: User queries are embedded and compared to the FAQ list using cosine similarity to find the closest match.
- RAG (Retrieval-Augmented Generation): If a match is strong enough, Gemini 1.5 Flash generates a natural-sounding response based on the relevant FAQ.
- Fallback: No good match? The bot politely admits it doesn’t have the info and suggests speaking to a human.
All of this runs inside a Kaggle Notebook, using the google.generativeai library, with help from ChatGPT for brainstorming and code cleanup.
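The semantic-search step at the heart of this pipeline can be sketched in a few lines. This is a minimal, self-contained illustration: the toy vectors below stand in for real embeddings, which in the notebook come from Google’s embedding-001 model via the google.generativeai library.

```python
import numpy as np

# Toy stand-ins: in the notebook, these vectors come from the
# embedding-001 model; here they are hand-made for illustration.
faq_embeddings = np.array([
    [0.9, 0.1, 0.0],   # e.g. "What are your opening hours?"
    [0.1, 0.9, 0.0],   # e.g. "How do I reset my password?"
    [0.0, 0.1, 0.9],   # e.g. "Where are you located?"
])

def cosine_similarity(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_best_match(query_embedding, faq_embeddings):
    """Return (index, score) of the FAQ entry closest to the query."""
    scores = [cosine_similarity(query_embedding, e) for e in faq_embeddings]
    best = int(np.argmax(scores))
    return best, scores[best]

query = np.array([0.85, 0.15, 0.05])  # pretend-embedded user question
idx, score = find_best_match(query, faq_embeddings)
print(idx, round(score, 3))
```

With real embeddings the idea is identical: embed the query once, score it against all 40 stored FAQ vectors, and keep the highest-scoring match.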
The Code (A Glimpse)
The entire project lives within a Kaggle Notebook.
The Python code uses the google.generativeai library to interact with the Gemini API for both embedding creation and answer generation. Key parts include:
- A cosine_similarity function (using numpy) to compare embeddings.
- A core find_best_match function to perform the vector search.
- Separate LLM prompting functions: one for generating answers from context (RAG) and another for the "low confidence" fallback response.
- The main smart_faq_bot_gemini function orchestrating the process: embedding the query, finding the match, deciding whether to use RAG or the fallback based on the similarity score.
- A simple interactive UI using ipywidgets for demonstration within the notebook. This is mainly for demo purposes and has room for improvement.
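The decision between the RAG path and the fallback can be sketched as below. The names `answer_query`, `SIMILARITY_THRESHOLD`, and the stub `fake_embed` are illustrative, not taken from the notebook; the real smart_faq_bot_gemini function follows the same shape but calls Gemini 1.5 Flash to phrase the final answer.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.75  # assumed cut-off; would be tuned on real queries

FAQS = [
    ("What are your opening hours?", "We are open 9-17, Mon-Fri."),
    ("How do I reset my password?", "Use the 'Forgot password' link."),
]

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer_query(query, embed_fn, faq_embeddings):
    """Embed the query, find the closest FAQ, then either answer from
    that FAQ (the RAG path) or fall back politely."""
    q = embed_fn(query)
    scores = [cosine_similarity(q, e) for e in faq_embeddings]
    best = int(np.argmax(scores))
    if scores[best] >= SIMILARITY_THRESHOLD:
        # RAG path: in the notebook the matched Q&A pair is handed to
        # Gemini 1.5 Flash for a natural-sounding answer; returned raw here.
        return FAQS[best][1]
    return "Sorry, I don't have that information. Please contact a human colleague."

# Stub embedder for demonstration; the real bot uses embedding-001.
def fake_embed(text):
    return np.array([1.0, 0.0]) if "hours" in text else np.array([0.3, 0.3])

faq_vecs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(answer_query("What are your hours?", fake_embed, faq_vecs))
print(answer_query("Do you sell spaceships?", fake_embed, faq_vecs))
```

The threshold is the single most important knob here: too low and the bot confidently answers off-topic questions, too high and it falls back on questions it could have handled.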
Honest Reflections: Successes and Shortcomings
Now, for some real talk:
Can the code be improved? Absolutely. It’s functional but could be more robust, modular, and optimized. Error handling is present but could be more sophisticated. Using a proper vector store would be better than a simple list for larger datasets.
Is the model usage economically wise? Probably not in its current state. Each query involves embedding calls and a generation call. Optimization (e.g., caching, potentially cheaper models, prompt tuning) would be crucial for real-world deployment.
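One cheap optimization along these lines: cache query embeddings so that repeated questions do not trigger new embedding calls. This is a sketch; `embed_query` is an illustrative wrapper, not a function from the notebook, and the returned tuple is a stand-in for a real API response.

```python
from functools import lru_cache

call_count = 0  # counts how often the (paid) embedding API would be hit

@lru_cache(maxsize=1024)
def embed_query(text: str):
    """Cache embeddings per exact query string. In the notebook this would
    wrap the google.generativeai embedding call for embedding-001."""
    global call_count
    call_count += 1
    # Stand-in for the real embedding vector:
    return tuple(float(ord(c)) for c in text[:3])

embed_query("opening hours")
embed_query("opening hours")   # served from the cache, no second call
print(call_count)              # 1
```

Exact-string caching only helps with identically worded repeats, so for a real deployment it would likely be combined with prompt tuning and cheaper models, as noted above.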
Does it solve the real-world problem? It demonstrates a proof-of-concept and works for the defined 40 FAQs. However, for Peak Pioneers’ actual use, it would need significant refinement, more comprehensive data, better evaluation, and integration capabilities. It’s a starting point, not a finished product.
Does this project show GenAI empowers ordinary people?
YES. 100% YES.
This is the most crucial takeaway for me. Without AI assistance (especially Gemini for coding and ChatGPT for ideas), I, as someone who isn't a deep AI/ML developer, could not easily have built something like this. It democratizes development in a way I haven't experienced before.
Open Questions?
Let us know if you would like any additional information about the project or the concept. We are happy to share more details.
Contact us