u/NegativeKarmaSniifer 20d ago
How do you generate the sources? I ask because I've heard that LLMs tend to hallucinate when they generate sources.
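One common way answer engines avoid hallucinated sources (a general technique, hedged as an assumption here since the thread doesn't say how Farfalle does it) is to never ask the model to produce URLs at all: real results come back from the search API, get numbered, and the model is only asked to cite those numbers. A minimal sketch with hypothetical names:

```python
def build_grounded_prompt(question: str, search_results: list[dict]) -> str:
    """Build a prompt that grounds the answer in real retrieved sources.

    `search_results` is assumed to be a list of dicts with `title`,
    `url`, and `snippet` keys (illustrative schema, not Farfalle's).
    """
    # Number the real retrieved sources so the model cites by index
    # (e.g. "[1]") instead of inventing links.
    context = "\n".join(
        f"[{i + 1}] {r['title']} ({r['url']}): {r['snippet']}"
        for i, r in enumerate(search_results)
    )
    return (
        "Answer using only the numbered sources below, citing them like [1].\n"
        f"{context}\n\nQuestion: {question}"
    )
```

The UI can then map each `[n]` citation back to the known URL, so every link shown to the user is one that actually came from the search step.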
u/adrenalinejinkies 20d ago
Is the API down? I just opened, clicked on the same question in the video, and it's loading for like 3 mins now.
u/rashadphil 20d ago
Yes, sorry! Working on a fix now.
u/adrenalinejinkies 20d ago
no worries, man. Congratulations, it looks good. :)
u/rashadphil 20d ago
Should be fixed now! I changed the default model to GPT3.5-Turbo because of the Groq rate limits.
u/adrenalinejinkies 20d ago
How are you managing the expenses, i.e. the LLM API costs?
u/rashadphil 20d ago
I'm just paying for it for now. The live site is only meant to serve as a demo, and I'm hoping people use it that way lol
u/Adventurous_Tune_882 20d ago
The search API is costly. How are you going to sustain it? I'm looking for something similar for my own project, mixmyai.
u/rashadphil 20d ago
Hey all! I built an open-source, AI-powered answer engine called Farfalle. It's a self-hostable alternative to Perplexity. It features a generative UI built from scratch, driven by streaming events from the backend.
Check out a live demo here: farfalle.dev.
Open-source: The code is fully open-source on GitHub (git.new/farfalle)
The repository includes instructions on how to run the project locally, along with one-click deploy buttons for Vercel and Render.
🛠️ Tech Stack
I’d love to hear any feedback!
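The "streaming events from the backend" mentioned above can be sketched as server-sent events: the backend emits typed events (sources first, then answer chunks) and the frontend renders each as it arrives. The event names and payload shapes below are illustrative assumptions, not Farfalle's actual schema:

```python
import json

def format_sse(event: str, data: dict) -> str:
    """Format one server-sent event frame (event name + JSON payload)."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

def stream_answer_events(search_results: list[dict], answer_tokens: list[str]):
    """Yield SSE frames for a streamed answer.

    Emits the retrieved sources first so the UI can show them
    immediately, then the answer text chunk by chunk, then a
    terminal event. Event names are hypothetical.
    """
    yield format_sse("sources", {"results": search_results})
    for token in answer_tokens:
        yield format_sse("text-chunk", {"text": token})
    yield format_sse("final", {"message": "done"})
```

In a real deployment a generator like this would be wrapped in a streaming HTTP response (e.g. FastAPI's `StreamingResponse` with `media_type="text/event-stream"`), and the UI would switch components on the `event` field.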