There is a cool self-hosted version of Perplexity out there now, called Perplexica. It can be configured to use Ollama (for local inference) and your own self-hosted SearXNG instance to do the actual search and collation.
I have been using it for a week and it really works.
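For anyone curious what that pipeline actually does, here's a rough Python sketch of the search-then-collate pattern Perplexica automates. To be clear, this isn't Perplexica's code; it assumes a SearXNG instance on localhost:8080 with its JSON API enabled (add "json" under search.formats in settings.yml), Ollama on its default port 11434, and whatever model name you've already pulled.

    import requests

    SEARXNG_URL = "http://localhost:8080"   # assumption: wherever your SearXNG listens
    OLLAMA_URL = "http://localhost:11434"   # Ollama's default API port
    MODEL = "llama3"                        # assumption: any model you've pulled locally

    def search(query, max_results=5):
        # Hit SearXNG's JSON API; "json" must be enabled in settings.yml
        resp = requests.get(f"{SEARXNG_URL}/search",
                            params={"q": query, "format": "json"}, timeout=10)
        resp.raise_for_status()
        return resp.json().get("results", [])[:max_results]

    def collate(query, results):
        # Build a numbered source list and ask the local model to answer from it
        context = "\n\n".join(
            f"[{i+1}] {r.get('title', '')}\n{r.get('content', '')}\n{r.get('url', '')}"
            for i, r in enumerate(results)
        )
        prompt = ("Answer the question using only the sources below, citing them "
                  f"by number.\n\nQuestion: {query}\n\nSources:\n{context}")
        resp = requests.post(f"{OLLAMA_URL}/api/generate",
                             json={"model": MODEL, "prompt": prompt, "stream": False},
                             timeout=120)
        resp.raise_for_status()
        return resp.json()["response"]

    question = "What is SearXNG?"
    print(collate(question, search(question)))

Perplexica layers reranking, source citation UI, and focus modes on top of this, but the core loop is the same: fetch results from your own metasearch engine, then have a local model synthesize them.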
I’m sorry, but the problems with modern-day LLMs and GenAI run far deeper than “who hosts it.”
If you say so. I’m just trying to be helpful instead of offering scare quotes.