In my experience you can use a LLM to point out typos or grammar errors, but not to actually edit or rephrase your work.
These will still fall prey to the reason that LLM summaries are bad.
So you didn’t try out this specific LLM-based tool, but you extrapolate your experience from generic LLMs to judge it? To me that sounds like a hasty generalization.
I just genuinely want to know whether this specific tool might be more useful at specific applications than generic LLMs, yet here on the Lemmyverse a discussion like that is impossible because AI BAD. It’s a sad and frustrating state of affairs.
Finetuning, self-hosting, and whatever decentralised network they’ve got going on there aren’t going to change the core of the technology. Oh, and since it’s a tiny local model (about 1/100th the size of cloud models), it’s going to perform poorly compared to SOTA models anyway.
You can use it with non-local models.
What you see is the natural conclusion when one understands what LLMs can do at a core level, without attributing any magic to them.
The tool is not just one LLM, though. It uses multiple LLMs and multiple other non-LLM components.
Your argument is akin to saying: you can’t sit and ride on a wheel, so a wheel can never be used for personal transport. And thus the natural conclusion, once you understand what a wheel can do, is that you can’t sit and ride in a car either, so a car is also useless for personal transport.
The only person who can answer whether a tool will be useful to you is you. I understand that you tried and couldn’t use it. Was it useful to you then? Seems like no.
Broad generalizations of “X is good at Y” can rarely be measured accurately with a useful set of metrics, are rarely studied with sufficiently large sample sizes, and often disregard the edge cases where someone might find the tool useful or not useful despite the opposite being found generally true in the study.
And no, I haven’t tried it. It wouldn’t be good at what I need it to do: think for me.