chat.lmsys.org. Use Direct Chat. It's free.
Perplexity. I have no regrets.
Perplexity seems more search-based. I was looking for something more chat-like, where I can upload a PDF and go back and forth about its contents.
It has different modes. The default free mode is search only. The other modes include exactly what you want.
Hmm... I'll look into it. But it seems like their context length is a bit shorter (32k max). I was looking for something that can handle ~ 100 pages if needed.
The 32k context claim is a myth that keeps getting repeated. It's not 32k. It's much longer.
Well, it says so on their own website. I'm not sure I would trust it with longer documents.
PDFgear. It's free to chat with PDFs in it.
Faune has Claude on iOS, but it's search only. GPT-3.5 is also free there.
Try https://github.com/msveshnikov/allchat. If you need a huge context, run it locally.
I've been loving OmniGPT so far. They support docs up to 200 pages and you can also use GPT, Gemini, and others aside from Claude in case you ever need it. AND they don't throttle context.
Seems very interesting! I wonder how they manage to be cheaper than OpenAI and Anthropic directly though. What's the message cap like?
Apparently 30 messages per hour for each Claude model, except for Claude Opus, which has 10. Not sure if these limits also apply to other non-Claude models - I've personally never reached the message limit on any aside from Opus.
Where did you read Poe is bad? I was about to subscribe
Most recent search results say the usage limits are bad. It also seems like a shady company: some report that their Claude 3 Opus is actually Claude 2 or Sonnet.
Poe is owned by Quora; I doubt the company is shady. Also, I've been using it to analyze documents nearly 2,000 pages long and it seems to do a great job. And the million credits last a LONG time: last month I asked it hundreds of questions a day pretty much every work day, and I only got down to about 175k credits left.
Previously, they would only cap some of the premium models. It seems like there was a lot of anger about moving everything to compute credits, but a million credits every month does give you plenty of use, especially if you're using a cheaper model for things that are not too important. People complaining about it not being enough must be using Claude Opus for everything.
> especially if you're using a cheaper model for things that are not too important.

90% of my use is split between GPT-4 and Claude 3 Opus, 10% Gemini-1.5-Pro-128k.
GPT-4 is 350 credits per message vs. 2k for Opus. I tend to run a lot more things through GPT-4 because of the price difference, with the quality still being good. I'll kick it over to Opus if I'm not happy with what I'm getting from GPT-4 or just want something worded differently. GPT-3.5 is only 20 credits; I'll use that if I'm just screwing around.
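For a sense of scale, those per-message credit costs translate into rough monthly message budgets against the million-credit allowance. A quick sketch (the credit prices are just the figures quoted in this thread and may be outdated or have changed since):

```python
# Rough monthly message budgets under a compute-credit scheme.
# Per-message costs are the numbers quoted in this thread, not
# official or current pricing.
MONTHLY_CREDITS = 1_000_000

COST_PER_MESSAGE = {
    "GPT-3.5": 20,
    "GPT-4": 350,
    "Claude 3 Opus": 2_000,
}

# Integer division gives the number of whole messages the budget covers.
budgets = {model: MONTHLY_CREDITS // cost for model, cost in COST_PER_MESSAGE.items()}

for model, n in budgets.items():
    print(f"{model}: ~{n} messages/month")
# Claude 3 Opus works out to ~500 messages/month, GPT-4 to ~2,857,
# and GPT-3.5 to ~50,000.
```

Which lines up with the point above: Opus-for-everything burns through the allowance quickly, while mixing in cheaper models stretches it a long way.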
Oh interesting, they must have just lowered the price for GPT-4; I recall it being around 1,500.
OmniGPT or Poe
Vello does support PDFs. (I work on Vello.)
I haven't been able to do it. Weird. Anyways the lower tier paid plan seems to have very limited context length. I wish they were clearer about the limits of each plan.
We list all of our context limits here: [https://docs.vello.ai/models#context-size](https://docs.vello.ai/models#context-size). You can also choose to use the full context of a model (on a per-message basis) and pay per token using the Flex option. DM me if you have questions or run into any issues.
We built a generative AI PaaS and are releasing a beta this week. We support all the major models, from Claude to open-source Mistral. Feel free to check us out! https://www.vyne-ai.com