Not the guy you asked, but I had ChatGPT write up a few paragraphs about how Reddit used to care about its users and why it sucks now, then used Power Delete Suite to overwrite every post and comment with it.
If you want to host it locally, Stirling PDF can be run in Docker, and it uses a library that wraps Tesseract. It has a bunch of other handy PDF operations, too. I keep it around for the two times a year I need to merge, split, or decrypt PDFs.
https://github.com/Frooodle/Stirling-PDF/blob/main/HowToUseOCR.md
It can do it straight from PDF and do multiple files at a time.
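For reference, a minimal Compose sketch. The image name (`frooodle/s-pdf`) and web port 8080 are taken from the linked repo, and the tessdata mount path is from the OCR doc; both may differ by version, so check the README before copying this.

```yaml
services:
  stirling-pdf:
    image: frooodle/s-pdf:latest
    ports:
      - "8080:8080"   # web UI
    volumes:
      # Optional: mount extra Tesseract language data for OCR
      # (path per the HowToUseOCR doc; verify against your version)
      - ./trainingData:/usr/share/tesseract-ocr/4.00/tessdata
```

Then browse to `http://localhost:8080` and the OCR tool is under the PDF operations.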
TL;DR 100 Mbps down and 20 Mbps up, because cable pushed back on 100 symmetrical.
You can do the basic records via a file: /etc/pihole/custom.list is a hosts-formatted file for records, so you don't have to use a GUI.
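The file is plain hosts format: IP, whitespace, hostname, one record per line. A quick sketch (writing to a temp copy here; on a real Pi-hole you'd edit /etc/pihole/custom.list directly, and the hostnames/IPs below are just examples):

```shell
# Hosts-format local DNS records, one per line
cat > /tmp/custom.list <<'EOF'
192.168.1.10 nas.lan
192.168.1.20 printer.lan
EOF

# On the real file, reload Pi-hole's resolver afterwards:
#   pihole restartdns reload
```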
I had DNS issues until I got my AllowedIPs squared away. You could try setting it to 0.0.0.0/0, if it isn't already, to verify that's not the problem.
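In the client's WireGuard config that looks like the fragment below (the endpoint and key are placeholders):

```ini
[Peer]
PublicKey = <server public key>
Endpoint = vpn.example.com:51820
# 0.0.0.0/0 routes (and permits) all IPv4 traffic through the tunnel,
# which rules AllowedIPs out as the cause of DNS problems while testing
AllowedIPs = 0.0.0.0/0
```

Once DNS works, you can narrow AllowedIPs back down to just the subnets you actually need.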
The only reason I think they might release one soon, if at all, is that the Nintendo Switch and the Shield have similar SoCs: the Tegra X1 with a Maxwell GPU. The Switch 2 is close enough to release that people are seeing it, and its SoC is showing up in benchmarks. It's another ARM chip, but with an Ampere GPU this time.
So if Nvidia is already building the SoC for Nintendo, it may be reasonably easy to build an upgrade to the Shield.
The lines before it seem to imply you’ve run it before. If this is a new install I’d try dropping the scheme entirely and starting again.
Are you sure it’s firmware or software limited?
I assumed they just kept the Lightning controller, which, as you said, had USB 2.0 speeds, and then hardwired a USB-C adapter into the phone's circuit board. So it's a hardware limit.
Have you thought about adding a legend?
No other chip production in the state?
Intel has several fabs in Chandler, AZ. They run nodes down to 10 nm there, with 5 nm being their most advanced. So there's definitely a chunk of semiconductor knowledge in the state.
https://en.m.wikipedia.org/wiki/List_of_Intel_manufacturing_sites
This article states several others: https://www.chipsetc.com/semiconductor-companies-in-arizona.html
Seems like semiconductors are kind of a big deal in and around Chandler, which is presumably why TSMC chose there.
I use this guy https://github.com/haugene/docker-transmission-openvpn
Open up the Transmission RPC port and you're golden. It also sets up a proxy for any other services/devices you want to route through the VPN. Supports port forwarding for PIA, too.
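For reference, a minimal Compose sketch. The env var names come from that repo's README; the provider, region, and credentials below are placeholders, and you'd check the repo's supported-provider list for your VPN:

```yaml
services:
  transmission-openvpn:
    image: haugene/transmission-openvpn
    cap_add:
      - NET_ADMIN              # needed to create the VPN tunnel
    environment:
      - OPENVPN_PROVIDER=PIA   # placeholder; see the repo's provider list
      - OPENVPN_USERNAME=user  # placeholder credentials
      - OPENVPN_PASSWORD=pass
      - LOCAL_NETWORK=192.168.0.0/16  # lets your LAN reach the web UI
    ports:
      - "9091:9091"            # Transmission RPC / web UI
    volumes:
      - ./data:/data
```

With 9091 exposed, the web UI and any RPC client (e.g. on your phone) can reach Transmission while all torrent traffic stays inside the tunnel.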
I think on a phone it would make sense for bottom and side, which is the top of a handheld PC. That way you could have the cord sticking out whatever direction was handy for you by turning your phone.
Not in the first article.
And the second just mentions it's a possibility.
Look, I'm as against this as anyone. I think most people on Lemmy agree the big corporations have too much data on us and don't safeguard it appropriately, but we don't need to pretend articles say something they don't.
This did not happen in the case you mentioned.
Literally from the first sentence of the first article…
“obtained her Facebook messages using a search warrant.”
Second also references a search warrant affidavit.
Funnily enough, they still have expandable storage on the A54. It's only on their best models, where they know they can gouge for more storage.
It doesn’t just whine that someone’s already answered the question. Oh it has been asked? Then link to the fucking answer!
My biggest pet peeve is when the first Google result is a Stack Overflow post where someone is bitching at the asker to Google the question.
You’re right, I’m on mobile driving home.
I meant this one: https://github.com/oobabooga/text-generation-webui
And this YouTube video specifically for setup:
I’m a software engineer by trade, but a hobbyist when it comes to LLMs.
https://github.com/AUTOMATIC1111
This is a good place to start. Loads of YouTubers have setup videos. I like this guy: https://youtube.com/@Aitrepreneur. He covers LLMs and image generation too.
Hugging Face is a good place to get the actual LLM.
For the bigger ones you could do it for under $0.50/hr. I run Llama 2 13B at 8-bit on my 3090 no problem, and that card can be rented for $0.20/hr.
Some of the lowest pricing I've seen.
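Back-of-the-envelope on what that works out to, using the $0.20/hr figure above and a hypothetical couple hours of use per day:

```python
# Rough rental-cost math; only the hourly rate comes from the comment above,
# the usage pattern is a made-up casual workload
hourly_rate = 0.20    # $/hr for a rented 3090-class GPU
hours_per_day = 2     # hypothetical daily tinkering time

daily = hourly_rate * hours_per_day
monthly = daily * 30

print(f"${daily:.2f}/day, ${monthly:.2f}/month")
```

At that kind of usage, renting stays far cheaper than buying the card unless you're running it around the clock.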
I’m guessing it’s a cuckold fetish at this point.