

Just ask if you can shake her hand or something


I started learning when I was 9. I think to some extent it was easier back then in the ’80s because computers were relatively simple machines. On the other hand I also had to learn English at the same time to be able to read manuals and programming books etc. So I think it must be possible, because even if I saw the word “syntax” I doubt I had a full grasp of what it meant.


Anyone who saw him as a beacon of hope for anything is a lost cause anyway…


At the end of the day this is just another right wing conservative politician in the same right wing conservative party that’s been ruling Japan for almost the entire time since 1955.


Tbf the company doesn’t seem to spell out jialichuang or printed circuit board on their web site either, so maybe the author didn’t know.


On Mac:
If you want an icon you can double click on your desktop, you can put your command in a file with the extension “.command” and mark it as executable. Double clicking it will run the contents as a shell script in Terminal.
If you want something that can be put into the Dock, use the Script Editor application that comes with macOS to create a new AppleScript script. Type do shell script "<firefox command here>", then choose Export from the File menu. For the file format, pick Application instead of Script and check Run Only. This will give you an application you can put in the Dock.
If you want to use Shortcuts, you can use the Run Shell Script action in Shortcuts too.
Finally, if you want something that opens multiple firefoxes at once, chain multiple firefox invocations together, each followed by an ampersand. There is an option you have to use (--new-instance, I think?) to make Firefox actually start a completely new instance.
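A minimal sketch of that last “.command” variant, assuming Firefox is in the default /Applications location (the file name is made up and the flags are from memory, so double check them):

    #!/bin/sh
    # open-firefoxes.command: make it executable with `chmod +x`, then double click it.
    # Each invocation goes to the background; --new-instance stops Firefox from just
    # handing the request to an already running copy. For truly independent instances
    # you may also need separate profiles (-P <profilename>).
    /Applications/Firefox.app/Contents/MacOS/firefox --new-instance &
    /Applications/Firefox.app/Contents/MacOS/firefox --new-instance &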


That’s funny because I grew up with math teachers constantly telling us that we shouldn’t trust them.
Normal calculators that don’t have arbitrary precision have all the same problems you get when you use floating point types in a programming language. E.g. 0.1+0.2==0.3 evaluates to false in many languages. Or how adding a very small number to a very large number can just return the large number unchanged.
If you’ve only used CAS calculators or similar you might not have seen these issues, since those often do arbitrary precision arithmetic, but the vast majority of calculators are not like that. They might have more precision than a 32 bit float, though.
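To make those two examples concrete, here they are in Python; any language using IEEE 754 doubles behaves the same way, and fixed-precision calculators make the same kind of rounding decisions:

    print(0.1 + 0.2 == 0.3)   # False
    print(0.1 + 0.2)          # 0.30000000000000004
    # Adding a small number to a large one: 1 is smaller than the gap between
    # representable doubles around 1e16, so it simply disappears.
    print(1e16 + 1 == 1e16)   # True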


I mean, most calculators are wrong quite often


What bothers me the most is the amount of tech debt it adds by using outdated approaches.
For example, recently I used AI to create some python scripts that use polars and altair to parse some data and draw charts. It kept insisting on bringing in pandas so it could convert the polars dataframes to pandas dataframes just for passing them to altair. When I told it that altair can use polars dataframes directly, that helped, but two or three prompts later it would try to solve problems by adding the conversion again.
This makes sense too, because the training material, on average, is probably older than the change that enabled altair to use polars dataframes directly. And a lot of code out there just only uses pandas in the first place.
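To illustrate, this is roughly the detour it kept generating versus the direct route. The data and column names are made up, and it assumes a recent Altair 5.x, which accepts polars dataframes directly:

    import altair as alt
    import polars as pl

    df = pl.DataFrame({"step": [1, 2, 3], "value": [4.0, 1.5, 2.2]})  # made-up data

    # The detour the model kept suggesting: convert to pandas first.
    # chart = alt.Chart(df.to_pandas()).mark_line().encode(x="step", y="value")

    # What recent Altair versions support directly, with no pandas dependency.
    chart = alt.Chart(df).mark_line().encode(x="step", y="value")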
The result is that in all these cases, someone who doesn’t know this would probably be impressed that the scripts worked, and just not notice the extra tech debt from that unnecessary dependency on pandas.
It sounds like it’s not a big deal, but these things add up and eventually, our AI enhanced code bases will be full of additional dependencies, deprecated APIs, unnecessarily verbose or complicated code, etc.
I feel like this is one aspect that gets overlooked a bit when we talk about productivity gains. We don’t necessarily immediately realize how much of that extra LoC/time goes into outdated code and old fashioned verbosity. But it will eventually come back to bite us.


I have to do a bunch of relatively insurmountable steps to do what should’ve taken half a minute. Like screenshot the profile and scrape the text with iOS Photos text recognition.
The iOS workaround isn’t quite as insurmountable, since you don’t have to go through the Photos app at all. You can enter text selection mode directly from the screenshot, without even saving it or leaving the app you’re in. And since iOS will look up any word you can select in the system dictionary and also translate any text you can select, you can do these things right there too.
That said I did once make a shortcut that lets me triple tap the back of my phone to pop up a text version of everything on screen that the iOS OCR detects. Not sure what I did that for though, I don’t really use it.


Well it’s not improving my productivity, and it mostly slows me down, but it’s kind of entertaining to watch sometimes. I just can’t waste time trying to make it do anything complicated because that never goes well.
Tbh I’m mostly trying to use the AI tools my employer allows because I don’t actually need to believe that they’re helping. It’s good enough if the management thinks I’m more productive. They don’t understand what I’m doing anyway, but if this gives them a warm fuzzy feeling because they think they’re getting more out of my salary, why not play along a little.


What gets me is that even the traditional business models for LLMs are not great. Like translation, grammar checking, etc. Those existed before the boom really started. DeepL has been around for almost a decade and their services are working reasonably well and they’re still not profitable.


As someone who sometimes makes demos of our own AI products at work for internal use, you have no idea how much time I spend on finding demo cases where LLM output isn’t immediately recognizable as bad or wrong…
To be fair it’s pretty much only the LLM features that are like this. We have some more traditional AI features that work pretty well. I think they just tacked an LLM on because that’s what’s popular right now.


Well, the essential innovation in drivetrain technology was that electric cars are now actually usable.


I played it on Steam Deck and it was fine. And the Switch 2 is more powerful than that, although it also has a much higher display resolution.


I tried it and it’s way off for me because it gives too much weight to submitted posts. I don’t have very many submissions, so even when I selected recent only, it focused on one guide post I wrote for a game many years ago and made the profile 80% about that. But I guess that’s a problem at some point before the LLM is involved. There are some other similarly non-LLM problems too, like the most used terms section listing almost nothing but subreddit names.
When I limited it to recent comments only it did a better job. It even listed “Humanity’s general incompetence” as the fifth of my “top 3” topics.


Sometimes mandatory web proxies still allow direct connections to port 443 so as to not break https, which in turn means that as long as your connection is to port 443, the proxy will pass it through without interfering.
I used to run sshd on port 443 for this reason back when I regularly had to work from client networks.
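For reference, the whole setup is just an extra Port line on the server and pointing the client at 443; the hostname here is a placeholder:

    # /etc/ssh/sshd_config on the server (sshd happily listens on several ports)
    Port 22
    Port 443

    # From inside the proxied network:
    ssh -p 443 user@myserver.example.com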


Honestly, I think that’s too simplistic. There’s more to it than that.


So this is not entirely arbitrary, and probably part of it is also that they’re not just looking at the progress but also at systemic issues.
For example we know that larger models with more training material are more powerful. That’s probably the biggest contributing factor to the insane pace at which they’ve developed. But we’re also at a point where AI companies are saying they are running out of data. The models we have now are already trained on basically the entire open internet and a lot of non-public data too. Therefore we can’t expect their capabilities to scale with more data unless we find ways to get humans to generate more data. At the same time, the quality of data on the open internet is decreasing because more and more of it is generated by AI.
On the other hand, making them larger also has physical requirements, most of all power. We are already at a point where AI companies are buying nuclear power plants for their data centers. So scaling in this way is close to the limit too. Building new nuclear power plants takes ages.
Another, separate issue is that LLMs can’t learn. They don’t have to be able to learn to be useful, obviously we can use the current ones just fine, at least for some tasks. But nonetheless this is something that limits the progress that’s possible for them.
And then there is the entire AI bubble thing. The economic side of things, where we have an entire circular economy based on the idea that companies like OpenAI can spend billions on data centers. But they are losing money. Pretty much none of the AI companies are profitable other than the ones that only provide the infrastructure. Right now investors are scared enough of missing out on AGI to keep investing, but if they stopped, it would be over.
And all this is super fragile. The current big players are all using the same approach. If one company takes that next step and finds a better approach than transformer LLMs, the others are toast. Or if some Chinese company makes a breakthrough with energy usage again. Or if there is a hardware breakthrough and the incentive to pay for hosted LLMs goes away. Basically even progress can pop the bubble, because if we can all run AI that does a good enough job at home, then the AI companies will never hit their revenue targets. And then the investment stops, and companies that bleed billions every quarter without investors backing them can die very quickly.
Personally I don’t think they will stop getting better right now. And even if they do stop, I’m not convinced we understand them well enough yet to have run out of ways to improve how we use them. But when people say that this is the peak, they’re looking at the bigger picture. They say that LLMs can’t get closer to human intelligence because fundamentally we don’t have a way to make them learn, that the development model is not sustainable, and other reasons like that.