

But bara roligt is Swedish


They could do what Apple did when they replaced the classic Mac OS with a UNIX-based system: ship an emulator for a while (the Classic environment) that was really well integrated. They also had a backwards-compatible API (Carbon) that made porting apps a bit easier (now removed; it died along with 32-bit support).
But in the Windows world, third-party drivers are much more important, so in that regard it would be more difficult, especially if they’re not fully behind it. As soon as they waver and leave some way to keep using traditional Windows, the result will be the same as when they tried to slim down the Windows API on ARM: nobody moved away from the removed APIs because those APIs still worked on x86, which significantly slowed adoption of Windows on ARM.


It depends on the task. As an extreme example, I can get AI to create a complete application in a language I don’t know. There’s no way that’s not more productive than me first learning the language to a point where I can make apps in it. Just have to pick something simple enough for the AI.
Of course the opposite extreme also exists. I’ve found that when I demand something impossible, AI will often just try to implement it anyway. It can easily get into an endless cycle where it keeps optimistically declaring that it identified the issue and fixed it with a small change, over and over again. This includes cases where there’s a bug in the underlying OS or similar. You can waste a huge amount of time going down an entirely wrong path if you don’t realize that an idea doesn’t work.
In my real work, neither of these extremes really happens, so the actual impact is much smaller. A lot of my work is not coding in the first place. And I’ve been writing code since I was a little kid, for almost 40 years now, so even the fast scaffolding I can do with AI is not that exciting; I can do that pretty quickly without AI too. When AI coding tools appeared, my bosses started asking if I was fast because I was using one. No, I’m fast because some people ask for a new demo every week. And that kind of speed causes the same problems later, too.
But I also think that we all still need to learn how to use AI properly. This applies to all tools, but I think it’s more difficult than with other tools. If I try to use a hammer on something other than a nail, it will not enthusiastically tell me it can do it with just one more small change. AI tools absolutely will, though, and it’s easy to just let them try because it only takes a few seconds to see what they come up with. But that’s a trap that leads to those productivity-wasting spirals, especially if the result somehow still appears to work at first, so we end up fixing it half a year later instead of right away.
At my work there are some other things that I feel limit the productivity potential of AI tools. First of all, we’re only allowed to use a very limited number of tools, some of them made in-house. Then we’re not really allowed to integrate them into our workflows beyond the part where we write code. For example, I could trivially write an MCP server that talks to our (custom, in-house) CI system and would actually increase my productivity, because I could save a few seconds, very often, by telling an AI to find builds for me during integration or QA work (something like the sketch below). But it’s not allowed. We’re all being pushed to use AI, but the company makes it really difficult at the same time.
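Just to illustrate the kind of thing I mean, here is a minimal hypothetical sketch of such a tool server. It assumes the official MCP Python SDK and httpx; the server name, CI endpoint and response fields are all made up, since the real CI system is proprietary:

    # Hypothetical MCP server exposing one "find builds" tool for a made-up CI HTTP API.
    import httpx
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("ci-builds")

    CI_BASE_URL = "https://ci.example.internal/api"  # assumed endpoint, not a real system

    @mcp.tool()
    def find_builds(branch: str, limit: int = 5) -> list[dict]:
        """Return the most recent successful builds for a branch."""
        resp = httpx.get(
            f"{CI_BASE_URL}/builds",
            params={"branch": branch, "status": "success", "limit": limit},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["builds"]

    if __name__ == "__main__":
        mcp.run()  # speaks MCP over stdio so a coding assistant can call the tool

Hook something like that up to an assistant and “find me the latest green build on that branch” becomes a one-line request instead of clicking through the CI web UI.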
So when I play around with AI in my spare time, I do actually feel like I’m getting a huge boost. Not just because I can use a Claude model instead of the ones I can use at work, but also basic things like being able to turn on AI in Xcode at all when working on software for Apple platforms. On my work MacBook I can’t turn on any Apple AI features at all, so even tab completion is worse. In other words, the realities of working on serious projects at a serious company with serious security policies can also kill any potential productivity boost from AI. They basically expect us to be productive with only the features the non-developer CEO likes, who also doesn’t have to follow any of our development processes…


Greenland can’t really shoot anything down because they have no military of their own.


Mett did mean meat at some point in time, before it came to refer to minced meat specifically. I think Mettwurst isn’t old enough for the “meat” meaning, though, and the author also missed a chance here by not going for the even older meaning of just “food” (which applies to English “meat” too) and claiming that we differentiate between edible sausages like Mettwurst and inedible ones like Kackwurst.


I’ve been programming as a hobby since I was 9. It’s also my job so I rarely finish the hobby projects anymore, but still.
On my first computer (Apple II) I was able to make a complete game as a kid that I felt was comparable to some of the commercial ones we had.
In the 1990s I was just a teenager busy with school, but I could make software that was competitive with paid products. Published some things via magazines.
In the late ’90s I made websites with a few friends from school. Made a lot of money in teenager terms. Huge head start for university.
In the 2000s for the first time I felt that I couldn’t get anywhere close to commercial games anymore. I’m good at programming but pretty much only at that. My art skills are still on the same level as when I was a kid. Last time I used my own hand drawn art professionally was in 2007.
Games continued becoming more and more complex. They now often have incredibly detailed 3D worlds or at least an insane amount of pixel art. Big games have huge custom sound tracks. I can’t do any of that. My graphics tablets and my piano are collecting dust.
In 2025, AI would theoretically give me options again. It can cover some of my weak areas. But people hate it, so there’s no point. Indie developers now apparently need large teams to count as indie (according to this award); for a single person it’s difficult, especially with limited time.
It would be nice if the ethical issues could be fixed, though. There are image models trained on proprietary data only, and music models will get there too because of some recent legal settlements, but it’s not enough yet.


That’s what I do. I have an LG OLED from 6-7 years ago and I have no idea what its UI looks like. But to be fair, this is only because I don’t watch traditional TV at all. It’s just an Apple TV for most streaming services and a Mac Mini for some other things like ad-blocked YouTube (with one of those cheap gyro mouse and keyboard Bluetooth remotes). I guess I wouldn’t have to use the satellite TV though; I could get IPTV via my fibre ISP too, but that would cost money.
The Mac isn’t good at supporting CEC beyond switching the source when it wakes up, but even that’s not an issue because I can still use the Apple TV remote to control volume even when something else is the active source. Speaking of volume, my setup also includes a Samsung sound bar, which also has a remote that I never actually have to use. Everything mostly just works.


The EU forced Apple to allow other rendering engines, but implementing one costs money vs just using WebKit for free, so nobody does it.


very few who even touch AI for anything aside from docs or stats
Not even translation? That’s probably the biggest browser AI feature.


The real ugly Optimus is a bunch of StreamDecks next to each other


Since sugar is bad for you, I used organic maple syrup instead and it works just as well


According to the expert opinion, a schnitzel with breading would therefore not be allowed to be called a schnitzel.
The full force of competence was at work there once again.


A Chinese university trained GLM
A startup spun out of a university (z.ai). Their business model is similar to everybody else’s: they host their models and sell access while trying to undercut each other. And like the others, they raised billions in funding from investors to be able to do this.


But they are also just tuning and packaging a publicly available model, not creating their own.
So they can be profitable because the cost of creating that model isn’t factored in, and if people stop throwing money at LLMs and stop releasing models for free, there goes their business model. So this is not really sustainable either.


This kind of explanation is coming up more often now, but I find it a bit quick on the trigger. Not wrong, of course, but too fixated on a single aspect.
First, the Carnot efficiency, as usual in physics, applies to heavily idealized systems that do nothing else at all, i.e. no interaction with the environment and so on. Under those conditions 80%-90% would indeed be conceivable, but in reality a cow is not a massless point in a vacuum, and real engines accordingly sit at around 40%. So they are not limited by the second law of thermodynamics; they run into entirely different limits well before that.
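To put a rough number on the ideal case (the temperatures are illustrative assumptions only, roughly a peak combustion temperature of 2300 K against a 300 K environment):

    \eta_\text{Carnot} = 1 - \frac{T_\text{cold}}{T_\text{hot}} \approx 1 - \frac{300\,\mathrm{K}}{2300\,\mathrm{K}} \approx 0.87

So the second law by itself would still allow something in the 80%-90% range; real engines fall far short of that for practical reasons (friction, heat losses, incomplete combustion, part-load operation) long before the Carnot limit matters.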
Second, I don’t know why “efficient” makes us think of the engine’s thermal efficiency first in the first place. Sure, you can make cars more efficient, cheaper and more environmentally friendly by raising that efficiency and thereby doing the same work with less fuel. In reality, though, we made cars more efficient to a large extent through better aerodynamics, which has nothing to do with the efficiency of the engine, or with the type of engine at all.
More energy-dense fuels would also be conceivable, or lighter materials, and so on. In reality that simply didn’t materialize in time before electric cars overtook them, but it certainly didn’t hinge on the engine’s efficiency alone.
If the engine’s efficiency decided everything, we could give up on electric cars now: their efficiency is already high, more than 100% is impossible, so there wouldn’t be much left to gain anyway. That is obviously nonsense, because in reality efficiency is only one of many factors and not even really the most important one.
Where efficiency really is a knockout argument, on the other hand, is with e-fuels. There we have a direct comparison: how much work an electric car gets out of a given amount of cleanly generated electricity versus a combustion car running on fuel produced from that same amount of electricity. Because not only is the engine’s efficiency poor, the efficiency of producing the fuel is poor as well, the comparison naturally turns out catastrophically against e-fuels.
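As a back-of-the-envelope sketch (the efficiencies are rough ballpark assumptions for illustration, not measured figures): take the same amount of green electricity E and put it either into a battery EV or into e-fuel synthesis plus a combustion engine:

    W_\text{EV} \approx 0.75\,E \qquad \text{(charging, battery and motor losses combined)}
    W_\text{e-fuel} \approx 0.5 \times 0.35\,E \approx 0.18\,E \qquad \text{(power-to-fuel conversion, then the engine)}

With those assumed numbers the EV gets roughly four times as much useful work out of the same electricity, which is why this particular comparison is so lopsided.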


I had some video glasses ages ago that could do that too, like 15 years ago. I can’t recall a single game that worked without problems. UI was the biggest issue: UI elements were often placed at nonsensical 3D positions, and while you wouldn’t notice this on a normal screen, the glasses tried to render them in the center of my brain…
And before that, in the late ’90s, I had an nVidia graphics card that came with shutter glasses. The driver could do stereo for “everything” too; however, for me “everything” turned out to be the one game where I could get it to work.


It’s popular because it appears in the opening of the anime Slam Dunk, which is set in the Shonan area.
No, I haven’t watched the anime; I found out about this a couple of years back when I was confused about why there were so many people there.


The design was selected after the city government invited public submissions to promote Kamakura.
…
“We decided on the suspension because some residents thought the design helped attract visitors and found this unpleasant,” a member of the city’s tax division said.
I want to post the surprised Pikachu meme, but I actually have serious doubts that this is what’s attracting the tourists taking pictures at the crossing…


We need to start posting this everywhere else too.
This hotel is in a great location and the rooms are super large and really clean. And the best part is, if you sudo rm -rf / you can get a free drink at the bar. Five stars.
The video conferencing platform my work uses works well because it’s a large, well-known platform and they punched holes for it into the firewall and the VPN. That’s not really something a service provider can just replicate.