• 0 Posts
  • 82 Comments
Joined 3 years ago
Cake day: June 14th, 2023






  • This kind of explanation comes up more often now, but I think it’s a bit quick on the draw. Not wrong, of course, but too locked in on a single aspect.

    First, as is usual in physics, the Carnot efficiency applies to heavily idealized systems that do nothing else whatsoever. No interaction with the environment, and so on. Under those conditions 80%-90% would indeed be conceivable, but in reality a cow is not a massless point in a vacuum, and real engines accordingly manage about 40%. So they are not limited by the second law of thermodynamics first; entirely different things get in the way well before that.
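A rough illustration of that idealized limit (round numbers of my own, not measured values), taking a hot-side temperature near peak combustion and ambient as the cold side:

```latex
\eta_\mathrm{Carnot} = 1 - \frac{T_c}{T_h}
  \approx 1 - \frac{300\,\mathrm{K}}{2300\,\mathrm{K}} \approx 0.87
```

That is where figures like 80%-90% come from; friction, heat losses, incomplete combustion and part-load operation cut real engines down to roughly 40% long before the Carnot limit matters.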

    Second, I don’t know why “efficient” makes us think of the engine’s thermodynamic efficiency first. Sure, you can make cars more efficient, cheaper and more environmentally friendly by raising that efficiency and thereby doing the same work with less fuel. In reality, though, we made cars more efficient to a large degree through better aerodynamics. And that has nothing to do with the engine’s efficiency. Or with the kind of engine as such.

    Fuels with higher energy density would also be conceivable, or lighter materials, and so on. In reality those things simply didn’t happen in time before electric cars pulled ahead, but it certainly didn’t hinge on the engine’s efficiency alone.

    If the engine’s efficiency decided everything, we could give up on electric cars right now. Their efficiency is already high, and more than 100% is impossible, so there wouldn’t be much left to gain anyway. But that is obviously nonsense, because in reality efficiency is just one factor among many, and not even the most important one.

    Where efficiency really is a knockout argument, though, is e-fuels. There we have a direct comparison: how much work an electric car gets out of amount X of cleanly generated electricity, versus a combustion car running on fuel produced with the same amount of electricity. Because not only is the engine’s efficiency poor, the efficiency of producing the fuel is poor as well, the comparison naturally comes out catastrophically against e-fuels.
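To put rough numbers on that chain (illustrative assumptions of mine, not measurements: about 50% for power-to-fuel synthesis, 30% for the combustion engine, 75% for charging plus electric drivetrain):

```latex
\eta_\mathrm{e\text{-}fuel} \approx 0.5 \times 0.3 = 0.15
\qquad\text{vs.}\qquad
\eta_\mathrm{EV} \approx 0.75
```

So per kilowatt-hour of clean electricity, the EV delivers several times the work.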


  • I had some video glasses ages ago that could do that too. Like 15 years ago. I can’t recall a single game without problems. UI was the biggest issue. Often UI elements were at nonsensical 3D positions, and while you wouldn’t notice this on a normal screen, the glasses tried to render them in the center of my brain…

    And before that, in the late ’90s, I had an nVidia graphics card that came with shutter glasses. The driver could do stereo for “everything” too; however, for me “everything” was one game where I could get it to work.






  • I don’t understand how people can look at the insane progress gpt has made in the last 3 years and just arbitrarily decide that this is its maximum capability.

    It’s not entirely arbitrary, though; part of it is that they’re not just looking at the progress, but also at systemic issues.

    For example we know that larger models with more training material are more powerful. That’s probably the biggest contributing factor to the insane pace at which they’ve developed. But we’re also at a point where AI companies are saying they are running out of data. The models we have now are already trained on basically the entire open internet and a lot of non-public data too. Therefore we can’t expect their capabilities to scale with more data unless we find ways to get humans to generate more data. At the same time the quality of data on the open internet decreases because more of it is generated by AI.

    On the other hand, making them larger also has physical requirements, most of all power. We are already at a point where AI companies are buying nuclear power plants for their data centers. So scaling in this way is close to the limit too. Building new nuclear power plants takes ages.

    A separate issue is that LLMs can’t learn. They don’t have to be able to learn to be useful; obviously we can use the current ones just fine, at least for some tasks. But it is nonetheless something that limits the progress that’s possible for them.

    And then there is the entire AI bubble thing: the economic side, where we have an entire circular economy built on the idea that companies like OpenAI can keep spending billions on data centers. But they are losing money. Pretty much none of the AI companies are profitable other than the ones that only provide the infrastructure. Right now investors are scared enough of missing out on AGI to continue investing, but if they stopped, it would be over.

    And all this is super fragile. The current big players are all using the same approach. If one company makes that next step and finds a better approach than transformer LLMs, the others are toast. Or if some Chinese company makes a breakthrough with energy usage again. Or if there is a hardware breakthrough and the incentive to pay for hosted LLMs goes away. Basically, even progress can pop the bubble: if we can all run AI that does a good-enough job at home, then the AI companies will never hit their revenue targets. And then the investment stops, and companies that bleed billions every quarter without investors backing them can die very quickly.

    Personally I don’t think they will stop getting better right now. And even if they do, I’m not convinced we understand them well enough to have already exhausted better ways of using them. But when people say that this is the peak, they’re looking at the bigger picture: they say that LLMs can’t get closer to human intelligence because fundamentally we don’t have a way to make them learn, that the development model is not sustainable, and other reasons like that.








  • On Mac:

    If you want an icon you can double-click on your desktop, you can put your command in a file with the extension “.command” and mark it as executable. Double-clicking it will run the contents as a shell script in Terminal.

    If you want something that can be put into the Dock, use the Script Editor application that comes with macOS to create a new AppleScript. Type do shell script "<firefox command here>", then find Export in the menu. Choose Application as the file format instead of Script and check Run Only. This gives you an application you can put in the Dock.

    If you want to use Shortcuts, you can use the Run Shell Script action in Shortcuts too.

    Finally, if you want something that opens multiple Firefoxes at once, chain multiple firefox invocations together on one line, each followed by an ampersand so they run in parallel. There is an option you have to use (--new-instance, I think?) to make Firefox actually start a completely new instance.
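Putting those pieces together, a minimal sketch of a double-clickable launcher (the Firefox install path and the profile names are assumptions for illustration; adjust them to your setup):

```shell
#!/bin/sh
# Write a double-clickable .command file to the Desktop.
mkdir -p "$HOME/Desktop"
cat > "$HOME/Desktop/two-firefoxes.command" <<'EOF'
#!/bin/sh
# Assumed install location; change this if Firefox lives elsewhere.
FF="/Applications/Firefox.app/Contents/MacOS/firefox"
# The trailing & backgrounds each invocation so both start at once.
"$FF" --new-instance -P profile1 &
"$FF" --new-instance -P profile2 &
EOF
chmod +x "$HOME/Desktop/two-firefoxes.command"
```

Double-clicking the file in Finder then runs it as a shell script in Terminal.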


  • That’s funny because I grew up with math teachers constantly telling us that we shouldn’t trust them.

    Normal calculators without arbitrary precision have all the same problems you get when you use floating-point types in a programming language. E.g. 0.1+0.2==0.3 evaluates to false in many languages. Or how adding a very small number to a very large one can leave the large number unchanged.
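You can see both effects from any shell with awk, which computes in the same 64-bit doubles most languages use:

```shell
awk 'BEGIN {
  # 0.1 and 0.2 have no exact binary representation, so the sum is slightly off
  print ((0.1 + 0.2 == 0.3) ? "equal" : "not equal")   # prints "not equal"
  # near 1e16 adjacent doubles are 2 apart, so adding 1 is simply absorbed
  x = 1e16
  print ((x + 1 == x) ? "absorbed" : "changed")        # prints "absorbed"
}'
```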

    If you’ve only used CAS calculators or similar, you might not have seen these, since those often do arbitrary-precision arithmetic, but the vast majority of calculators are not like that. They might have more precision than a 32-bit float, though.