• 1 Post
  • 19 Comments
Joined 1 year ago
Cake day: June 15th, 2023


  • This is about the new starter cost.

    When a developer joins a team, they will not be as productive at first, since they have to learn the code, frameworks, libraries, the project’s purpose, the tooling, etc. Often this impacts other members of the team, lowering the entire team’s productivity.

    When you use productivity tracking (e.g. things like capacity planning) you will see the team’s performance drop, and it will take time for it to exceed the previously measured performance. This is the cost of adding a new starter.

    So if it takes 6 weeks for a new starter to increase overall team productivity, then planning someone on a project for 4 weeks is pointless, since the team will have a higher delivery rate without the extra person. This is typically why an organisation loses its ability to migrate staff between projects.

    Code formatting affects the layout of the code, and our brains do all sorts of tricks around pattern recognition, so if your code formatting rules are too different, someone migrating between projects has to spend time looking for code and retraining their brain.

    It’s an additional barrier, and one within an organisation’s power to remove (by enforcing a common code standard).



  • Python is unique in that formatting forms part of the syntax. Every language has linters, but it’s far more common for orgs to tweak the default rules.

    For example, Java has Checkstyle. The default rules (‘sun checks’) give a line length of 80, make tabs 4 spaces, and place everything on a new line.

    Junior devs inevitably want to trash the line length (honestly, on 1080p monitors 120 makes sense).

    There is always a new line/same line discussion (everyone prefers same line, but there is always one die-hard new line person).

    The tab width discussion always has one junior dev complain that “tabs are better”. As someone who started development on Visual Studio 6, where half the team double-spaced and the other half used tabs: those people get a lecture from me on how we can convert tabs to spaces but not the inverse, so it will always be spaces if I am near.

    With Checkstyle you upload the rule file as an artifact into your M2 repository, then pull it down as a dependency when the Checkstyle plugin runs.
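
    A minimal sketch of that wiring, assuming a hypothetical com.example:checkstyle-rules artifact that contains checkstyle.xml at its root (the coordinates and versions are illustrative, not from any real project):

    ```xml
    <!-- pom.xml: the Checkstyle plugin resolves its rule file from the
         classpath of the dependency below, which is deployed to the
         organisation's M2 repository -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
      <version>3.3.1</version>
      <configuration>
        <configLocation>checkstyle.xml</configLocation>
      </configuration>
      <dependencies>
        <dependency>
          <!-- hypothetical shared rules artifact -->
          <groupId>com.example</groupId>
          <artifactId>checkstyle-rules</artifactId>
          <version>1.0.0</version>
        </dependency>
      </dependencies>
    </plugin>
    ```

    Every project then points at the same rule file, and updating the standard becomes a version bump rather than copying files around.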


  • As someone who bought Half-Life 2 when it was released …

    I only remember people being excited about Steam. Web stores weren’t a thing back then and they were the future! (It was the following years of audio and ebook stores locking stuff down and evaporating that taught us to hate it.)

    Game/audio CD DRM hacking the kernel and breaking or massively slowing down your PC was pretty common back then, and Steam’s DRM didn’t do that.

    The HL2 disc installer didn’t require you to install Steam; once installed, it asked you to set up Steam, and there was a sticker under the DVD with the Steam code for you to enter.

    You were then rewarded with a copy of HL2 Deathmatch and Counter-Strike: Source.

    Steam wasn’t always-on DRM; back then ADSL/DSL was relatively new and a lot of people were still stuck on dial-up modems.

    Steam let you sign in and authorize your games for 30 days, at which point you would need to log into Steam again. This was an incredibly helpful feature for young me.


  • stevecrox@kbin.social to Games@lemmy.world · What’s up with Epic Games? · 6 months ago

    Basically, Epic, like every other publisher, has created their own launcher/store.

    They aren’t trying to compete on features, and are instead using profits from their franchises to buy market share (e.g. buying store exclusives).

    The tone and strategy often comes off as aggressive and hostile.

    For example, Valve was concerned Microsoft would leverage their store to kill Steam. Valve has invested a lot in adding Windows interoperability to Linux and ensuring Linux is a good gaming platform. To them this is the hedge against aggressive Microsoft business practices.

    The Epic CEO thinks Windows is the only operating system, actively prevents Linux support, and revoked Linux support from properties they bought.

    As a Linux user, Valve will keep getting my money, and I literally can’t give it to Epic because they don’t want it.


  • I avoid any company that requires a software test before the interview.

    I worked for a company that introduced them after I joined. I collected evidence that none of the company’s top performers would have joined, since we all had multiple offers and having to do the test would have put people off applying. The scores from it didn’t correlate with interview results, so it was being ignored by everyone. It still took 2 years to get rid of it.

    The best place used STAR (Situation, Task, Action, Result) based interviews. The goal was to ask questions until you got two complete STARs.

    I thought these were great because they were more varied and conversational, but there was comparable consistency across interviewers.

    You would inevitably get references to past work, and you would switch to asking a few questions about that. Since it was framed around a situation, you would get more complete technical explanations (e.g. “on that project I wrote an X, and Y was really challenging because of Z”).

    I loved asking “Tell me about something you’re really proud of”. Even a nervous junior would start opening up after that question.

    After an hour-long interview you would end up with enough information to compare the candidate against the company gradings (junior, senior, etc.).

    This was important because it changed the attitude of the interview. It wasn’t a case of whether the candidate would be a good senior dev for project X, but an assessment of the candidate. If they came out as a lead and we had a lead role, we’d offer them that.


  • If you sign up to social media, it will pester you for your email contacts, location and hobbies/interests.

    Building a signup wizard that uses that information to select an instance (sketched below) would seem to be the best approach.

    The contacts would let you know which instance most of your friends are located on (e.g. by looking up email addresses).

    A hobby/interests selection section can map the user to topic-specific instances.

    Lastly, the location would let you choose a country-specific general instance.

    It would help push decentralisation, but instead of providing choice you’re asking questions the user is used to being asked.
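
    A minimal sketch of that selection logic in Python; the instance names and matching rules are invented placeholders:

    ```python
    # Hypothetical signup wizard: pick an instance from contacts,
    # interests and location, in that order of preference.
    from collections import Counter

    TOPIC_INSTANCES = {"gaming": "games.example", "linux": "linux.example"}
    COUNTRY_INSTANCES = {"UK": "uk.example", "DE": "feddit.example"}
    DEFAULT_INSTANCE = "general.example"

    def suggest_instance(contact_instances, interests, country):
        # 1. The instance where most of the user's contacts already are.
        if contact_instances:
            instance, _ = Counter(contact_instances).most_common(1)[0]
            return instance
        # 2. A topic-specific instance matching a declared interest.
        for interest in interests:
            if interest in TOPIC_INSTANCES:
                return TOPIC_INSTANCES[interest]
        # 3. A country-specific general instance, else a fallback.
        return COUNTRY_INSTANCES.get(country, DEFAULT_INSTANCE)

    print(suggest_instance([], ["linux"], "UK"))  # -> linux.example
    ```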


  • Nvidia drivers don’t tend to be as performant under Linux.

    With AMD, instead of using the AMDVLK driver you would use RADV (developed largely by Valve), which performs better.

    Every AMD card under Linux supports OpenCL (the driver is based more on the graphics card architecture) and you can install it very easily. Googling it for Windows found pages of errors and missing support.

    Blender supports OpenCL. I bet the 2x improvement is Blender being able to offload rendering to the AMD graphics card.

    Also, this represents the biggest headache in Linux: lots of gamers insist they can only use Nvidia cards. Nvidia treats Linux as an afterthought at best, or deliberately sabotages things at worst.

    AMD embraced open source, and so Linux land is much nicer on AMD (and to a lesser extent Intel).

    The results here are probably a DXVK quirk: lots of “Nvidia optimised” games have game engines doing weird things which the Nvidia driver compensates for. DXVK has to identify that to produce “good” Vulkan calls.



  • I am actually arguing for a stable ABI.

    The few times I have had to compile out-of-tree drivers for the Linux kernel, it has usually failed because the ABI had changed.

    Each time I have looked into it, I found code churn, e.g. changing an enum to a char (or the other way) or messing with the parameter order.

    If I were emperor of the world, the Linux kernel would be built using conan.io, with device trees pulling down drivers as dependencies (see the sketch below).

    The Linux ABI headers would move out into their own separately managed project, released and managed at its own rate. Subsystem maintainers would have to raise pull requests to change the ABI, and changing a parameter from enum to char because you prefer chars wouldn’t be good enough.

    Each subsystem would be its own “project” with a logical repository structure (e.g. Intel and AMD GPU drivers don’t share code, so why would they be in the same repo?), built against the appropriate ABI version, with each repository released at its own rate.

    Unsupported drivers would then be forked into their own repositories. This simplifies deprecation, since it is external to the supported drivers and doesn’t need to be refactored or maintained. If distributions can build them and want to include a driver, they can.

    Linus’ job would be to maintain the core kernel, device tree and ABI projects, and to provide a bill of materials listing which kernel/ABI/driver version combinations are supported.

    Lastly, since every driver would be a discrete buildable component, it would be far easier for distributions to check whether a driver is compatible with the kernel ABI they are using (e.g. change a dependency version and build) and to provide new drivers with the build.
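
    To make the idea concrete, a hypothetical Conan recipe for one driver might look like this (the package names, versions and structure are invented; nothing like this exists in the kernel today):

    ```python
    # conanfile.py -- a driver as a discrete buildable component
    from conan import ConanFile

    class AmdgpuDriverConan(ConanFile):
        name = "amdgpu-driver"
        version = "2.4.0"
        settings = "os", "arch", "compiler", "build_type"
        # Built against a separately released ABI headers project;
        # the ABI only changes via pull requests to that project.
        requires = "linux-abi-headers/6.1.0"

        def build(self):
            # A distribution checking compatibility would just bump
            # the linux-abi-headers version above and rebuild.
            pass
    ```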

    None of this will ever happen. C/C++ developers loathe dependency management, and people can be strongly attached to monorepos for some reason.


  • The Linux kernel is very old school in how it is run, and a big part of the DevSecOps movement was originally about removing that sort of manual overhead.

    Moving on to something like Gitea (Codeberg) would give you a better diff view and is quicker/easier than posting a patch to a mailing list.

    The branching model of the kernel is something people write up on paper and it looks great (much like Gitflow), but it is really time consuming to manage. Moving to a feature branch workflow and creating release branches as part of the release process allows a ton of things to be automated and simplified.

    Similarly, file systems aren’t really device specific, so you could build system tests for them covering benchmarking and standard use cases.

    Setting up a CI to perform smoke testing and linting is fairly standard.

    It’s really easy to set up a CI to trigger when a new branch/PR is created or updated; this reduces review to checking business logic, which makes reviews really quick and easy.
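
    For instance, a minimal pipeline in the GitHub Actions syntax that Gitea’s built-in Actions also understands (the job and commands are placeholders):

    ```yaml
    # .gitea/workflows/ci.yml -- hypothetical lint/smoke test pipeline
    name: ci
    on:
      pull_request:
      push:
        branches: [main]

    jobs:
      lint-and-smoke:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # Placeholder targets for whatever the project actually uses.
          - run: make lint
          - run: make smoke-test
    ```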

    Similarly, moving on to a decent issue tracker would help; Jira’s support for epics/stories/tasks/capabilities and its linking ability is a huge simplifier for long term planning.

    You can do things like define OKRs, then attach epics to them and stories/tasks to the epics, which lets you track progress towards goals.

    You can use issues the way the Linux community currently uses mailing lists.

    Combined with a Kanban board for tracking progress of tickets, you remove a ton of pain.

    Although open source issue trackers are missing the key productivity enablers of Jira, which makes these improvements hard to realise.

    The issue is people: the Linux kernel maintainers have been working one way for decades. Getting them to adopt new tools will be heavily resisted, same with changing how they work.

    It’s like how everyone outside knows that breaking the ABI definition out from the subsystem implementation would create a far more stable ABI, which would solve a bunch of issues and still allow change when needed, except no one in the kernel will entertain the idea.


  • Maven has unit and integration test phases, and there are a multitude of plugins designed to hook into those phases, but there are constraints by design.

    Trying to hook everything into the build management system is a source of technical debt; you’re using a tool for something it wasn’t designed for.

    I would look at what makes sense within the build management system and what makes sense in a CI pipeline.

    CI tools have different DSLs and usually provide a means to manage environments. Certain integration and system level tests are best performed there.

    For instance, I keep system tests as a separate managed project. The project can be executed from developer machines for local builds, but I also create a small build pipeline, triggered by pull requests, to build the project, deploy it and run the system tests against it.
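
    A sketch of how such a system test project can hook Maven’s integration test phase via the standard maven-failsafe-plugin; the target.url property is an invented example:

    ```xml
    <!-- system-tests/pom.xml: run *IT tests against a deployed environment -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-failsafe-plugin</artifactId>
      <version>3.2.5</version>
      <configuration>
        <systemPropertyVariables>
          <!-- tests read this to find the deployment under test -->
          <target.url>${target.url}</target.url>
        </systemPropertyVariables>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>integration-test</goal>
            <goal>verify</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
    ```

    Developers run something like “mvn verify -Dtarget.url=http://localhost:8080” locally, while the pull request pipeline deploys first and points the same build at the test environment.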

    This is why I say the build management system doesn’t really change: you should treat everything as discrete standalone components.

    The parent POM gets updated once every six months, the basic build verification CI pipeline only changes for the latest language release, etc.

    Projects which try to embed Gitflow into a POM or integrate CD into the Gradle file are the unbuildable messes I get asked to fix.


  • Maven has a high learning curve, but once learned it is incredibly simple to use.

    That high bar is created by the tool configuration. You can change and hack everything, but you have to understand how Maven works to do so. This generally blocks people from doing really stupid things, because you have to learn how Maven works to successfully modify it, and in doing so you learn why you shouldn’t.

    This is the exact weakness of Gradle: the barrier for modification is far lower and the tool is far less rigid, so you get lots of people who are still learning implementing all sorts of weird and terrible practices.

    The end result is I can usually dust off someone else’s old Maven project and it will build immediately using “mvn clean install”; about half the Gradle projects I have been brought in on won’t build without reverse engineering effort, because they have things hard coded all over them. A not-small percentage are so mangled they can’t be built without the machine of the dev who wrote them.

    Also, you really shouldn’t be tinkering with your build pipelines that much. Initial constraints determine the initial solution, then periodically you review them to improve. DevSecOps exists to speed development and ease support; it isn’t a goal in and of itself.


  • Tesla actually markets it as a positive.

    Car manufacturers have to set up different manufacturing lines to provide different feature levels, and Tesla argues this makes them more expensive. Tesla cars have all features installed, just disabled, and the optional extra packages are cheaper compared to their rivals as a result.

    To be honest there is a certain logic to it: if you’ve ever been in a Ford Focus LX (bottom of the range), it’s pretty clear they had to spend quite a bit of money on more basic systems. I honestly thought each LX was sold at a loss.



  • I want a build job to be triggered when a merge request is raised/changed, to verify merge requests. Primarily I want it to comment on/annotate changes, so peer review focusses on logic and warnings are clear.

    I can do this with Concourse, Circle, Jenkins and GitHub Actions, against Azure DevOps, Bitbucket Cloud, Bitbucket Server and GitHub. All GitLab can tell you is pass/fail, which was good in 2003 but seriously lacking in 2023.

    Similarly, I want the ability to trigger a release and supply a desired version for the release (or some way to achieve that, since our projects follow semantic versioning).

    The release DSL is incomplete and could not work on server/cloud last time I used it. The page claims it can do a lot, but there is a hole in it, and even the writer clearly knew it.

    I want the ability to specify multiple reusable pipelines in a central place. This is not possible in cloud.

    Lastly, I would like to have multiple potential pipelines in a repository (e.g. smoke test and release). You can hack this in via variables, but whether it works depends on the specific runner for your job; between self hosted and cloud you’ll notice different parsing behaviour depending on which host it runs on. This is shocking.
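
    The hack looks something like this (a sketch; the PIPELINE_TYPE variable and the jobs are invented for illustration):

    ```yaml
    # .gitlab-ci.yml -- selecting one of several pipelines via a variable
    workflow:
      rules:
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
        - if: '$PIPELINE_TYPE == "release"'

    smoke-test:
      rules:
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      script:
        - mvn clean install

    release:
      rules:
        # run manually with PIPELINE_TYPE=release and a RELEASE_VERSION set
        - if: '$PIPELINE_TYPE == "release"'
      script:
        - mvn release:prepare release:perform -DreleaseVersion=$RELEASE_VERSION
    ```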

    I have an email somewhere where I went through every GitLab CI DSL feature and documented which didn’t consistently work, which only worked consistently on cloud, and which only worked on server. Also things like release that are broken on both.

    The only way to make it work is to use multi-stage Docker builds, and if you’re doing that, Buildbot and a bash script would be better.



  • ActivityPub has a few parts; Lemmy implements the threaded message part.

    Mastodon implements the short messaging (‘posts’) part. Meta’s Threads will implement this.

    KBin implemented both parts; within KBin you’ll see ‘microblog’ as an option for magazines (communities/subreddits). This shows either posts made to the magazine or posts with hashtags associated with the magazine.

    The posts and threaded message parts overlap in how they work, so Mastodon users can see certain threaded messages and comment on them.
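
    Roughly, the difference is visible in the object types on the wire; a heavily simplified sketch of the two (real objects carry many more fields, and the instance names here are invented):

    ```json
    [
      {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Page",
        "name": "A Lemmy thread title",
        "attributedTo": "https://lemmy.example/u/alice"
      },
      {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Note",
        "content": "<p>A Mastodon-style post</p>",
        "attributedTo": "https://mastodon.example/users/bob"
      }
    ]
    ```

    Lemmy threads federate as Page objects (with comments as Notes), while Mastodon statuses are Notes; that shared Note type is the overlap that lets Mastodon users see and reply to threaded comments.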