
  • CEOs of companies that are adjacent to technology desperately want to ensure that their company isn’t seen as “outdated”, almost more than they want to actually not be outdated.

    So when a technology comes along that everyone in tech leadership says is the bestest, they want to make sure everyone knows they’re totally with it, whatever the cool kids are talking about.

    Hype train goes chugga chugga.

    As the hype train slows, they still need to be on board, but they set expectations based on what their people are actually telling them.

    So this is the CEO yelling to do something, and then the news slowly percolating back from the tech people that they can, but only a handful of projects can do so in a way that makes sense, has impact, and doesn’t disrupt a timeline or budget in a way that requires shareholder disclosure.


  • Oh, to me it just doesn’t remotely look like they’re interested in surveillance type stuff or significant analytics.

    We’re already seeing growing commercial interest in using LLMs for stuff like replacing graphic designers, which is folly in my opinion, or for building better gateways and interpretive tools for existing knowledge bases or complex UIs, which could potentially have some merit.

    ChatGPT isn’t the type of model that’s helpful for surveillance, because while it could tell you what’s happening in a picture, it can’t look at a billion sets of tagged GPS coordinates and tell you which one is up to some shenanigans, or look at every bit of video footage from an area and tell you which times depict certain behaviors.

    Looking to make OpenAI, who seem to me to be very clearly making a play for business-to-business knowledge management AI as a service, into a wannabe player for ominous government work seems like a stretch when we already have very clear-cut cases of AI companies doing exactly that, and even more. Like, Palantir’s advertisements openly boast about how they can help your drone kill people more accurately.

    I just don’t think we need to make OpenAI into Palantir when we already have Palantir, and OpenAI has their own distinct brand of shit they’re trying to bring into the world.

    Google doesn’t benefit by selling their data, they benefit by selling conclusions from their data, or by being able to use the data effectively. If they sell it, people can use the data as often as they want. If they sell the conclusions or impact, they can charge each time.
    While the FBI does sometimes buy aggregated location data, they can more easily subpoena the data if they have a specific need, and the NSA can do that without it even being public, directly from the phone company.
    The biggest customer doesn’t need to pay, so targeting them for sales doesn’t fit, whereas knowing where you are and where you go so they can charge Arby’s $2 to get you to buy some cheese beef is a solid, recurring revenue stream.

    It’s a boring dystopia where the second largest surveillance system on the planet is largely focused on giving soap companies an incremental edge in targeted freshness.



  • Yes, neither of us is responsible for hiring anyone for the OpenAI board of directors, which makes anything either of us thinks speculation.

    I suppose you could dismiss the thought or reasoning behind an argument as mere “reasons” to try to minimize it, but that’s a weak position to argue from. You might consider instead justifying your beliefs, or saying why you disagree, rather than just “yeah, well, that’s just, like, your opinion, man”.



  • Those aren’t contradictory. The Feds have an enormous budget for security, even just “traditional” security like everyone else uses for their systems, and not the “offensive security” we think of when we think “Federal security agencies”. Companies like Amazon, Microsoft, and Cisco will change products, build out large infrastructure, or even share the source code for their systems to persuade the Feds to spend their money. They’ll do this because they have products that are valuable to the Feds in general, like AWS, or because they already have security products and services that are demonstrably valuable to the civil security sector.

    OpenAI does not have a security product, they have a security problem. The same security problem as everyone else, that the NSA is in large part responsible for managing for significant parts of the government.
    The government certainly has an interest in AI technology, but OpenAI has productized its solutions with a different focus. What everyone thinks OpenAI wants to build, the government has already bought from Palantir.

    So while it’s entirely possible that they are making a play to try to get those lines of communication to government decision makers for sales purposes, it seems more likely that they’re aiming to leverage “the guy who oversaw implementation of security protocol for military and key government services is now overseeing implementation of our security protocols, aren’t we secure and able to be trusted with your sensitive corporate data”.
    If they were aiming to productize security and build ties on that side of things, someone like Krebs would be a better fit, since CISA is better positioned for those ties to turn into early information about product recommendations and the like.

    So yeah, both of those statements are true. This is a non-event with bad optics if you’re looking for it to be bad.




  • Man, if they could get sign-off to use the Olympics logo, it would more than make up for donating almost any number of condoms in advertising value alone.

    Side-by-side shots of different pairs of pole vaulters flopping onto their landing mats. Scenes of different sports, starting with slow ones, with cuts to different ones. Slowly it starts to jump to faster sports, where the athletes are making more vocalizations. By the end it’s just a focus on curlers furiously brooming while they all do their excited yells of joy, then a moment of silence while we zoom in on some shot put throwers’ faces just as they’re throwing, then a cut to a rapid series of divers splashing into the water, an audio overlay of a soccer commentator screaming “goal”, and a pan across the cheering crowd. “Trojex: for when the world comes together”, with five overlapping condoms in the background, fading to the Olympic logo.




  • I don’t think I implied that we couldn’t leave, or even that we shouldn’t. I said that Cuba’s not going to get us to leave by asserting that the agreement was never valid, because that’s just going to get the response of “yes it is”. For better or worse, nations negotiate backed by weapons, and a power imbalance is inevitable.
    It’s not even a matter of right or wrong, just reality. Few would argue that the Japanese constitution is illegitimate and that power should rightly devolve back to the Empire of Japan.

    You have some misapprehensions about the embargo of Cuba. It’s sometimes called a blockade for rhetorical effect, but it’s not actually a blockade.
    It’s not “enforced” from Guantanamo Bay; it’s enforced by civil penalties levied by the Treasury Department on US entities and their subsidiaries, and to a limited extent by the Department of State through threats of potential trade or diplomatic consequences.

    Cuba can and does trade with other nations, including US allies, and even the US. The harm the embargo does is via sharply limiting the availability of the lines of credit smaller nations rely on for continuing development of their infrastructure, not by literally preventing boats full of food from landing. Additional harm is done by denying them access to the largest convenient trading partner in the region for non-food, non-medical trade (the embargo’s terms have excluded those items for decades), which further harms their economy by denying them a reliable cash influx their neighbors rely on, as well as making imports more expensive through sheer transport distance.

    Justified or not, and regardless of poor negotiating position, refusal to engage in a dialogue is not helping Cuba’s position.
    They have their own ideological motivations for refusing to engage. Even a tacit acknowledgement that maybe they shouldn’t have nationalized the assets of US companies without compensation would get them a lot of negotiation credit, and it costs them nothing except the ideological factors. The US doesn’t get much out of it, and $6 billion (in 1959 dollars) can be written off fairly easily for the PR win.

    One side doesn’t need to budge, and the other one refuses, and they both have their reasons. I believe that was the point OP was going for.


  • That’s not the case; you just need to be able to make an outbound connection.

    The minutiae of how certbot works, or whether that specific person actually did it right, are kind of beside the point of my “intended to be funny but seemingly was not” comment about how sometimes the easiest solution to implement is the one you remember, even if it’s overkill for the immediate problem.


  • It’s a bit of a non-story, beyond basic press release fodder.

    In addition to its role as “digital panopticon”, the NSA also has a legitimate role in cybersecurity assurance, and they’re perfectly good at it. The guy in question was the head of not only the world’s largest surveillance entity, but also the world’s largest cybersecurity entity.
    Opinions on the organization aside, that’s solid experience managing a security organization.
    If OpenAI wants to make the case that they take security seriously, a former head of the NSA, Cyber Command, and the Central Security Service, who is also a department director at one university and a trustee at another and holds a couple of master’s degrees, isn’t a bad way to try to send that message.

    Other comments said OpenAI is the biggest scraping entity on the planet, but that title pretty handily goes to Google, or more likely to the actual NSA, given the whole “digital panopticon” thing and the fact that Google can’t FISA-warrant the phone company.

    Joining boards so they can write memos to the CEO/dean/regent/chancellor is just what former high ranking government people do. The job aggressively selects for overactive Leslie Knope types who can’t sit still and feel the need to keep contributing, for good or bad, in whatever way they think is important.

    If the US wanted to influence OpenAI in some way, they’d just pay them. The Feds’ budget is big enough that bigger companies will absolutely prostrate themselves for a sample of it. Or if they just wanted influence, they’d… pay them.
    They wouldn’t do anything weird with retired or “retired” officers when a pile of money is much easier and less ambiguous.

    At worst it’s OpenAI trying to buy some access to the security apparatus to get contracts. That seems less likely to me, since I don’t actually think they have anything valuable for that sector.




  • This is confusing to me, because the point of the request seems to be “get a certificate”, not “get a self-signed certificate by running the openssl command”. If you know how to get the result, it doesn’t really matter whether you remembered the shitty way or the overkill way offhand.

    Is it really more helpful to say “I remember how to do this, but let me look up a different way that doesn’t use the tools I’m familiar with”?
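    To make the comparison concrete, here’s a sketch of the two routes being debated (standard openssl and certbot invocations; the file names and `example.com` domain are placeholders, not anything from the original anecdote):

    ```shell
    # The quick-and-dirty way: a self-signed cert via openssl.
    # Fine for a class project, unacceptable in production because
    # no browser or client will trust it.
    openssl req -x509 -newkey rsa:2048 -nodes \
      -keyout key.pem -out cert.pem -days 365 \
      -subj "/CN=localhost"

    # The "overkill" way: a CA-signed cert via certbot, which needs
    # a real domain and an outbound connection to Let's Encrypt.
    # certbot certonly --standalone -d example.com
    ```

    Either way you end up with a certificate on disk; the difference is whether anyone else will trust it.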


  • Do you think that, in this example, using certbot is fucking shit up, or breaking something?

    The thing about overkill is that it does work. If you’re accustomed to using a solution in a professional setting, it’s probably both overkill and also vastly more familiar than the bare minimum required for a class project that would be entirely unacceptable in a professional setting.

    In OP’s anecdote, they did get their certificates, so I don’t quite see how your “intentionally fucking things up” claim describes what happened.