• Zexks@lemmy.world · 2 days ago

    How is that any different from you? Objectively prove to everyone here that none of your opinions have ever been influenced by anything you've ever seen, read, or heard.

    • vacuumflower@lemmy.sdf.org · 12 hours ago

      Your own opinions are, in any case, the result of a much bigger amount of much more relevant data.

      An AI model is a set of coefficients that average a dataset by a “one size fits all” measure. Those coefficients are found through an expensive process, using criteria (again “one size fits all”) set by the company making it. From them, the machine generates (looks up, actually) the most probable text. It’s like a music box. A beautiful toy.
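
      To make the “music box” point concrete, here is a minimal sketch of generation as repeated lookup of the most probable next token from a fixed table of coefficients. The vocabulary and probability values are invented toy numbers, not anything from a real model:

      ```python
      # Toy "music box" generator: a fixed table of learned coefficients,
      # and generation is just repeatedly picking the most probable next token.
      import numpy as np

      vocab = ["the", "cat", "sat", "on", "mat", "."]
      # Hypothetical coefficients: P(next token | current token), one fixed row per token.
      probs = np.array([
          [0.0, 0.6, 0.0, 0.0, 0.4, 0.0],   # after "the"
          [0.0, 0.0, 0.9, 0.0, 0.0, 0.1],   # after "cat"
          [0.0, 0.0, 0.0, 1.0, 0.0, 0.0],   # after "sat"
          [0.9, 0.0, 0.0, 0.0, 0.1, 0.0],   # after "on"
          [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],   # after "mat"
          [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],   # after "."
      ])

      def generate(start, steps=5):
          out = [start]
          for _ in range(steps):
              row = probs[vocab.index(out[-1])]
              out.append(vocab[int(np.argmax(row))])  # always the single most probable choice
          return " ".join(out)

      print(generate("the"))  # same fixed table, same line, every time
      ```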

      So you have different motivations and abstract ideas in different situations. You also have something like a shared codebook with the other people making decisions - your instincts and associations. Reading what they say or seeing what they do, you build a mirror model of them in your head; it might be worse than the real thing, but it’s something very hard for text analysis to approach.

      That model doesn’t: it has the same averaged line for all situations, and it also can’t determine (at the level described) that it doesn’t know something. To determine that you don’t know something, you need an abstract model, not a language model.
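
      A small sketch of that “can’t say it doesn’t know” point: a softmax over a fixed vocabulary always produces a distribution that sums to 1 over the known tokens, even for a nonsense input. The logit values below are made up for illustration:

      ```python
      # A softmax over a fixed vocabulary has no built-in "unknown" outcome;
      # it always distributes all the probability over tokens it already has.
      import numpy as np

      def softmax(logits):
          e = np.exp(logits - logits.max())
          return e / e.sum()

      vocab = ["yes", "no", "maybe", "42"]
      logits_for_nonsense_question = np.array([0.1, 0.2, 0.05, 0.15])  # hypothetical

      p = softmax(logits_for_nonsense_question)
      print(dict(zip(vocab, p.round(3))))
      print(round(float(p.sum()), 6))  # always 1.0: some token is always "the answer"
      ```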

      I dunno what their current state is; all I’ve read and kinda understood seemed to be about optimizing computation for language models and structuring their application to imitate a syllogism system.

      I think that with the current approaches, making a system that translates language into a certain abstract model (tokenization isn’t even close to that; you’d need some topology with areas that can easily be merged or split, instead of token points with distances) and abstract entities back into language would be very computationally expensive.
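
      For what “token points with distances” means here, a rough sketch: in current models each token is a fixed vector, and similarity is just a distance or angle between points. The vectors are invented for illustration:

      ```python
      # Tokens as fixed points in a vector space; "meaning" reduced to distances.
      import numpy as np

      embeddings = {            # hypothetical 3-d token embeddings
          "cat": np.array([0.9, 0.1, 0.0]),
          "dog": np.array([0.8, 0.2, 0.1]),
          "justice": np.array([0.0, 0.1, 0.9]),
      }

      def cosine(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      print(cosine(embeddings["cat"], embeddings["dog"]))      # nearby points
      print(cosine(embeddings["cat"], embeddings["justice"]))  # distant points
      # Note there is no operation here for merging or splitting regions of
      # meaning; the geometry is just fixed points and distances between them.
      ```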