• @dangling_cat@lemmy.blahaj.zone
    71
    4 months ago

    Now they just use an LLM to generate formulas to calculate tariffs that fit their fantasy. Gosh, I wish they'd actually taste their own failure for once instead of constantly failing up.

        • 0xD
          2
          edit-2
          4 months ago

          Because it’s a naive take. Technology can help us make the world a better place for all - standing in the way are greedy pigs and asshole, ignorant politicians.

          “AI” isn’t the problem, our approach to it is.

          • @base@lemmy.world
            1
            edit-2
            4 months ago

            show me that magic LLM which doesn't infringe the copyright of nearly everybody, which doesn't have an unprecedented impact on the environment to train and run, which won't be used to cut people off from their income, which isn't owned by technocratic billionaire assholes, which isn't peddled and promoted by fascist governments, which doesn't hallucinate, which isn't a black box, which actually works and doesn't fail at the simplest tasks sometimes, which doesn't infringe my privacy (allegedly, tbf, but you know). the list goes on.

            sure, sure, technology itself isn't evil. but that's not a hill i want to die on in the case of LLMs. they're problematic on so many levels. full stop. i don't think it's a naive take to have some skepticism toward this clusterfuck. potentially some technologies could make the world a better place, but LLMs don't fit into this category.

          • @deathbird@mander.xyz
            1
            edit-2
            3 months ago

            Kinda sorta.

            AI, or rather LLMs, can barf out a lot of passable text quickly. That can be a useful starting point for real work, if a human mind is willing and able to review and repair it. It's like having an idiot intern whom you can never really trust.

            But the number of people who use LLMs in a way that reflects an understanding of their limitations is vanishingly small. Most people just don't assume that something that looks valid needs to be fully and critically reviewed. That's why we've had multiple cases of lawyers having ChatGPT write their legal briefs based on hallucinated legal precedent.

            • 0xD
              1
              4 months ago

              That’s not a problem of the technology though, that’s human idiocy.

              • @deathbird@mander.xyz
                1
                edit-2
                3 months ago

                On the one hand, absolutely, human idiocy.

                On the other hand, as a society it behooves us to think about how to stop idiots from hurting themselves and others. With IT, and in the context of corpo marketing hype, I am deeply concerned about politicians using AI or allowing AI to be used to do things poorly and thus hurt people simply because they have too much faith in the tool or its salesmen. Like, for example, rewriting the Social Security database.

  • @tias@discuss.tchncs.de
    39
    4 months ago

    If the safeguards can be so easily removed, what's the point of putting them there in the first place?

    • @sugar_in_your_tea@sh.itjust.works
      5
      4 months ago

      Never attribute to AI that which can be adequately explained by incompetence.

      If you dig into the tariff calculation, it's simply trade deficit / 2; the other two variables are constants that cancel each other out. AI would probably have produced a more complex algorithm. This is just a lazy human adding some flourishes to BS their way through.
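
      A minimal sketch of that arithmetic, assuming the publicly reported constants (an elasticity of 4 and a pass-through of 0.25, which multiply to 1 and cancel) and the reported 10% baseline floor:

```python
# Sketch of the reported "reciprocal tariff" formula.
# Assumed constants: import-demand elasticity e = 4 and tariff
# pass-through p = 0.25; since e * p == 1 they cancel, leaving
# (imports - exports) / imports, which is then halved.

def tariff_rate(exports: float, imports: float) -> float:
    """Half the trade-deficit ratio, with the reported 10% floor."""
    e, p = 4.0, 0.25                      # e * p == 1, so they cancel out
    deficit_ratio = (imports - exports) / (e * p * imports)
    return max(0.10, deficit_ratio / 2)

# Imports of 100 and exports of 40 give a deficit ratio of 0.6,
# so the rate comes out to 0.6 / 2 = 30%.
print(tariff_rate(40.0, 100.0))  # 0.3
```

      With balanced trade the deficit ratio is zero, so the floor applies and the formula returns the flat 10% baseline.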

  • @Donjuanme@lemmy.world
    31
    4 months ago

    Now I’m frightened to my core.

    AI doesn’t scare me.

    People making decisions off of AI scare me.

    The government mandating people use AI to make decisions frightens me to my core.

      • @Donjuanme@lemmy.world
        4
        4 months ago

        That’s the crux of my statement, yes.

        AI doesn’t scare me.

        How people respond to it scares me.

        And that it’s being prepared to drive/copilot government agencies scares the everliving shit out of me.

        • @KeenFlame@feddit.nu
          -3
          4 months ago

          You are both blind. Be scared of the combo and stfu. It's evil people with software that helps them. It's history all over again. Not complicated. Stop arguing and fucking stop them.

    • Uriel238 [all pronouns]
      7
      4 months ago

      So far the Trump administration and the Federal government under him don’t need AI to justify stupid, globe-wrecking policy.

      “AI told me to do [wrongful action]” is no more a valid excuse than “I was just following orders.” At least not to an international tribunal or a (seriously peckish) public.

        • @pajam@lemmy.world
          4
          4 months ago

          I was gonna say, it certainly allows insurance companies to launder their original intentions: “oopsy, AI made us deny all those claims (we wanted to deny anyway), don't be mad at us.” Then people bitch about the bad AI causing all these issues instead of the insurance company that wanted the exact same outcome, regardless of AI.

          All that said, I’m giving another recommendation that people go subscribe to Citations Needed.

  • Ebby
    29
    edit-2
    4 months ago

    I don’t necessarily oppose the use of AI as a tool for humans to utilize, but I do have issues with it dictating policies or control over human beings. By the people, for the people, absolutely does not include AI. (Sorry Data, not yet)

    Also, any prompts and prompt instructions should be public with results. It is just way too easy to fuck up.

  • katy ✨
    25
    4 months ago

    if america is so dominant in AI, why did one chinese open-source LLM wipe billions off the market cap?

  • @skozzii@lemmy.ca
    23
    4 months ago

    AI is wrong so often that this is extremely scary.

    They can also do evil things and “blame” them on AI.

  • klobuerschtler
    12
    4 months ago

    Just another attempt to rob American workers of their dignity. Workers of the US, you have been played!