• The dotcom bubble was famously composed of companies that raked in investor money as long as they included some vague allusion to the “internet” in their business plan. Most of them were useless websites that weren’t even worth the disk space they were hosted on.

    • ☆ Yσɠƚԋσʂ ☆OP

      It’s absolutely not useless to the average person. AI can already do tons of useful things. Just a few examples off the top of my head: grammar and spell checking, text-to-speech narration, translations, image descriptions for the visually impaired, subtitle generation, document summaries, and language learning.

      I find these tools also work great as sounding boards, and they can write code to varying degrees. While people sneer at the fact that they often produce shitty code, the reality is that if somebody has a problem they need automated, their only option before was to pay thousands of dollars to a software developer. If a kludgy AI-generated script can solve their problem, then it’s still a win for them.

      • MizuTama [he/him, any]

        I think there should be gentle pushback on the language learning aspect. I’ve definitely had it mangle intent when seeing how it translates and interprets things in my second language, and its approach to grammatical rules can be somewhat rigid. Both of those are somewhat contextual, though: in my experience an LLM is best in contexts where you know enough to correct it, and if you’re using it for those two things, you won’t notice its particular peculiarities. If you mean the narrow case of needing a reminder for rules you mostly know already, then I agree it can be useful.

        For context, regular translations by humans and old-school machine translation have the same intent and meaning issues (classic machine translation to a much worse degree than either LLMs or humans, in my experience), so I frankly don’t find an issue with it in a translation context.

        I like to call LLMs the whatchamacallit machines: the handful of times I’ve interacted with them, they worked best in contexts where I needed something I would know when I saw it but couldn’t generate myself.

        • ☆ Yσɠƚԋσʂ ☆OP

          I’ve been using this app to learn Mandarin, and the AI chatbot in it seems to work really well: https://www.superchinese.com/

          I can imagine that it might fail at something very nuanced, but at my level it’s really useful because I just need basic practice and being able to have it do casual conversation and check my pronunciation is incredibly helpful.

          > I like to call LLMs the whatchamacallit machines: the handful of times I’ve interacted with them, they worked best in contexts where I needed something I would know when I saw it but couldn’t generate myself.

          In general, that’s the rule of thumb I have with these things as well. They’re most useful in contexts where you understand the subject matter well and can make good independent judgments about the correctness of the output.

          • MizuTama [he/him, any]

            > I can imagine that it might fail at something very nuanced, but at my level it’s really useful because I just need basic practice and being able to have it do casual conversation and check my pronunciation is incredibly helpful.

            Oh, in that case, yeah, if you just need the basics it tends not to be too bad. I feel it starts to fall off once you close in on intermediate, but so do a lot of tools at that point.

      • @bobs_guns@lemmygrad.ml

        Image generation can also be somewhat useful for language learning if you want to make a very specific illustration for a flashcard or include some mnemonics in the image. It’s not useless, but the path to profitability for LLMs is not very good.

        • ☆ Yσɠƚԋσʂ ☆OP

          For sure, I expect that the most likely outcome is that LLMs will be something you run locally going forward, unless you have very specific needs for a very large model. On the one hand, the technology itself is constantly getting better and more efficient, and on the other, hardware keeps improving and getting faster. You can already run a full-blown DeepSeek model on a Mac Studio for around $8k. It’s a lot of money, but it’s definitely in the consumer realm. In a few years the cost will likely drop enough that any laptop will be able to run these kinds of models.
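
          As a rough illustration of what “running locally” looks like in practice, here’s a minimal sketch assuming the llama-cpp-python bindings and a GGUF build of a DeepSeek distill; the model file name below is just a placeholder:

          ```python
          # Minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
          # The model path is a placeholder; point it at whatever GGUF file you actually downloaded.
          from llama_cpp import Llama

          llm = Llama(
              model_path="./models/deepseek-distill-q4_k_m.gguf",  # placeholder file name
              n_ctx=4096,        # context window size
              n_gpu_layers=-1,   # offload all layers to the GPU/Metal when available
          )

          reply = llm.create_chat_completion(
              messages=[{"role": "user", "content": "Summarize this paragraph in one sentence: ..."}],
              max_tokens=256,
          )
          print(reply["choices"][0]["message"]["content"])
          ```

          The same pattern works for any local model; larger quantizations just need more RAM.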

      • RedSailsFan [none/use name]

        Think this is the first time I’ve seen you talk positively about AI and not have someone come in to start an argument with you lol