• Electricd@lemmybefree.net · 2 hours ago

    Most objective article (sarcasm)

    In fact it has a whole-ass “AI” chatbot product, Duck.ai, which is bundled in with DuckDuckGo’s privacy VPN for $10 a month

  • Electricd@lemmybefree.net · 2 hours ago

    Most people wanting AI probably don’t use DDG though. Else they would use Brave Search I guess

    I haven’t seen the poll though

  • Tyrq@lemmy.dbzer0.com · 10 hours ago

    I would like to petition to rename AI to

    Simulated
    Human
    Intelligence
    Technology

  • mechoman444@lemmy.world · 14 hours ago

    Okay, so that’s not what the article says. It says that 90% of respondents don’t want AI search.

    Moreover, the article goes into detail about how DuckDuckGo is still going to implement AI anyway.

    Seriously, titles in subs like this need better moderation.

    The title was clearly engineered to generate clicks and drive engagement. That is not how journalism should function.

    • squaresinger@lemmy.world · 3 hours ago

      That is the title from the news article. It might not be how good journalism would work, but copying the title of the source is pretty standard in most news aggregator communities.

    • LobsterJim@slrpnk.net · 12 hours ago

      Unless I’m mistaken this title is generated to match the title at the link. Are you saying the mods should update titles to accurately reflect the content of the articles posted?

  • Echo Dot@feddit.uk · 17 hours ago

    I would have no problem with AI if it could be useful.

    The problem is that no matter how many times I’m promised otherwise, it cannot automate my job and talk to the idiots for me. It just hallucinates random gibberish, which is obviously unhelpful.

    • Electricd@lemmybefree.net · 2 hours ago

      Prompt or model issue, my dude

      Or you’re one of the few who have a pretty niche job

      It helps with things like finding different words or vocabulary, code-related knowledge, Linux issues… or even well-known facts that you happen not to know

    • Regrettable_incident@lemmy.world · 12 hours ago

      I’ve found it useful for a few things. I’d had a song intermittently stuck in my head for a few years and had unsuccessfully googled it a few times. I couldn’t remember the artist, the title, or the lyrics (it was in a language I don’t speak), and ChatGPT got it in a couple of tries. It’s also good for things I’m too vague about to construct a search query for and just want to explore. Stuff like that. I just don’t trust it with anything I want actual facts for.

      • kossa@feddit.org · edited · 60 minutes ago

        Yep, preparing a proper search is my use case. Like “what is this special screw called?”. I can describe the screw and ask the model for a list of names it might go by.

        Then I can search for each term on that list, and one of them turns out to be the correct one. It’s way better than hoping that somebody described the screw in the same words in some obscure forum.

        But is it worth burning the planet, making RAM, GPUs, and hard drives unaffordable for everybody, and probably crashing the world economy, for a better screw search? I doubt it.

    • architect@thelemmy.club · 15 hours ago

      It’s really good at answering customer questions for me, to be honest.

      But, I still have to okay it. Just in case. There’s no trust.

      However that still does take a lot less bandwidth for me because I’m not good at the customer facing aspects of my business.

    • Soup@lemmy.world · 15 hours ago

      I still would, as the increased productivity, once again, does not lead to reduced hours. Always more productive, always locked into a bullshit schedule.

  • dantheclamman@lemmy.world · 21 hours ago

    I think LLMs are fine for specific uses: brainstorming, debugging code, generic code examples, and so on. People are just wary of oligarchs mandating how we use technology. We want to be customers, but they want to shape how we work instead, as if we were livestock.

    • NotMyOldRedditName@lemmy.world · 20 hours ago

      Right? Like, let me choose if and when I want to use it. Don’t shove it down our throats and then complain when we get upset or don’t use it how you want us to. We’ll use it however we want, not however you want us to.

      • NotMyOldRedditName@lemmy.world · 20 hours ago

        I should further add: don’t fucking use it in places where it’s not capable of functioning properly and then try to deflect the blame from yourself onto the AI, like Air Canada did.

        https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know

        When Air Canada’s chatbot gave incorrect information to a traveller, the airline argued its chatbot is “responsible for its own actions”.

        Artificial intelligence is having a growing impact on the way we travel, and a remarkable new case shows what AI-powered chatbots can get wrong – and who should pay. In 2022, Air Canada’s chatbot promised a discount that wasn’t available to passenger Jake Moffatt, who was assured that he could book a full-fare flight for his grandmother’s funeral and then apply for a bereavement fare after the fact.

        According to a civil-resolutions tribunal decision last Wednesday, when Moffatt applied for the discount, the airline said the chatbot had been wrong – the request needed to be submitted before the flight – and it wouldn’t offer the discount. Instead, the airline said the chatbot was a “separate legal entity that is responsible for its own actions”. Air Canada argued that Moffatt should have gone to the link provided by the chatbot, where he would have seen the correct policy.

        The British Columbia Civil Resolution Tribunal rejected that argument, ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees.

        • Regrettable_incident@lemmy.world · 12 hours ago

          They were trying to argue that it was legally responsible for its own actions? Like, that it’s a person? And not even an employee at that? FFS

          • NotMyOldRedditName@lemmy.world · 12 hours ago

            You just know they’re going to make a separate corporation, put the AI in it, and then contract it to themselves and try again.

        • NotAnonymousAtAll@feddit.org · 15 hours ago

          ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees

          That is a tiny fraction of a rounding error for a company that size. And it doesn’t come anywhere near being just compensation for the stress and loss of time it likely caused.

          There should be some kind of general punitive “you tried to screw over a customer or the general public” fee, defined as a fraction of the company’s revenue. It could be waived for small companies if the resulting sum is too small to be worth the administrative overhead.

          • merc@sh.itjust.works · 12 hours ago

            It’s a tiny amount, but it sets an important precedent. Not only Air Canada, but every company in Canada is now going to have to follow that precedent. It means that if a chatbot in Canada says something, the presumption is that the chatbot is speaking for the company.

            It would have been a disaster to have any other ruling. It would have meant that the chatbot was now an accountability sink. No matter what the chatbot said, it would have been the chatbot’s fault. With this ruling, it’s the other way around. People can assume that the chatbot speaks for the company (the same way they would with a human rep) and sue the company for damages if they’re misled by the chatbot. That’s excellent for users, and also excellent to slow down chatbot adoption, because the company is now on the hook for its hallucinations, not the end-user.

        • lime!@feddit.nu · 17 hours ago

          …what kind of brain damage did the rep have to think that was a viable defense? surely their human customer service personnel are also responsible for their own actions?

          • NotMyOldRedditName@lemmy.world · 17 hours ago

            It makes sense for them to try it; it’s just evil-company logic.

            If they lose, it’s some bad press and people will forget.

            If they win, they’ve begun setting precedent to fuck over their customers and earn more money. Even if it only had a 5% chance of success, it was probably worth it.

  • Suavevillain@lemmy.world · 18 hours ago

    AI is not impressive or worth all the trade offs and worse quality of life. It is decent in some areas but mostly grifter tech.

  • kaotic@lemmy.world · 17 hours ago

    Don’t build AI into everything and assume you know how your users want to use it. If they do want AI, give them an MCP server to interact with your service and let them build out their own tooling.
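    (For context: MCP, the Model Context Protocol, exposes a service’s capabilities as “tools” over JSON-RPC 2.0, so users can wire them into whichever client they prefer. Below is a rough sketch of what one tool call looks like on the wire; the `tools/call` method name and `content` result shape follow the MCP spec, but the `search_notes` tool and the toy handler are invented purely for illustration.)

```python
import json

# A hypothetical MCP-style "tools/call" request, as a client might send it.
# MCP messages are JSON-RPC 2.0; "search_notes" is an invented example tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_notes",
        "arguments": {"query": "bereavement fares"},
    },
}

def handle(msg: dict) -> dict:
    """Toy server-side dispatch: look up the named tool and wrap its
    output in a JSON-RPC response envelope."""
    name = msg["params"]["name"]
    args = msg["params"]["arguments"]
    if msg["method"] == "tools/call" and name == "search_notes":
        # A real server would query its own data here.
        result = {"content": [{"type": "text",
                               "text": f"results for {args['query']!r}"}]}
    else:
        result = {"content": [], "isError": True}
    return {"jsonrpc": "2.0", "id": msg["id"], "result": result}

# Round-trip through JSON, as a real transport would.
response = handle(json.loads(json.dumps(request)))
print(response["result"]["content"][0]["text"])
```

    The point of the protocol is that the service only has to publish tools like this once; whether they get called from a chatbot, a script, or a user’s own agent is the user’s choice, not the vendor’s.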

  • Young_Gilgamesh@lemmy.world · 1 day ago

    Google became crap ever since they added AI. Microsoft became crap ever since they added AI. OpenAI started losing money the moment they started working on AI. Coincidence? I think not!

    Rational people don’t want Abominable Intelligence anywhere near them.

    Personally, I don’t mind the AI overviews, but they shouldn’t show up every time you do a search. That’s just a waste of energy.

    • MBech@feddit.dk · 23 hours ago

      Google became crap about 10 years ago when they added the product banner in the top, and had the first 5-10 search results be promoted ads. Long before they ever considered adding AI.

      • merc@sh.itjust.works · 12 hours ago

        Google became crap shortly after their company name became a synonym for online searches. When you don’t have competitors, you don’t have to work as hard to provide search results – especially if you’re actively paying Apple not to come up with their own search engine, Firefox to maintain Google as their default search engine, etc. IMO AI has been the shiny new thing they’re interested in as they continue to neglect search quality, but it wasn’t responsible for the decline of search quality.

      • parricc@lemmy.world · 22 hours ago

        Time is sneaking up on us. It’s not even 10 years anymore. It’s closer to 20. 💀

      • Young_Gilgamesh@lemmy.world · 22 hours ago

        I guess. And then they removed the “Don’t be evil” motto just to drive the point home.

        But you have to agree, the company DID become even worse once they started using AI.

        • MBech@feddit.dk · 22 hours ago

          Oh absolutely. It’s just important to remember that they’ve been horrible for a long time, and have shown more ads in a single search than your average 30-minute YouTube video.

    • Spaniard@lemmy.world · 21 hours ago

      Google and Microsoft were crap before AI. I don’t remember when Google removed the “don’t be evil” motto, but by that point they had already been crap for a few years.

    • fleton@lemmy.world · 22 hours ago

      Yeah, Google kinda started sucking a few years before AI went mainstream; the search results took a dive in quality and garbage had already started floating to the top.

      • Reygle@lemmy.world · 23 hours ago

        I mind them. Nobody at my workplace scrolls past the AI overview, and every single overview they quote to me about technical issues is wrong, 100%. Not even an occasional “lucky guess”.