Google’s carbon emissions have soared by 51% since 2019 as artificial intelligence hampers the tech company’s efforts to go green.

While the corporation has invested in renewable energy and carbon removal technology, it has failed to curb its scope 3 emissions, which are those further down the supply chain, and are in large part influenced by a growth in datacentre capacity required to power artificial intelligence.

The company reported a 27% increase in year-on-year electricity consumption as it struggles to decarbonise as quickly as its energy needs increase.

Datacentres play a crucial role in training and operating the models that underpin AI products such as Google’s Gemini and OpenAI’s GPT-4, which powers the ChatGPT chatbot. The International Energy Agency estimates that datacentres’ total electricity consumption could double from 2022 levels to 1,000TWh (terawatt hours) in 2026, approximately Japan’s level of electricity demand. AI will result in datacentres using 4.5% of global energy generation by 2030, according to calculations by the research firm SemiAnalysis.

  • Phil_in_here@lemmy.ca · 12 days ago

    But it’s critically necessary for functionality.

    Got a question? Google it and boom, there’s an AI summary for you. Now you’re engaged in scrolling past the dubious response and the sponsored links before you can get to the results you want.

    It’s called ‘enhancing the user experience’. It was tedious to ignore the paid ads where you were likely to be misled for profit, but now it’s enhanced tedium where you’re likely to be misled for no fucking reason.

  • Hirom@beehaw.org · 12 days ago

    the company also said AI could have a “net positive potential” on climate

    It’s going to get us Net Fucked by 2030

  • some_guy@lemmy.sdf.org · 12 days ago

    This is great. /s

    AI will never be able to do the stuff Sam Altman pretends is just around the corner. But it’s great at giving the Right plausible fake news machines. Let’s torch the earth for that!

  • ohwhatfollyisman@lemmy.world · 12 days ago

    ah, but on the flip side, ai can conjure up an email summary within seconds that can shave off up to 5 whole minutes from someone’s extremely busy day.

    surely that’s adequate recompense for all that energy spent?

    • skuzz@discuss.tchncs.de · 12 days ago

      I’m looking forward to 10 years from now, when this new novelty called WRITING YOUR OWN SHIT (and reading it!) comes back into prevalence, and everyone thinks it is such an original idea.

      (If we, or the Internet, are all still around then, anyway.)

    • utopiah@lemmy.ml · 12 days ago

      conjure up an email summary within seconds that can shave off up to 5 whole minutes

      … but can it? Like actually, can one do that?

      Sure, an LLM can generate something akin to a summary. It will look like it’s getting some of the points in a different form… but did it get the actual gist of it? Did it skip anything meaningful that, once ignored, will have far-reaching consequences?

      So yes, sure, an LLM can generate shorter text related to what was said during the meeting. But if there is limited confidence in its accuracy and no accountability, unlike a person who takes notes and summarizes and could face negative consequences for getting it wrong, then wouldn’t relying on such a tool create more risk?