Hey fellow Lemmings,
I’m thrilled to announce the launch of the AI News Summary Bot, a project that brings you AI-generated news summaries! The bot is now live in our community at !news_summary@lemmy.dbzer0.com.
The bot is still in its early stages, and I’m excited to hear your feedback and suggestions on how to improve it. Feel free to share your thoughts and ideas.
Repository: If you’re interested in contributing or exploring the code behind the bot, you can find the repository at https://github.com/muntedcrocodile/ai_news_bot.
Donations: If you’d like to donate so I can spend more time on development, please do: monero:8916FjDhEqXJqX9Koec9WaZ4QBQAa6sgW6XhQhXSjYWpQiWB42GsggEh73YAFGF86GU2gEE1TTRdWSspuMgpWGkiPHkgBTX
Stay informed, and let’s build this community together!
EDIT: grammar
“AI” and “News” should be as far apart as “good idea” and “bad idea”.
It clearly states what it is and it stays in its own community. I don’t see a problem here.
Have you seen modern journalists? AI is already much better than the shit they spit out.
Removed by mod
Ye no shit. I’m just saying that journalists already write slop, and it would be better done by AI.
The AI summarizing their slop just sounds better because half of journalists think they are Stephen King instead of writing concise informational pieces. The AI will cut out the shit and just give the info in the summary.
If u went and actually looked at what it did, u would realise my implementation is not as bad as it sounds.
EDIT: plus it’s FOSS so u can go check the source
The concept is inherently flawed when you introduce an aspect (LLM) that can and will hallucinate (read: make shit up) when it’s trying to present reality.
As far as I’m concerned, there is no place for that anywhere remotely close to news.
Correct, but humans also exaggerate and lie a lot in the news, so maybe this AI could look through different sources and identify inaccuracies.
I haven’t looked at the source code tho…
After checking the source code, well… it just summarizes the posts. Doesn’t help much with the human error problem.
But as mentioned by OP, it’s in an early stage of development, and they plan to add features to “find the missing perspectives on an issue” and analyze political alignment information. So in the future maybe it could become a useful tool.
The model I have used gives a summary that is about 60% identical to one provided by a human, with an overall conceptual accuracy of >95%. I was very careful with my model selection and implementation to ensure hallucinations are extremely rare, if they happen at all. I’m not just feeding “summarise this: <text>” into a general-purpose LLM (known for hallucinations); I break the article into chunks at sentence breaks, then summarise each chunk directly by passing it to a purpose-built summarisation model.
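The chunk-then-summarise flow OP describes could be sketched roughly like this (the helper names, the 1000-character chunk size, and the model call shape are illustrative assumptions, not the bot’s actual code):

```python
# Sketch of chunking an article at sentence breaks and summarising each
# chunk with a purpose-built summarisation model. Helper names and the
# chunk size are assumptions, not taken from the ai_news_bot repo.
import re

def chunk_sentences(text, max_chars=1000):
    """Split text at sentence boundaries into chunks of at most max_chars."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

def summarise_article(text, summarizer):
    # summarizer could be e.g.
    #   transformers.pipeline("summarization",
    #                         model="Falconsai/text_summarization")
    parts = [summarizer(chunk)[0]["summary_text"]
             for chunk in chunk_sentences(text)]
    return " ".join(parts)
```

Summarising each chunk independently keeps every call well inside the model’s input limit, at the cost of losing cross-chunk context.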
Have you spotted any hallucinations so far? I’m curious about what kind of hallucinations can be created when an LLM summarizes a text.
Yep.
The whole original article text was “Advertisment” and the AI spat out “Advertisement. A ad. Click here for a link to the edd sa s.”
It obviously couldn’t summarise one word into multiple words and thus tried its best.
Only one so far.
Ok that’s funny XD
At least it won’t be harmful in any way.
I’m sure you know, but you’re probably going to get a lot of grief for this. I’m deeply suspicious of any new AI tool, especially one that tries to get in between me and my news (looking at you Feedly), and I’m sure I’m not the only one. So if you’re not already, I’d prepare yourself for a lot of strong emotions, and probably not in a good way.
If you wanted to get ahead of that kind of thing, you might want to explain what kinds of safeties you’re building into it. For example, on your roadmap you say you want it to “Generate argument of for and against perspective then summarise the result of the 2 arguments.” This kind of thing in particular is quite risky. Any time you try to introduce value statements into an LLM summary, you’re in the danger zone. Even if you’re just trying to summarize the actual perspective of the piece, you’re basically begging the LLM to hallucinate. But asking it to summarize hypothetical opposing arguments is just asking for trouble.
I could go on, but I don’t want to start a pile on. I appreciate when folks try to build cool stuff, you’ve just waded into some choppy waters…
Ohh, I’m expecting mass outrage. I was really pissed when Lemmy bullied the auto-tldr bot to death, so I created my own, better version (in its own community, so u don’t have to see it if u don’t wanna see it).
I’m currently using Falconsai/text_summarization as the summarisation model. It seems to be very good at unbiased general text summarisation.
Hey, I appreciate the work. No bullshit, it’s a great idea, and the way you implemented it as its own C/ is perfect.
That being said, it’s too much. It was essentially a wall of nothing but the bot for me. I’m not sure if that’s because there was just that much for it to scrape with it being new, if it needs a rate limitation to keep it from flooding, or if the list of sources needs to be pared down.
But it definitely interfered with accessing human posts by sheer volume, which is the bad thing about bots.
I don’t know Jack shit about how bots work under the hood, but it definitely needs some kind of change to how much it’s posting.
Again, I think the idea is great and I was initially happy about it. Thanks for doing something to help us all stay updated.
Thx for the support. Yeah, I agree it is a lot, but I think that’s mostly a byproduct of it being new and thus having a multi-day backlog of articles to catch up on.
A daily brief is on the roadmap to give u a summary of all the important things that happened in the last 24 hours, so that should be far less spammy, but I fear it will just give u a giant wall of text.
I also think it will be better once there is some voting happening; that should reduce the amount of content u actually see (well, at least on balanced sorting).
An AI is as good as its sources, and skimming through the domains from the posts, quite a few of those don’t seem like very reliable ones.
Feel free to recommend an RSS feed u think is more reliable.
Wow, a really interesting project! Fuck the haters OP! Checking it out rn!
Thx. The haters can go block the bot/instance, so I think their hating is very pointless.
I like it. Reminds me of the tldr bot about a year ago. I loved that thing
Ah good, an announcement for a new community to block.
It’s just a regular news feed with an AI summary. It still has the links to the og articles.
Regular from where? Living in Europe I don’t care for news outside my country.
I’ve aimed for a world news target atm, but u can run the bot urself and point it at any combination of RSS feeds.
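For anyone who does want to point a self-hosted copy at their own feeds, extracting items from an RSS document takes only the standard library; a minimal sketch (the function name is illustrative, not from the repo):

```python
# Pull (title, link) pairs out of an RSS document using only the stdlib.
# Illustrative only; the actual bot's feed handling may differ.
import xml.etree.ElementTree as ET

def feed_items(rss_xml):
    """Return (title, link) for every <item> in an RSS document string."""
    root = ET.fromstring(rss_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]
```

Fetch the feed body with `urllib.request.urlopen(url).read()` and pass it in.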
No thanks.
Thx for telling me.
Blocked.
Thanks, it was very useful to know that.