- cross-posted to:
- technology@lemmy.ml
- nottheonion@lemmy.ml
- lobsters@lemmy.bestiver.se
I have no idea why the makers of LLM crawlers think it’s a good idea to ignore bot rules. The rules are there for a reason and the reasons are often more complex than “well, we just don’t want you to do that”. They’re usually more like “why would you even do that?”
Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not a bazillion random old page revisions from ages ago is that Wikipedia said “please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)”. Again: why would anyone index those?
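And honoring those rules takes almost no effort on the crawler side. A minimal sketch in Python using the standard library’s robots.txt parser (the URLs and the “ExampleBot” agent name here are just placeholders for illustration):

```python
from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt once per host.
robots = RobotFileParser()
robots.set_url("https://en.wikipedia.org/robots.txt")
robots.read()

# Ask before fetching: canonical article pages vs. technical/history pages.
candidates = [
    "https://en.wikipedia.org/wiki/Web_crawler",
    "https://en.wikipedia.org/w/index.php?title=Web_crawler&action=history",
]
for url in candidates:
    if robots.can_fetch("ExampleBot", url):
        print("allowed:", url)
    else:
        print("skipping, disallowed by robots.txt:", url)
```

Whether a given URL comes back allowed or disallowed obviously depends on whatever the live robots.txt says, which is exactly the point: the site owner gets to decide.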
Because you are coming from the perspective of a reasonable person
These people are billionaires who expect to get everything for free. Rules are for the plebs, just take it already
Imagine how much power is wasted on this unfortunate necessity.
Now imagine how much power will be wasted circumventing it.
Fucking clown world we live in
On one hand, yes. On the other… imagine the frustration of the management at companies making and selling AI services. This is such a sweet thing to imagine.
I just want to keep using uncensored AI that answers my questions. Why is this a good thing?
Because it only harms bots that ignore the “no crawl” directive, so your AI remains uncensored.
Good, I ignore that too. I want a world where information is shared. I can get behind the
Get behind the what?
Perhaps an AI crawler crashed Melvin’s machine halfway through the reply, denying that information to everyone else!
That’s not what the “nofollow” directive means.
Don’t worry, information is still shared. But with people, not with capitalist pigs.
Because it’s not AI, it’s LLMs, and all LLMs do is guess what word most likely comes next in a sentence. That’s why they are terrible at answering questions and do things like suggest adding glue to the cheese on your pizza because somewhere in the training data some idiot said that.
The training data for LLMs come from the internet, and the internet is full of idiots.
LLM is a subset of AI
That’s what I do too, just with less accuracy and knowledge. I don’t get why I have to hate this. Feels like a bunch of cavemen telling me to hate fire because it might burn the food.
Because we have better methods that are easier, cheaper, and less damaging to the environment. They are solving nothing and wasting a fuckton of resources to do so.
It’s like telling cavemen they don’t need fire because you can mount an expedition to the nearest volcano to cook food without the need for fuel and then bring it back to them.
The best case scenario is the LLM tells you information that is already available on the internet, but 50% of the time it just makes shit up.
Wasteful?
Energy production is an issue. Using that energy isn’t. LLMs are a better use of energy than most of the useless shit we produce everyday.
Did the LLMs tell you that? It’s not hard to look up on your own:
Data centers, in particular, are responsible for an estimated 2% of electricity use in the U.S., consuming up to 50 times more energy than an average commercial building, and that number is only trending up as increasingly popular large language models (LLMs) become connected to data centers and eat up huge amounts of data. Based on current data center investment trends, LLMs could emit the equivalent of five billion U.S. cross-country flights in one year.
Far more than straightforward search engines that have the exact same information and don’t make shit up half the time.
Surprised at the level of negativity here. Having had my sites repeatedly DDOSed offline by Claudebot and others scraping the same damned thing over and over again, thousands of times a second, I welcome any measures to help.
I think the negativity is around the unfortunate fact that solutions like this shouldn’t be necessary.
I’m imagining a sci-fi spin on this where AI generators are used to keep AI crawlers in a loop, and they accidentally end up creating some unique AI culture or relationship in the process.
I guess this is what the first iteration of the Blackwall looks like.
Gotta say “AI Labyrinth” sounds almost as cool.
Burning 29 acres of rainforest a day to do nothing
Bitcoin?
“I used the AI to destroy the AI”
We had to kill the internet, to save the internet.
We have to kill the Internet, to save humanity.
And consumed the power output of a medium country to do it.
Yeah, great job! 👍
We truly are getting dumber as a species. We’re facing climate change, yet we’re running some of the most power-hungry processors in the world to spit out cooking recipes and homework answers for millions of people. All to better collect their data to sell them products that will distract them from the climate disaster our corporations have caused. It would be really fun to watch if it weren’t so sad.
So the web is a corporate war zone now and you can choose feudal protection or being attacked from all sides. What a time to be alive.
There is also the corpo verified-ID route. In order to avoid the onslaught of AI bots and all that comes with them, you’ll need to sacrifice freedom, anonymity, and privacy like a good little peasant to prove you aren’t a bot… and so will everyone else. You’ll likely be forced to deal with whatever AI bots are forced upon you while within the walls, but better the enemy you know, I guess?
That’s just BattleBots with a different name.
You’re not wrong.
They should program the actions and reactions of each system to actual battle bots and then televise the event for our entertainment.
Then get bored when it devolves into a wedge meta.
Somehow one of them still invents Tombstone.
Putting a chopped-down lawnmower blade in front of a thing and having it spin at hard drive speeds is honestly kinda terrifying…
This is some fucking stupid situation: we finally got somewhat faster internet, and now these bots messing with each other are hogging the bandwidth.
Especially since the solution I cooked up for my site works just fine and took a lot less work. This is simply to identify the incoming requests from these damn bots – which is not difficult, since they ignore all directives and sanity and try to slam your site with like 200+ requests per second, which makes ’em easy to spot – and simply IP ban them (rough sketch below). This is considerably simpler, and doesn’t require an entire nuclear-plant-powered AI to combat the opposition’s nuclear-plant-powered AI.
In fact, anybody who doesn’t exhibit a sane crawl rate gets blocked from my site automatically. For a while, most of them were coming from Russian IP address zones for some reason. These days Amazon is the worst offender, I guess their Rufus AI or whatever the fuck it is tries to pester other retail sites to “learn” about products rather than sticking to its own domain.
Fuck 'em. Route those motherfuckers right to /dev/null.
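For the curious, the detection side really is about that simple. A rough sketch in Python (the threshold and window are made-up numbers, tune to taste; in a real setup the ban would go into a firewall rule rather than live in application memory):

```python
import time
from collections import defaultdict, deque

# Hypothetical limits: allow at most 20 requests per IP in any 10-second window.
MAX_REQUESTS = 20
WINDOW_SECONDS = 10

recent_hits = defaultdict(deque)   # ip -> timestamps of recent requests
banned = set()

def should_block(ip: str) -> bool:
    """Return True once an IP exceeds the crawl-rate limit; ban it from then on."""
    if ip in banned:
        return True
    now = time.monotonic()
    hits = recent_hits[ip]
    hits.append(now)
    # Drop timestamps that have fallen out of the window.
    while hits and now - hits[0] > WINDOW_SECONDS:
        hits.popleft()
    if len(hits) > MAX_REQUESTS:
        banned.add(ip)   # in practice, hand this off to the firewall instead
        return True
    return False
```

Hook should_block() into whatever serves the requests, and push the banned IPs into fail2ban or an nftables set instead of just remembering them in memory.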
and try to slam your site with like 200+ requests per second
Your solution would do nothing to stop the crawlers that are operating at 10-ish rps. There are ones out there operating at a mere 2 rps, but when multiple companies are doing it at the same time, 24x7x365, it adds up.
Some incredibly talented people have been battling this since last year and your solution has been tried multiple times. It’s not effective in all instances and can require a LOT of manual intervention and SysAdmin time.
https://thelibre.news/foss-infrastructure-is-under-attack-by-ai-companies/
Yep. After you ban all the easy to spot ones you’re still left with far too many hard to ID bots. At least if your site is popular and large.
Not exactly how I expected the AI wars to go, but I guess since we’re in a cyberpunk world, we take what we get
Next step is an AI that detects AI labyrinth.
It gets trained on labyrinths generated by another AI.
So you have an AI generating labyrinths to train an AI to detect labyrinths which are generated by another AI so that your original AI crawler doesn’t get lost.
It’s gonna be AI all the way down.
LLMs tend to be really bad at detecting AI generated content. I can’t imagine specialized models are much better. For the crawler, it’s also exponentially more expensive and more human work, and must be replicated for every crawler since they’re so freaking secretive.
I think the hosts win here.
All the while each AI costs more power than a million human beings to run, and the world burns down around us.
The same way they justify cutting benefits for the disabled to balance budgets instead of putting taxes on the rich or just not giving them bailouts, they will justify cutting power to you before a data centre that’s 10 corporate AIs all fighting each other, unless we as a people stand up and actually demand change.
Vote Blue No Matter Who
Any Democrat is Better than Any Republican
Plenty of Democrats are voting to put trump nominees in office, plenty are voting on partisan spending bills. The CR vote should tip you off that any democrat is not better than any republican… half of them are complicit too. 10 Senate Dems just financed this authoritarian takeover.
Not a single Democrat voted to confirm Hegseth, and 3 Republicans also didn’t, but he still got confirmed.
Every single Democrat was present and voted no on the budget that passed the House, and it still passed.
Even if 10 Dems voted not to shut down the government and enter a congressional recess, the CR only exists because Republicans wrote it and won’t compromise.
Any Democrat is Better than Any Republican.
Schumer rubber-stamped autocracy by not filibustering the CR. I think anyone who protects the constitution and their constituents is better than someone who doesn’t. Not that any Republicans fit the bill, but it’s not like we can just trust any old Democrat. Look at Gavin Newsom sliding to the right to maintain power. Is that the kind of Dems we want?
In my country blue is the conservatives… but I agree with the sentiment! It worked for California, and it can work for your whole country. Let the Dems stop fearing they’ll lose elections: give them comfortable margins, and then massively support progressives who can bring in the good stuff. They won’t have a chance if the party core thinks the very future of elections is on the line, but if they think they’ll likely win anyway, you might just be able to push through a progressive candidate and end the neoliberal decay.
To be fair, California is kind of dysfunctional and constantly trips over its own regulations when trying to get anything built. For instance, needing excessive environmental impact review for things like trains that will obviously help the environment, or limiting ferry boats crossing the bay to protect the environment even though it likely results in more people driving instead.
It’s so rare to get positive reinforcement like this, these days. Thank you.
No
Then you will get Republicans
Blue no matter who is precisely how we got Trump.
This is the great filter.
Why isn’t there detectable life out there? They all do the same thing we’re doing. Undone by greed.
So the world is now wasting energy and resources to generate AI content in order to combat AI crawlers, by making them waste more energy and resources. Great! 👍
The energy cost of inference is overstated. Small models, or “sparse” models like DeepSeek, are not expensive to run. Training is a one-time cost that still pales in comparison to, like, making aluminum.
Doubly so once inference goes more on-device.
Basically, only Altman and his tech bro acolytes want AI to be cost prohibitive so he can have a monopoly. Also, he’s full of shit, and everyone in the industry knows it.
AI as it’s implemented has plenty of enshittification, but the energy cost is kinda a red herring.
Generating content with AI to throw off crawlers. I dread to think of the resources we’re wasting on this utter insanity now, but hey who the fuck cares as long as the line keeps going up for these leeches.
You have thirteen hours in which to solve the labyrinth before your baby AI becomes one of us, forever.
This is getting ridiculous. Can someone please ban AI? Or at least regulate it somehow?
The problem is, how? I can set it up on my own computer using open source models and some of my own code. It’s really rough to regulate that.
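For a sense of scale, something like this is all the code a local setup needs (using the Hugging Face transformers library; the model name is just one example of a small, openly downloadable model):

```python
# Minimal local text generation with an open-weights model.
# No external service is involved once the weights are downloaded.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example model; any local open model works
)

result = generator(
    "Explain in one sentence why web crawlers should respect robots.txt.",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```

Good luck writing a regulation that meaningfully stops people from running a dozen lines of Python on their own hardware.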