I run a fairly large forum, and I've been getting emails from Linode saying that CPU usage has been going over 90% multiple times a day, and users have been complaining that the site has been taking up to five or six seconds to load. I checked the logs, and I kept getting hit with hundreds of connections per second from specific addresses, so I set up rate limiting with Cloudflare.
I thought everything was going well after that, until suddenly it started getting even worse. I realized that instead of one IP hitting the site a hundred times per second, it was now hundreds of IPs hitting the site, each staying slightly below the throttling threshold I had set up.
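Per-IP rate limiting like this boils down to counting requests per source address in a sliding window. A minimal sketch (hypothetical thresholds, not Cloudflare's actual implementation) shows why hundreds of IPs each staying just under the limit all sail through:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 1.0   # hypothetical window size
MAX_PER_WINDOW = 50    # hypothetical per-IP threshold

_hits = defaultdict(deque)  # ip -> timestamps of recent requests

def allow(ip: str, now: float | None = None) -> bool:
    """Return True if this request from `ip` is still under the per-IP limit."""
    now = time.monotonic() if now is None else now
    q = _hits[ip]
    # Drop timestamps that have fallen out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= MAX_PER_WINDOW:
        return False   # a single noisy IP gets throttled...
    q.append(now)
    return True        # ...but 300 IPs at 49 req/s each all pass, roughly 14,700 req/s in total
```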
Who's doing this at such a high volume? Most of the data is static enough that there isn't value in frequent crawls, crawls are (probably) more expensive than caching, and small shops and hobbyists don't have the resources to move the needle.
rglover 5 hours ago [-]
> Some of the bots identify themselves, but some don't. Either way, the respondents say that robots.txt directives – voluntary behavior guidelines that web publishers post for web crawlers – are not currently effective at controlling bot swarms.
Is anybody tracking the IP ranges of bots or anything similar that's reliable?
It seems like they're taking the "what are you gonna do about it" approach to this.
Edit: Yes [1]
[1] https://github.com/FabrizioCafolla/openai-crawlers-ip-ranges
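Once you have a published list of crawler CIDR ranges like [1], matching a client address against it is straightforward. A minimal sketch, using placeholder ranges rather than any real ones:

```python
import ipaddress

# Placeholder ranges (TEST-NET blocks); in practice, load the published CIDRs from a source like [1].
CRAWLER_RANGES = [
    ipaddress.ip_network(cidr)
    for cidr in ("192.0.2.0/24", "198.51.100.0/24")
]

def is_known_crawler(client_ip: str) -> bool:
    """True if the client address falls inside any of the published crawler ranges."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in CRAWLER_RANGES)
```

The catch, as the reply below notes, is that this only covers crawlers that actually use their published ranges.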
Many bots use residential IP proxy networks, so they come from the same IPs that humans use.
millipede 4 hours ago [-]
Information is valuable; we just weren't charging for it. AI is just bringing the market for knowledge back into equilibrium.
dehrmann 2 hours ago [-]
It looks more like information is valuable in aggregate.
CSMastermind 6 hours ago [-]
What's the solution here? Metered usage based on network traffic that gets shared with the website owners?
Otherwise everything moves behind a paywall?
the_snooze 4 hours ago [-]
>Otherwise everything moves behind a paywall?
Basically. Paywalls and private services. Do things that are anti-scale, because things meant for consumption at scale will inevitably draw parasites.
Analemma_ 5 hours ago [-]
For now the solution is proof-of-work systems like Anubis combined with cookie-based rate limiting: you get throttled if your session cookie indicates you scraped here before, and if you throw the cookie out you get the POW challenge again. I don't know how long this will continue to work, but for my site at least it seems to be holding back the deluge, for the moment.
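For anyone curious what that flow looks like server-side, here's a rough sketch of the decision logic (not Anubis itself; the names and thresholds are made up): solving the proof of work mints a session token handed back as a cookie, and that same token keys the rate counter.

```python
import hashlib
import os
import time

DIFFICULTY = 20           # hypothetical: required leading zero bits in the PoW hash
MAX_REQS_PER_MINUTE = 30  # hypothetical per-session throttle

_sessions = {}  # token -> timestamps of recent requests

def new_challenge() -> str:
    """Issue a random nonce for the client to grind a proof of work against."""
    return os.urandom(16).hex()

def pow_is_valid(challenge: str, proof: str) -> bool:
    """Check that sha256(challenge + proof) has DIFFICULTY leading zero bits."""
    digest = hashlib.sha256((challenge + proof).encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

def redeem(challenge: str, proof: str) -> str | None:
    """If the proof checks out, mint a session token to hand back as a cookie."""
    if not pow_is_valid(challenge, proof):
        return None
    token = os.urandom(16).hex()
    _sessions[token] = []
    return token

def handle_request(cookie_token: str | None, now: float | None = None) -> str:
    """Return 'ok', 'throttle', or 'challenge' for an incoming request."""
    now = time.monotonic() if now is None else now
    if cookie_token is None or cookie_token not in _sessions:
        return "challenge"  # no cookie (or a discarded one): pay the PoW toll again
    recent = [t for t in _sessions[cookie_token] if now - t < 60]
    recent.append(now)
    _sessions[cookie_token] = recent
    return "throttle" if len(recent) > MAX_REQS_PER_MINUTE else "ok"
```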
darekkay 5 hours ago [-]
ai.robots.txt contains a big list of AI crawlers to block, either through robots.txt or via server rules:
https://github.com/ai-robots-txt/ai.robots.tx
Your link is missing the t at the end of .txt. You should be able to edit it though.
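If you go the server-rules route rather than robots.txt, the core of it is just matching the User-Agent header against that list. A minimal sketch, assuming you've exported the crawler names to a plain file with one name per line:

```python
# Hypothetical export of the ai.robots.txt list, one crawler name per line.
BLOCKLIST_FILE = "ai-robots-agents.txt"

with open(BLOCKLIST_FILE) as f:
    BLOCKED_AGENTS = {line.strip().lower() for line in f if line.strip()}

def is_blocked(user_agent: str) -> bool:
    """True if any listed crawler name appears in the request's User-Agent header."""
    ua = user_agent.lower()
    return any(name in ua for name in BLOCKED_AGENTS)

# e.g. in middleware: return 403 when is_blocked(request.headers.get("User-Agent", ""))
```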
josefritzishere 5 hours ago [-]
I think the solution is criminal penalties.
johnea 6 hours ago [-]
This is an ever growing problem.
The model of the web host paying for all bandwidth was somewhat aligned with traditional usage models, but the wave of scraping for training data is disrupting this logic.
I remember reading, maybe 10 years ago, that backend website communications (ads and demographic data sharing) had surpassed the bandwidth consumed by actual users. But even in that case, the traffic was still primarily linked to the website hosts.
Whereas with the recent scraping frenzy, the traffic is purely client side, not initiated by actual website users, and not particularly beneficial to the website host.
One has to wonder what percentage of web traffic now is generated by actual users, versus host backend data sharing, versus this mammoth new wave of scraping.
superkuh 6 hours ago [-]
While catchy, that headline kind of misses the point. It should be "Corporations are overwhelming websites with their hunger for AI data". They're the ones doing it, and corporations are by far the most damaging non-human persons (especially since they are formed nowadays to abstract away liability for the damage they cause).
This is not some new enemy called "bots". These are the same old non-human legal persons that polluted our physical world, now repeating the pattern in the digital one. Bots run by actual human persons are not the problem.
Analemma_ 5 hours ago [-]
I'm not sure that's true. As hardware gets cheaper, you're going to see more and more people wanting to build+deploy their own personal LLMs to avoid the guardrails/censorship (or just the cost) of the commercial ones, and that means scraping the internet themselves. I suspect the amount of scraping that's coming from individuals or small projects is going to increase dramatically in the months/years to come.
tartoran 6 hours ago [-]
RIP internet. It will soon make no sense to share something with the world unless you're in it for profit. But who's gonna pay for it?