• @catloaf@lemm.ee
    19 points · 2 months ago

    An HTTP request is a request. Servers are free to rate limit or deny access.
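
    A minimal sketch of that in Python's standard library, assuming a made-up user-agent string to block (a real deployment would do this in nginx, a WAF, or similar):

    ```python
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The server is free to deny: here, any hypothetical "scrapybot"
            # user agent gets 403 Forbidden; everyone else gets 200 OK.
            agent = self.headers.get("User-Agent", "").lower()
            self.send_response(403 if "scrapybot" in agent else 200)
            self.send_header("Content-Length", "0")
            self.end_headers()

    HTTPServer(("", 8080), Handler).serve_forever()
    ```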

    • FaceDeer
      18 points · 2 months ago

      And Wikimedia, in particular, is all about publishing data under open licenses. They want the data to be downloaded and used by others. That’s what it’s for.

      • LostXOR
        4 points · 2 months ago

        Even so, I think it would be totally reasonable for them to block web scrapers, since they already provide better ways to download all their data.
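
        For instance, Wikimedia publishes bulk dumps at dumps.wikimedia.org, so the polite route is one big download rather than millions of page requests. A rough sketch of that (the exact filename below follows the well-known dump layout but should be verified against the live dump index):

        ```python
        import urllib.request

        # Illustrative URL in the standard dumps.wikimedia.org layout;
        # check the current dump index before relying on this filename.
        URL = ("https://dumps.wikimedia.org/enwiki/latest/"
               "enwiki-latest-pages-articles.xml.bz2")

        with urllib.request.urlopen(URL) as resp, open("enwiki.xml.bz2", "wb") as out:
            while chunk := resp.read(1 << 20):  # stream in 1 MiB chunks
                out.write(chunk)
        ```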

        • FaceDeer
          7 points · 2 months ago

          At the root of this comment chain is a proposal to have laws passed about this.

          People can set up their web servers however they like. It’s on them to do that; it’s their web servers. I don’t think there should be legislation about whether you’re allowed to issue perfectly ordinary HTTP requests to a public server; let the server decide how to respond to them.

    • @taladar@sh.itjust.works
      12 points · 2 months ago

      Rate limiting in itself requires resources that are not always available. For one thing, you can only rate limit individuals you can identify, so you need to keep data about past requests in memory and attach counters to it. Even then, that won’t help if the requests come from IPs that are easily changed.
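
      A sketch of the bookkeeping involved, using a simple fixed-window counter keyed by IP (the limit and window are arbitrary here): memory grows with every distinct client seen, and a scraper that rotates IPs simply gets a fresh counter each time.

      ```python
      import time
      from collections import defaultdict

      WINDOW = 60    # seconds per window (arbitrary for this sketch)
      LIMIT = 100    # allowed requests per window per IP (arbitrary)

      # One entry per distinct client IP: this is the memory cost described
      # above, and it only grows unless old entries are evicted.
      counters = defaultdict(lambda: [0.0, 0])  # ip -> [window_start, count]

      def allow(ip: str) -> bool:
          now = time.monotonic()
          window_start, count = counters[ip]
          if now - window_start >= WINDOW:
              counters[ip] = [now, 1]   # fresh window; a rotated IP also lands here
              return True
          if count < LIMIT:
              counters[ip][1] += 1
              return True
          return False                  # over the limit; respond with e.g. 429
      ```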