Journeyman Geek

I just remembered this was a thing: during the period SE was getting DDoSed, I pointed the website uptime monitoring service I already run for other things (Uptime Kuma) at a few key services - main SE and MSE chat, MSE, SO, and SU. It checks that each site is up and posts a message in a chatroom if there's an error.

This, in theory, gives me an independent way to check whether the network is down, and it's a tool I'm running for other resources anyway.

As I understand it, the service makes a request, checks the status code, and reports if the site is unhealthy. I'm reasonably sure I'm within the request limit, but I'm considering setting a longer interval between requests (I'm currently at about a minute between checks for each site). I do check multiple sites, though, so network-wide it'd be closer to 5 requests a minute, depending on how many requests a single check involves.
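
For illustration, here's a rough sketch in Python of what I understand a single check to be - one GET request, a status-code check, and a chat notification on failure. The URLs and webhook below are placeholders, not my actual monitor configuration; Uptime Kuma handles all of this internally.

```python
import requests  # HTTP client; assumes the `requests` package is installed

# Placeholder monitor targets and notification webhook (hypothetical, not my real config)
SITES = [
    "https://stackexchange.com",
    "https://meta.stackexchange.com",
    "https://stackoverflow.com",
    "https://superuser.com",
]
WEBHOOK_URL = "https://chat.example.com/hooks/uptime"  # hypothetical chat webhook

def check_site(url: str, timeout: float = 10.0) -> bool:
    """Return True if the site answers with a non-error status code."""
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
        return resp.status_code < 400
    except requests.RequestException:
        return False

def notify(message: str) -> None:
    """Post an alert message to the chatroom webhook."""
    requests.post(WEBHOOK_URL, json={"text": message}, timeout=10.0)

if __name__ == "__main__":
    # One pass over the monitored sites is ~4 requests. Run once a minute
    # (e.g. from cron) and that's roughly 4-5 requests per minute network-wide,
    # which is where my "closer to 5" estimate above comes from.
    for site in SITES:
        if not check_site(site):
            notify(f"{site} appears to be down")
```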

We are evaluating what an appropriate rate-limit would be to match our desired level of restriction. Our initial guess is that the new rate-limit will be set to around 60 requests per minute.

Would this be network-wide or per site?

There are mentions (on a deleted post?) that this may get extended to other cloud providers. I currently run most of my services on a dedicated server, though I might switch some of them to my home server. I'd like to think what I'm doing has limited impact, but if it's something that could affect my overall access to SE, I'd consider that undesirable.

Assuming I suspect I'm affected by these restrictions - is there a path to check whether I am, and to mitigate the impact on the network? I'm currently running a dedicated server on Scaleway, but I might move in future.

As for tools like this -

Technically, this feels like ~90% of the behaviour y'all are trying to prevent (as written, though since I'm not scraping, less so in spirit): automated, non-human interactions with the network. Practically, it's a very useful tool for an avid SE user and hobbyist server admin. Should I be checking (and with whom) before setting up monitoring tools on the network?
