> Guessing the URL, however, is blind. It requires first finding the right domain (and subdomain). However, most respectable spiders don't "guess" at sites; they just follow links.
Considering major search engines not to be respectable is a defensible position, but it doesn't change the fact that they do more than follow links. In particular, search engines can and do enumerate DNS entries, so the mere existence of a subdomain is a risk.
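
To make the enumeration point concrete, here is a minimal Python sketch of the wordlist-style DNS probing that crawlers and scanners routinely perform; the domain and candidate names are hypothetical placeholders:

```python
# Minimal sketch of wordlist-based subdomain enumeration.
# The domain and the candidate list are hypothetical.
import socket

domain = "example.com"
candidates = ["www", "mail", "dev", "staging", "hidden", "private"]

for sub in candidates:
    host = f"{sub}.{domain}"
    try:
        addr = socket.gethostbyname(host)  # plain DNS A-record lookup
        print(f"{host} exists -> {addr}")
    except socket.gaierror:
        pass  # no DNS entry for this candidate
```

Nothing about this requires a link anywhere: the DNS entry alone betrays the subdomain's existence.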
A lot of stuff ends up on Google even though its owners swear they never linked to it from anywhere, and Google returns no page that links to it.
That's in addition to the problem that people generally don't treat URLs as confidential, and that URLs appear in all kinds of places such as server, browser and proxy logs. URLs are also visible to, and used by, many more browser extensions than passwords. If the “hidden” site has outgoing links, the URL is likely to appear in Referer: headers.
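
To illustrate the Referer leak, here is a sketch (both URLs are made up) of the request a browser effectively sends when a visitor follows an outgoing link from the hidden page; the Referer header hands the secret URL to the third party:

```python
# Sketch of the request a browser sends when following an outgoing
# link from a "hidden" page. Both URLs are hypothetical.
import requests

secret_page = "https://hidden.example.com/s3cr3t-2f7a/index.html"
outgoing_link = "https://third-party.example.org/article"

# Browsers fill in Referer with the page the link was clicked on,
# so the secret URL lands in the third party's access logs.
req = requests.Request(
    "GET", outgoing_link, headers={"Referer": secret_page}
).prepare()
print(req.method, req.url)
print("Referer:", req.headers["Referer"])
```

A site can limit this with a `Referrer-Policy: no-referrer` response header, but that assumes the operator knows about the leak in the first place; current browser defaults still send the origin cross-site, which is enough to reveal a secret subdomain.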
There's also the risk that, through a misconfiguration, a link to the hidden site appears somewhere public, for example if the hidden site is hosted on a server that also offers a local search facility.
> The login page is linked from a website: it's a visible wall for an attacker to beat on. It's evidence that something worth attacking exists.
That argument doesn't hold up. Use decent software and a randomly-generated password, and the login page presents no attack surface worth pursuing. A hidden directory, in contrast, doesn't even look like something worth attacking; it looks like something that's open to the public.
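
For concreteness, "randomly-generated" means something like the following sketch, which draws from the OS CSPRNG via Python's standard secrets module; the length and alphabet are just one reasonable choice:

```python
# Generate a password with roughly 131 bits of entropy:
# 22 characters from a 62-symbol alphabet (22 * log2(62) ≈ 131).
import secrets
import string

alphabet = string.ascii_letters + string.digits
password = "".join(secrets.choice(alphabet) for _ in range(22))
print(password)
```

A password like that isn't going to fall to online guessing, so the "visible wall" gives an attacker nothing to work with.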
A secret URL is particularly risk-prone because if the URL leaks accidentally and a search engine discovers it, the entire content of the site becomes exposed through that search engine. A password doesn't fail as catastrophically: if the password is leaked, it still takes a deliberate action for someone to start downloading the data; it doesn't automatically set in motion machinery that publishes the data for everyone to see.