In addition to the mass of possible encodings @japhroaig mentions, there’s also the possibility (at the expense of some additional load on the server hosting the phishing page) that the URL encodes absolutely nothing of interest except a unique ID that the server uses to look up and display the correct personalizations.
That does require storing and looking up the values on the server side; but it takes surprisingly few bits before even randomly generated IDs become vanishingly unlikely to collide with one another, and it simply isn’t possible to ‘decode’ the URL without the stored IDs and values, because the data aren’t in the URL at all. Worse, embedding little chunks of user ID, session ID, etc. in URLs is something that wholly legitimate operators also do to direct you to the correct part of their site, which makes blocking URLs with obscure-looking blobs in them a difficult blanket policy.
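As a rough illustration of how little the URL actually has to carry, here is a minimal sketch of the opaque-token scheme described above, written as Node.js TypeScript; the names (personalizations, makeToken, issueUrl) and the phish.example domain are made up purely for this example:

```typescript
import { randomBytes } from "crypto";

// Server-side store: token -> personalization data. Nothing about the target
// is recoverable from the URL itself; the blob is just a lookup key.
const personalizations = new Map<string, { name: string; bank: string }>();

function makeToken(): string {
  // 128 random bits: with n tokens issued, the chance of any two colliding is
  // roughly n^2 / 2^129 (birthday bound), negligible for any realistic n.
  return randomBytes(16).toString("hex");
}

function issueUrl(target: { name: string; bank: string }): string {
  const token = makeToken();
  personalizations.set(token, target);
  return `https://phish.example/login?id=${token}`;
}

function renderPage(token: string): string {
  const entry = personalizations.get(token);
  return entry ? `Dear ${entry.name}, your ${entry.bank} account ...` : "Generic page";
}
```

The same pattern, minus the malicious intent, is exactly what legitimate mailers do with tracking and unsubscribe links, which is why an opaque blob in a URL doesn’t by itself tell you much.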
(Though navigating to the site manually, rather than following the link, may actually be good advice in certain cases… It is analogous to the old advice for screening phone scammers: if somebody claiming to be your bank calls you, you can’t reasonably verify that they are who they claim to be; but if your bank really is calling you about something, that something will still be there when you call them back at whatever published or previously established number you have for them. In the same way, if you allegedly have an email from example.com, any links it contains could point anywhere, and probably do; but if you manually go to example.com and log in, you skip any potentially malicious link while still seeing whatever the alert is, if it’s genuine.)
A potentially stronger defense (though one that requires trusting a local password store, or at least a hash store) might be intercepting form inputs that match a known username/password pair but are headed to an unusual domain.
If I am user@example.com, with password ‘password’, I should have very, very limited reasons to be sending that user/‘password’ combination to a different domain (even if I am being wicked and reusing passwords, I’ll still have a small number of domains I actually log into, for which I would be prompted once, and a whole internet of potential phishing domains and IPs that I should have no reason to ever log into).
If the username/password/domain associations, or just their hashes, were known to the browser, it could look for form inputs that matched a given username/password pair but did not match that pair’s associated domain. So attempting to send user/‘password’ to example.com would proceed normally, while attempting to send user/‘password’ to malice.com would prompt me to confirm whether I was intentionally reusing credentials or had been tricked and would rather not submit that form.
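Here is a minimal sketch of that check, assuming something like a browser extension that keeps salted hashes of known (username, password, domain) triples; all the names here (hashCredential, knownCredentialHashes, checkSubmission) are hypothetical, and a real implementation would want a slow KDF rather than bare SHA-256:

```typescript
// Hash a credential pair with a store-wide salt, using the Web Crypto API.
async function hashCredential(user: string, pass: string, salt: string): Promise<string> {
  const data = new TextEncoder().encode(`${salt}:${user}:${pass}`);
  const digest = await crypto.subtle.digest("SHA-256", data);
  return Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, "0"))
    .join("");
}

// Populated as the user logs in: credential hash -> domains it has legitimately gone to.
const knownCredentialHashes = new Map<string, Set<string>>();

async function checkSubmission(
  user: string,
  pass: string,
  domain: string,
  salt: string
): Promise<"ok" | "warn"> {
  const h = await hashCredential(user, pass, salt);
  const domains = knownCredentialHashes.get(h);
  if (!domains) {
    // First time this pair is seen: remember the domain and let it through.
    knownCredentialHashes.set(h, new Set([domain]));
    return "ok";
  }
  if (domains.has(domain)) {
    return "ok"; // user/'password' going to example.com: normal submission
  }
  // Known credentials headed to an unfamiliar domain (e.g. malice.com):
  // prompt the user before the form is actually submitted.
  return "warn";
}
```

On a “warn”, the browser would put up the “reusing on purpose, or phished?” prompt and only submit if the user confirms; confirming the reuse would add the new domain to the set so the same question isn’t asked again.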
I don’t like the dependence on local memory (particularly troublesome if you are logging in from a different system); but it does massively cut down on the number of dangerous pieces of data you have to look for, and it ensures that those pieces will show up in the clear at least once: since your browser handles presenting the form, accepting your input, and displaying the characters or little password pips, it already sees your input at least once.