As a former field tech, I just thought I’d throw in the usual theory and practice caveat…
In theory there usually isn’t much difference between theory and practice, but in practice there usually is.
Which is to say:
One) the magnetizing of the surrounding area is a real problem that happens to real people. Not everyone, not all the time, but it happens often enough that there’s an FAQ on it. Any field tech who installs contact sensors knows about this one.
Two) it’s not at all clear that Z-Wave is better at getting through obstructions than Zigbee. You could ask four electrical engineers and get four different answers on this one.
It’s absolutely true that, all else being equal, lower frequencies lose less energy getting through obstructions. That’s the theory part. But the practice part has to take into account the device itself and how it is designed to deal with issues like dispersion.
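Just to put a rough number on the frequency part, here’s a back-of-the-envelope sketch using the standard free-space path loss formula. The 908 MHz and 2.4 GHz center frequencies are the usual US Z-Wave and Zigbee values; real walls add far more loss than free space, so treat this as an illustration only:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Typical center frequencies: Z-Wave ~908 MHz (US), Zigbee 2.4 GHz
for label, f in [("Z-Wave ~908 MHz", 908e6), ("Zigbee 2.4 GHz", 2.4e9)]:
    print(f"{label}: {fspl_db(10, f):.1f} dB loss over 10 m (free space)")
```

The gap works out to roughly 8 dB in favor of the lower frequency, and it’s the same at any distance; obstructions only widen it. That’s the whole “theory” advantage in one number.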
Zigbee uses DSSS (direct-sequence spread spectrum). Z-Wave uses FSK (frequency-shift keying). That alone makes Zigbee better at getting through rain even though its raw frequency is more subject to dispersion than Z-Wave’s. That’s because Zigbee was designed knowing that dispersion might be an issue, so the protocol builds in countermeasures: each bit is spread across a wider band, and failed transmissions get retried.
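To see why the spreading part helps, here’s a toy sketch of DSSS despreading. This is not Zigbee’s actual PHY (802.15.4 at 2.4 GHz maps 4-bit symbols onto 32-chip sequences with O-QPSK); it just spreads single bits with a made-up 8-chip code to show the idea:

```python
import random

CHIPS = [1, -1, 1, 1, -1, 1, -1, -1]  # made-up 8-chip spreading code (not Zigbee's real one)

def spread(bit: int) -> list[int]:
    """Map a data bit onto the chip sequence: bit 1 -> code, bit 0 -> inverted code."""
    sign = 1 if bit else -1
    return [sign * c for c in CHIPS]

def despread(chips: list[int]) -> int:
    """Correlate received chips against the code; the sign of the sum recovers the bit."""
    corr = sum(r * c for r, c in zip(chips, CHIPS))
    return 1 if corr > 0 else 0

random.seed(0)
bit = 1
tx = spread(bit)
# Corrupt 3 of the 8 chips, the way narrowband interference or fading might:
rx = tx[:]
for i in random.sample(range(len(rx)), 3):
    rx[i] = -rx[i]
print(despread(rx) == bit)  # True: the correlation still points the right way
```

Even with 3 of the 8 chips destroyed, the correlation still comes out the right way and the bit survives, whereas a single narrowband symbol hit the same way would simply be gone. That’s the kind of resilience that’s baked in before retries even enter the picture.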
There are a lot of other similarly technical distinctions between the two protocols. But the end result is that they’re pretty similar in practice, and the differences in how specific devices are implemented will matter far more than any general statement about the protocols, let alone about raw frequencies.
If you’d like to see a bunch of electrical engineers arguing over exactly this point, there’s a thread at Stack Exchange on it.
But the short answer is that one may be theoretically superior to the other; in practice, though, a lot of engineering goes into both the protocols and the individual devices, so you really have to evaluate each device on its own.