[updated 20180930, read the comments below for more background]
People usually confuse the operating frequency of a computer with the maximum switching frequency of its individual parts.
Let's say a CPU runs at 1 GHz, that must mean each transistor switches 1 billion times per second, right? Hahaha, I'm kidding.
Actually the Ft (transition frequency) of the transistors is way higher than that. The whole circuit is slowed down by other factors: the wires, whose resistances and capacitances form distributed RC networks along their length, among countless other parasitics. Of course, the length of the CDP (critical datapath) matters too.
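To see why wiring alone can eat so much speed, here is a minimal sketch (my own illustration, not from any measurement above) of the classic first-order Elmore delay of a uniform RC ladder; the segment values are made-up examples. The key point is that the delay grows with the square of the wire length:

```python
def elmore_delay(r_seg, c_seg, n_seg):
    """First-order delay estimate of a uniform RC ladder.

    Each of the n_seg segments has series resistance r_seg (ohms)
    and shunt capacitance c_seg (farads). The Elmore delay sums,
    for each capacitor, the resistance between it and the driver,
    which works out to r * c * n * (n + 1) / 2.
    """
    return r_seg * c_seg * n_seg * (n_seg + 1) / 2

# Hypothetical example: 10 segments of 100 ohm / 1 pF each -> 5.5 ns,
# already slower than a fast discrete transistor can switch.
print(elmore_delay(100, 1e-12, 10))  # 5.5e-09
```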
But on average, I have observed roughly a 1:50 ratio between the operating frequency of a processor and the "speed" of its constituting transistors, for reasonable architectures. The ratio might be lower for recent ultrapipelined processors, but when you make your own discrete processor, divide the Ft by 50 to get your final processor's speed. A ratio of 100 is much more realistic for a hobby project, though less optimistic...
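In code form, the rule of thumb is just a division. A quick sketch, with a made-up but plausible Ft (small-signal transistors like the 2N3904 are specified around a few hundred MHz):

```python
def estimate_clock(ft_hz, ratio=50):
    """Rule-of-thumb clock estimate for a discrete-transistor CPU.

    ratio=50 is the optimistic figure observed above;
    ratio=100 is more realistic for a hobby build.
    """
    return ft_hz / ratio

ft = 300e6  # hypothetical transistor, Ft around 300 MHz
print(estimate_clock(ft))        # 6e6  -> ~6 MHz, optimistic
print(estimate_clock(ft, 100))   # 3e6  -> ~3 MHz, more realistic
```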
The ratio of 50 is a realistic ceiling that reflects the influence of parameters beyond the transistor's ideal characteristics. One such influence is the logic family (TTL, DTL, CTL, DCTL, ECL...), so you have to measure the speed of your individual inverter gate (for example with a ring oscillator) for a better estimate.
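Turning a ring-oscillator measurement into a clock estimate is straightforward: a ring of N inverters oscillates with a period of 2*N gate delays, and the clock is then capped by how many gate delays sit on the critical datapath. A small sketch with hypothetical numbers:

```python
def gate_delay_from_ring(f_osc_hz, n_stages):
    """Propagation delay of one inverter, from a ring oscillator.

    A ring of n_stages inverters (odd count) oscillates with a
    period of 2 * n_stages * t_pd, so t_pd = 1 / (2 * n * f_osc).
    """
    return 1.0 / (2 * n_stages * f_osc_hz)

def max_clock(f_osc_hz, n_stages, cdp_gates):
    """Clock ceiling if the critical datapath is cdp_gates deep."""
    return 1.0 / (cdp_gates * gate_delay_from_ring(f_osc_hz, n_stages))

# Made-up example: a 5-stage ring oscillating at 20 MHz gives
# t_pd = 5 ns; a 20-gate-deep datapath then caps the clock near
# 10 MHz, before wiring and register setup margins eat into it.
print(max_clock(20e6, 5, 20))  # 10000000.0
```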
I'd be happy to get more datapoints from various architectures and implementations. A chart would help us identify the factors that inflate or shrink this ratio and give us better predictions.
Note: this rule applies to transistors and other semiconductors, not to relays, where the delay is limited essentially by the mechanical contact switching time and RC delays are irrelevant.