If you’re typing on a full size keyboard there’s a good chance that to your right, there’s a number pad.
The number 5 is in the middle – that’s to be expected, but is the 1 in the top left or the bottom left?
Being derived from an adding machine keypad, the number pad on a keyboard has the 1 in the bottom left. However, in the 1950s, when telephone keypads were being introduced, only folks who worked in accounting had adding machines.
So when it came time to work out the best layout, the result we have today was determined through a stack of research and testing by the Human Factors Engineering Department of Bell Labs, who studied the most efficient layouts of keys and ran focus groups to find the layout that provided the best speed and accuracy.
That landed with the 1 in the top left, and that’s what we still have today.
Oddly, ATMs and card terminals opted to use the telephone layout rather than the adding machine layout, while keyboard number pads kept the adding machine layout.
Relocating vast numbers of subscriber lines is something to be avoided.
In 1929 Indiana Bell realized they needed a larger telephone exchange (“CO” to use the US term) to meet growing demand, and while there was vacant land around the existing exchange, it wasn’t large enough to build on with the building smack-dab in the middle of it.
So rather than relocate the subscriber lines to a newly built exchange, they just moved the exchange to the rear of the block, to free up space to build a larger one.
Over a four-week period, engineers shifted the working, eight-story steel and brick telephone exchange, still fully staffed, around to the other side of the block, without any interruption to the subscribers served from the exchange.
So this is the story of how in the 1960s AT&T’s Bell Labs bet on millimeter waves being the communications medium of the future, 60 years before 5G’s millimeter wave hype.
AT&T’s Bell Labs were working with millimeter waves, aka “mmWave” in 5G speak, way back in the 1960s, but using waveguides instead of air as the transmission medium.
AT&T saw the vast amounts of bandwidth available in these bands, and were keen to utilize it. So does history repeat? Are there lessons in here about cursed mmWave bands?
At the time, AT&T’s Long Lines network operated a vast point-to-point microwave network spanning the United States. It operated from 3.7 GHz to 4.2 GHz, and capacity planners and engineers knew that, even with the best multiplexing, there was a limit to how many channels you could cram into 500 MHz of spectrum, so Bell Labs started looking for solutions.
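As a rough illustration of that squeeze, here’s a back-of-the-envelope sketch; the 20 MHz channel width and 1,200 circuits per channel are illustrative assumptions, not AT&T’s actual channel plan:

```python
# Rough sketch of why 500 MHz felt cramped. The channel width and
# circuits-per-channel figures are illustrative assumptions only.
total_spectrum_mhz = 4200 - 3700           # the 3.7 GHz to 4.2 GHz Long Lines band
assumed_rf_channel_mhz = 20                # assumed width of one FM radio channel
assumed_circuits_per_rf_channel = 1200     # assumed voice circuits per RF channel

rf_channels = total_spectrum_mhz // assumed_rf_channel_mhz
total_circuits = rf_channels * assumed_circuits_per_rf_channel
print(f"{rf_channels} RF channels -> roughly {total_circuits:,} voice circuits per route")
# No matter how clever the multiplexing, the 500 MHz ceiling caps growth.
```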
Almost from the first, however, the possibility of obtaining low attenuations from the use of circular-electric waves, carrying with it, at the same time, the possibility of extremely high frequencies and accordingly vastly wider bands of frequencies appeared as a fabulous El Dorado always beckoning us onward.
Initially Bell Labs researchers looked at higher frequencies for these wireless links, but after experimenting with centimeter wavelengths through the air and running into attenuation from rain and water vapour, Bell Labs decided to use waveguides as the transmission medium for these millimeter wave transmissions, instead of transmitting through the air.
An exploratory development effort was begun in 1959 on a system utilizing 2-inch waveguide and travelling-wave-tube repeater, but was abandoned in 1962 because of TWT cost and reliability problems and because the capacity exceeded then-current Bell System needs.
It was not abandoned for long: thanks to the development of IMPATT diodes and solid-state devices, research was soon picked up again. At the time the Bell System didn’t need the additional capacity, nor did Bell Labs know when it would be commercially viable to start using millimeter waveguide in the field, but like the 5G operators today, Bell Labs staff had seen the massive amounts of bandwidth available at these higher frequencies and were looking to exploit it.
The idea at Bell Labs was to send information through such waves not by wires or broadcast towers but by means of the circular waveguide, which had been developed down in Holmdel. “A specially designed hollow pipe,” as Fisk defined it, the waveguide was just a few inches in diameter, and lined inside with a special material that would allow it to carry very high-frequency millimeter radio wave signals.
Around the same time the first masers were coming onto the market, and light (free space optics) was being considered instead of electrical energy as a transmission medium. Tests shooting lasers through the air highlighted the high optical losses in air, showing this wasn’t practical as a transmission method. While optic fibres existed at the time, their losses were so high as to make transmitting over anything more than a few meters impractical.
All this research into millimeter wave transmission through waveguides culminated in the creation of the WT4 system in the late 1970s.
A 60mm waveguide was used
Advertisement from the April 12, 1971 issue of Time magazine
Using two levels of phase-shift keying (PSK) they were able to provide capacity for 238k concurrent calls, which they calculated could be doubled by moving to four levels of PSK.
On a 14 km test system (Bell Labs used SI units), they calculated they could carry almost half a million concurrent voice calls, delivered as 274 Mbps (DS-4) digital streams, which for the 1970s was no mean feat.
Artist’s impression of a repeater station
Channelisation was achieved through the use of giant filters of all different types and flavours, breaking the test system up into 124 channels (59 in each direction, plus protection) on the frequencies showing the lowest losses in experimentation.
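Those figures hang together if you assume the standard DS-4 rate of 274.176 Mbps, which corresponds to 4,032 voice circuits of 64 kbps each; a quick sanity check:

```python
# Sanity check on the capacity figures quoted above, assuming the standard
# DS-4 rate of 274.176 Mbps carrying 4,032 voice circuits (64 kbps DS0s).
ds0_per_ds4 = 4032                     # voice circuits in one DS-4
working_channels_per_direction = 59    # from the 124-channel plan (59 each way, plus protection)

calls_2psk = working_channels_per_direction * ds0_per_ds4
calls_4psk = calls_2psk * 2            # four-level PSK carries 2 bits/symbol vs 1 for two-level

print(f"Two-level PSK:  {calls_2psk:,} concurrent calls")   # ~238k, the quoted figure
print(f"Four-level PSK: {calls_4psk:,} concurrent calls")   # ~476k, 'almost half a million'
```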
AT&T had historically installed cables, but unlike cables, waveguides can’t bend, so installing them is more akin to laying water or gas pipes.
This meant field installation of the waveguides leveraged processes adapted from the pipeline industry.
“Push sites” were selected where a steel sheath (essentially lengths of hollow steel pipe) could be pushed in under the surface of the earth, with extra pipe welded onto the end as it was pushed along.
This created a clear, straight conduit for the waveguide to be installed into. Due to the fragility of the waveguides themselves, they were laid within the pipe on roller bearings to support the waveguide and help it slide inside the steel sheath.
In tests AT&T were pushing almost 2.5 km of waveguide in from one site, with extra lengths of waveguide (9 m sections) being joined by the special “waveguide splicing vehicle” and pushed into the sheath.
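With 9 m factory lengths and pushes of almost 2.5 km, each push implied a lot of splicing; a rough count:

```python
# Back-of-the-envelope: 9 m waveguide sections in one "almost 2.5 km" push.
push_length_m = 2500        # length of a single push in the AT&T tests
section_length_m = 9        # factory length of each waveguide section

sections = push_length_m // section_length_m
splices = sections - 1      # one splice joins each pair of adjacent sections
print(f"~{sections} sections and ~{splices} splices per push")
```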
Repeater stations were equally tricky. Luckily the WT4 system only required repeater stations at intervals of up to 60 km, although when going over hilly terrain the bends in the waveguide increased losses, requiring repeaters at shorter intervals (~50 km).

The inability to bend the waveguides required a tunnel under each repeater station, through which the waveguides would run, with the repeaters tapping off the waveguides below via a network of filters. Like the microwave network, some of the repeater stations were equipped to add/drop channels, allowing local traffic to be added or dropped mid-span.

The system used the new (at the time) solid-state components, but to increase reliability the electronics were encased in airtight enclosures filled with dry nitrogen.
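To get a feel for what those spacings mean over a long-haul route, here’s a toy calculation; the 4,000 km route length is an illustrative figure, not a planned WT4 route:

```python
import math

# Toy calculation: intermediate repeater stations needed on a long-haul WT4 route.
# The 4,000 km route length is illustrative; the spacings come from the text above.
route_km = 4000
flat_spacing_km = 60      # maximum spacing on favourable terrain
hilly_spacing_km = 50     # shorter spacing where waveguide bends add loss

repeaters_flat = math.ceil(route_km / flat_spacing_km) - 1
repeaters_hilly = math.ceil(route_km / hilly_spacing_km) - 1
print(f"Flat terrain:  ~{repeaters_flat} intermediate repeater stations")
print(f"Hilly terrain: ~{repeaters_hilly} intermediate repeater stations")
```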
As the WT4 system and its finicky waveguides were being perfected in the 1970s, Corning, a company then known for glass manufacturing, was able to demonstrate that by removing impurities in the glass, optical fibres could be produced with losses of 17 dB per kilometer. Shortly after they got it down to 4 dB per kilometer, and these values kept falling. While early fibre optics were not without their challenges, fibre could be installed in existing conduits, without specialised pipe-pushing and welding equipment, and at a much lower cost per meter.
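To see why those dB-per-kilometer figures mattered so much, it helps to turn them back into power ratios; a quick sketch, with the 1 km span chosen purely as an example:

```python
# Turn a dB/km loss figure into the fraction of optical power left after a span.
def power_remaining(loss_db_per_km: float, span_km: float) -> float:
    total_loss_db = loss_db_per_km * span_km
    return 10 ** (-total_loss_db / 10)

span_km = 1.0                 # arbitrary example span
for loss in (17.0, 4.0):      # Corning's early figures quoted above
    frac = power_remaining(loss, span_km)
    print(f"{loss:>4} dB/km over {span_km} km -> {frac:.1%} of the light remains")
# 17 dB/km leaves ~2% after a single kilometer; 4 dB/km leaves ~40%,
# which is what started making practical repeater spacings possible.
```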
While WT4 provided bandwidth in numbers unseen before, its high cost to deploy and its many limitations saw it fade away into the annals of history.
Even in the 1960s Bell Labs staff knew mmWave wasn’t yet financially viable, but they built it for a future that didn’t arrive the way they expected.
So what can this 60 year old tale of engineering teach us?
Bell Labs were pinning their hopes on mmWave to provide limitless bandwidth – and it could, but it faced the ultimate issue of not being financially viable. Here we are 60 years later, and again many telcos are pinning a lot of hope on the higher bands.
As was the case in the 1960s, there is no doubt the bandwidth available for 5G in mmWave is huge (thanks, Shannon–Hartley theorem), but it comes with equally vexing challenges around propagation and the cost of the rollout.
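The Shannon–Hartley argument is easy to make concrete; in the sketch below, the 20 dB SNR and the 3 GHz mmWave allocation are illustrative assumptions rather than figures for any particular operator:

```python
import math

# Shannon-Hartley: C = B * log2(1 + SNR)
def capacity_gbps(bandwidth_hz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e9

snr_db = 20.0   # assumed SNR, held the same for both bands
examples = [
    ("500 MHz (AT&T's 3.7-4.2 GHz Long Lines band)", 500e6),
    ("3 GHz (illustrative total mmWave allocation)", 3e9),
]
for label, bw in examples:
    print(f"{label}: ~{capacity_gbps(bw, snr_db):.1f} Gbps")
# Capacity scales linearly with bandwidth; the catch, then as now,
# is the propagation and the cost of getting it deployed.
```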
Only time will tell if 5G’s mmWave endeavours end up seeing wide scale adoption.
Want more telecom goodness?
I have a good old fashioned RSS feed you can subscribe to.