A totally complete history and not just something I learned from a fellow phone nerd who’s been around a lot longer than me...
In the early days of telephony, voice calls were made by signaling to an operator, who would connect your call.
Around the turn of the century, the first “automatic” exchanges began to open. This meant a subscriber could complete their own call by directly dialing the digits of the party they wanted to speak to, and getting through without a human operator “plugging up” the call.
The first type of switch used to provide “automatic” exchange capability was the Strowger switch, which translated the pulses from a rotary dial phone into physical movements on a switch to find and select the line you wanted.
People born before touch tone phones can tell you how to dial a phone number without using the dial at all: by mashing the hook switch really quickly. To dial a 3, you mash the hook switch 3 times, then wait a second; to dial a 5, you mash the hook switch 5 times, and so on.
A quirk of this is that higher numbers are harder to dial: you need just one pulse of the hook switch in a second to dial a 1, but you need 10 pulses to dial a 0. This means a phone dial that’s running too slow can dial the lower digits, but not the higher ones, as it can’t get the required number of pulses out in the time allotted (a smidge over 1 second per digit).
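If you want to see the timing at work, here’s a minimal sketch in Python – the exact figures varied between administrations, so treat the ~10 pulses per second, the 60/40 break/make ratio and the inter-digit pause as nominal values rather than gospel:

```python
import time

# Nominal decadic (pulse) dialing timing - values are illustrative:
# roughly 10 pulses per second with a ~60/40 break/make ratio.
BREAK_TIME = 0.06    # seconds with the loop open ("break")
MAKE_TIME = 0.04     # seconds with the loop closed ("make")
INTER_DIGIT = 0.8    # pause so the exchange knows the digit has ended

def pulses_for(digit: int) -> int:
    """0 is sent as ten pulses; every other digit is sent as itself."""
    return 10 if digit == 0 else digit

def dial(number: str) -> None:
    for char in number:
        for _ in range(pulses_for(int(char))):
            print("break")        # hook switch opened
            time.sleep(BREAK_TIME)
            print("make")         # hook switch closed
            time.sleep(MAKE_TIME)
        time.sleep(INTER_DIGIT)   # wait before starting the next digit

dial("35")  # 3 pulses, a pause, then 5 pulses - just like mashing the hook switch
```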
Initially exchanges only connected local calls, but with the introduction of Subscriber Trunk Dialing (STD), subscribers could call from one exchange to another without an operator.
This led to national dialing plans being developed, ensuring numbers were unique across the whole phone network. Where possible, lower numbers were used: Australia, for example, has area codes 02, 03, 07 and 08 (with the majority of the population living in the 02 and 03 area codes).
Now imagine you’re the government-owned phone company, tasked with creating a single number for emergency services. 123, 111, etc., are all taken, as these are the most reliable numbers to dial and were snapped up long ago.
Instead you go to the other end of the dial: the UK with 999 and Australia with 000 (911 is a different kettle of fish).
Except in New Zealand.
111 was specifically chosen to be similar to Britain’s 999 service, but NZ has some odd peculiarities.
The NZ dials are identical to the standard dial except for the finger plate label.
With pulse dialing, New Zealand telephones pulsed “in reverse” compared to the rest of the world. Dialing a 1 in the rest of the world sent one pulse down the line, but dialing a 1 on a phone in NZ sent nine pulses, and likewise for the other digits. The phones weren’t different – just the labels.
Hence emergency services in NZ ended up on 111 (which actually pulsed out as 999), as the exchanges of the day originated from the UK, where 999 was (and still is) the emergency number.
In the early years of 111, the telephone equipment was based on British Post Office equipment, except for this reversed dial labeling. Dialing 111 on a New Zealand telephone therefore sent three sets of nine pulses to the exchange, exactly the same as the UK’s 999.
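As a quick sketch, here’s the digit-to-pulse mapping that implies, shown for digits 1–9 (accounts differ on exactly how 0 behaved on NZ dials, so I’ve left it out):

```python
# Digit-to-pulse mapping for digits 1-9 (0 omitted - accounts differ).
def pulses_standard(digit: int) -> int:
    return digit        # dialing 1 sends 1 pulse, 9 sends 9 pulses

def pulses_nz(digit: int) -> int:
    return 10 - digit   # reversed labels: 1 sends 9 pulses, 9 sends 1

print([pulses_nz(int(d)) for d in "111"])  # [9, 9, 9] - on the wire, identical to UK 999
```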
If you’re typing on a full size keyboard there’s a good chance that to your right, there’s a number pad.
The number 5 is in the middle – that’s to be expected – but is 1 in the top left or bottom left?
Being derived from an adding machine keypad, the number pad on a keyboard has the 1 in the bottom left. However, in the 1950s when telephone keypads were being introduced, only folks who worked in accounting had adding machines.
So when it came time to work out the best layout, the result we have today was determined through a stack of research and testing by the Human Factors Engineering Department of Bell Labs, who studied the most efficient layouts of keys and tested focus groups to find the layout that provided the best speed and accuracy.
That landed with the 1 in the top left, and that’s what we still have today.
Oddly, ATMs and card terminals opted to use the telephone layout rather than the adding machine layout, while computer number pads stuck with the adding machine layout.
On the rare occasions I’m not tied to my desk, I’m out for a long run along some back roads somewhere.
Every now and then I come across these tiny telecom pillars for cross-connection (and don’t shoot at them) – I mostly find them around the edges of distribution areas. I had some recollection that these were originally for trunk lines between exchanges (maybe there was some truth to this?), but some digging in old docs shows these were just for interconnecting main or branch cables with distribution cables, in areas where the 600 and 1200 pair pillars / cabinets would be overkill.
They’re built like the 900/1800 pair cabinets, just scaled down, supporting 1x 100 pair main cable, 1x 100 pair distribution cable and 2x 50 pair distribution cables.
It seems like these were largely decommissioned when NBN took over, leaving most with a big X sprayed on them.
While I was looking through the docs I also found reference to a 180 pair pillar, which looked very similar, but I’ve yet to see any of them left in the wild. Better keep running till I find one!
It’s 1986 and you’ve got 31 tons of copper, in the form of a giant 46-meter-tall statue, that’s looking a bit worse for wear.
The Statue of Liberty has had water pooling in some areas, causing parts of her copper skin to corrode, and in some cases wear all the way through.
On the other side of the Iron Curtain (it’s still up, after all) there are probably quite a number of folks experienced in looking after giant statues, but alas, you’re the US National Park Service, and seeking help from the Soviets is probably a bad look.
The statue is made of copper, and who knows more about copper than the phone company, with its vast, vast network of copper lines spanning the country?
So the National Park Service called upon Bell Labs to help.
The Bell Labs chemists assigned to the project quickly pointed out that just replacing the corroded copper with new copper would hardly blend in – the new sections would be the shiny brown colour you’d see in copper piping, which wouldn’t match the verdigris formed by years of oxidation. (When she was delivered, the statue was that shiny copper colour, not the green patina we see today.)
Bell Labs staff looked at artificially creating the patina with acid solutions, to speed up the process and match the new copper to the old, but it was found this could create structural weak points.
John Franey, a technical assistant working at Bell Labs’ Murray Hill laboratories, must have looked up at the roof of their buildings, constructed in 1941, and thought “Well, that looks pretty close…”, so the naturally patinated roof of Bell Labs’ New Jersey campus was peeled up and sent off for patching the statue.
Modern day roof at Murray Hill now with the verdigris that’s had 40 years to form
Murray Hill got a shiny new copper roof to replace the old green one it had just given up, and the particles of copper corrosion scraped off the dismantled Bell Labs roof were mixed with acetone into a special spray used as concealer on the statue’s skin.
In exchange, Bell Labs staff were given some of the copper plates removed from the statue, so they could study the natural corrosion process in copper, in various weather conditions, which in turn would lead to a better understanding of how to build and maintain their copper plant.
Relocating vast numbers of subscriber lines is something to be avoided.
In 1929 Indiana Bell realized they needed a larger telephone exchange (“CO” to use the US term) to meet growing demand, and while there was vacant land around the current building, it wasn’t large enough to build on with the existing building smack-dab in the middle of it.
So rather than relocate the subscriber lines to a newly built exchange, they just moved the exchange to the rear of the block, to free up space to build a larger one.
Over a 4-week period, engineers shifted the working 8-story steel and brick telephone exchange, still fully staffed, around to the other side of the block, without any interruption to the subscribers it served.
So this is the story of how in the 1960s AT&T’s Bell Labs bet on millimeter waves being the communications medium of the future, 60 years before 5G’s millimeter wave hype.
AT&T’s Bell Labs were working with millimeter waves – “mmWave” in 5G speak – way back in the 1960s, but using waveguides instead of air as the transmission medium.
AT&T saw the vast amounts of bandwidth available in these bands, and were keen to utilize it. So does history repeat? Are there lessons in here about cursed mmWave bands?
At the time, AT&T’s Long Lines network operated a vast point-to-point microwave network spanning the United States. It operated from 3.7 GHz to 4.2 GHz, and capacity planners and engineers knew that even with the best multiplexing, there was a limit to how many channels you could cram into 500 MHz of spectrum, so Bell Labs started looking for solutions.
Almost from the first, however, the possibility of obtaining low attenuations from the use of circular-electric waves, carrying with it, at the same time, the possibility of extremely high frequencies and accordingly vastly wider bands of frequencies appeared as a fabulous El Dorado always beckoning us onward.
Initially Bell Labs researchers looked at higher frequencies for these wireless links, but after experimenting with centimeter wavelengths through the air and hitting attenuation issues from rain and water vapour, Bell Labs decided to use waveguides, rather than air, as the transmission medium for these millimeter wave transmissions.
An exploratory development effort was begun in 1959 on a system utilizing 2-inch waveguide and travelling-wave-tube repeater, but was abandoned in 1962 because of TWT cost and reliability problems and because the capacity exceeded then-current Bell System needs.
Thanks to the development of IMPATT diodes and solid-state devices, it was not abandoned for long, and research was picked up again a few years later. At the time Bell Labs didn’t need the additional capacity, nor did they know when it would be commercially viable to deploy millimeter waveguide in the field, but like the 5G operators of today, Bell Labs staff had seen the massive amounts of bandwidth available at these higher frequencies, and were looking to exploit it.
The idea at Bell Labs was to send information through such waves not by wires or broadcast towers but by means of the circular waveguide, which had been developed down in Holmdel. “A specially designed hollow pipe,” as Fisk defined it, the waveguide was just a few inches in diameter, and lined inside with a special material that would allow it to carry very high-frequency millimeter radio wave signals.
Around the same time the first masers were coming onto the market, and light (free space optics) was being considered instead of electrical energy as a transmission medium. Tests shooting lasers through the air highlighted the high optical losses in air, showing this wasn’t practical as a transmission method. While optic fibres existed at the time, their losses were so high as to make transmitting anything over a few meters impractical.
All this research into millimeter wave transmission through waveguide culminated in the creation of the WT4 system in the late 1970s.
A 60 mm waveguide was used
Advertisement from the April 12, 1971 issue of Time magazine
Using two levels of phase-shift keying (PSK) they were able to provide capacity for 238k concurrent calls, which they calculated could be doubled by moving to four levels of PSK.
On a 14 km test system (Bell Labs used SI units), they calculated they had the ability to carry almost half a million concurrent voice calls, with each channel carrying 274 Mbps of bandwidth (a DS-4) – no mean feat for the 1970s.
Artist’s impression of a repeater station
Channelisation was achieved through the use of giant filters of all different types and flavours, breaking the test system up into 124 channels (59 in each direction, plus protection) on the frequencies that showed the lowest losses in experimentation.
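As a back-of-the-envelope check on those figures – assuming each radio channel carried one DS-4 (274 Mbps, which is 4,032 standard 64 kbps voice circuits) – the numbers line up nicely:

```python
# Sanity check on the WT4 capacity figures, assuming one DS-4 per channel.
VOICE_CIRCUITS_PER_DS4 = 4032          # a DS-4 carries 4,032 x 64 kbps circuits
WORKING_CHANNELS_PER_DIRECTION = 59    # 124 channels total, 59 working each way

calls_2psk = WORKING_CHANNELS_PER_DIRECTION * VOICE_CIRCUITS_PER_DS4
print(calls_2psk)       # 237888 - the "238k concurrent calls" figure

calls_4psk = calls_2psk * 2  # four-level PSK carries twice the bits per symbol
print(calls_4psk)       # 475776 - "almost half a million"
```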
AT&T had historically installed cables, but unlike cables, waveguides can’t bend, making the job more akin to installing water or gas pipes. This meant the installation of waveguides in the field leveraged processes adapted from the pipeline industry.
“Push sites” were selected, where a steel sheath (essentially lengths of hollow steel pipe) could be pushed in under the surface of the earth, with extra pipe welded onto the end as it was pushed along.
This created a clear, straight conduit for the waveguide to be installed in. Due to the fragility of the waveguides themselves, they were laid within the pipe on roller bearings, to support the waveguide and help it slide inside the steel sheath.
In tests AT&T were pushing almost 2.5 km of waveguide in from one site, with extra 9 m lengths of waveguide being joined by the special “waveguide splicing vehicle” and pushed into the sheath.
Repeater stations were equally tricky. Luckily the WT4 system only required repeater stations at intervals of up to 60 km, although over hilly terrain the bends in the waveguide increased losses, requiring repeaters at shorter intervals (~50 km).

The inability to bend the waveguides meant a tunnel was needed under each repeater station, through which the waveguides would run, with the repeaters tapping off the waveguides below via a network of filters. Like the microwave network, some of the repeater stations were equipped to add/drop channels, allowing local traffic to be added or dropped mid-span. The system used the new (at the time) solid-state components, but to increase reliability the electronics were encased in airtight enclosures filled with dry nitrogen.
As the WT4 system and its finicky waveguides was being perfected in the 1970s, Corning, a company then known for glass manufacturing, was able to demonstrate that by removing impurities in the glass, optical fibres could be produced with losses of 17 dB per kilometer. Shortly after they got it down to 4 dB per kilometer, and these values kept falling. While early fibre optics were not without their challenges, fibre could be installed in existing conduits, without specialised pipe-pushing and welding equipment, and at a much lower cost per meter.
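To put those loss figures in perspective: dB losses compound per kilometer, so the gap between 17 dB/km and 4 dB/km is enormous. A quick sketch:

```python
def fraction_remaining(loss_db_per_km: float, km: float) -> float:
    """Fraction of launched optical power left after `km` of fibre."""
    return 10 ** (-loss_db_per_km * km / 10)

print(fraction_remaining(17, 1))   # ~0.02   - about 2% left after just 1 km
print(fraction_remaining(4, 1))    # ~0.40   - about 40% left after 1 km
print(fraction_remaining(4, 10))   # ~0.0001 - why losses had to keep falling
```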
While WT4 provided bandwidth in numbers unseen before, its high cost to deploy and many limitations saw it fade away into the annals of history.
Even in the 1960s Bell Labs staff knew the case for mmWave wasn’t yet financially viable, but built it for a future that didn’t come the way they expected.
So what can this 60 year old tale of engineering teach us?
Bell Labs were pinning their hopes on mmWave to provide limitless bandwidth – and it could, but it faced the ultimate issue of not being financially viable. Here we are 60 years later, and again, many telcos are pinning a lot of hope on the higher bands.
As was the case in the 1960s, there is no doubt the bandwidth available for 5G in mmWave is huge (thanks, Shannon–Hartley theorem), but it comes with equally vexing challenges around propagation and the cost of the rollout.
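For the curious, the theorem says channel capacity scales linearly with bandwidth: C = B log2(1 + S/N). A toy comparison in Python (the 20 dB SNR and the 800 MHz mmWave allocation are purely illustrative, not figures from any real deployment):

```python
from math import log2

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley: C = B * log2(1 + S/N)."""
    return bandwidth_hz * log2(1 + snr_linear)

SNR = 10 ** (20 / 10)  # 20 dB SNR, purely illustrative

print(shannon_capacity_bps(500e6, SNR) / 1e9)  # 500 MHz (Long Lines' band) -> ~3.3 Gbps
print(shannon_capacity_bps(800e6, SNR) / 1e9)  # 800 MHz mmWave chunk -> ~5.3 Gbps
```

Same SNR, more hertz, more bits – which is exactly why those wide mmWave allocations looked (and still look) so tempting.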
Only time will tell if 5G’s mmWave endeavours end up seeing wide scale adoption.
Want more telecom goodness?
I have a good old-fashioned RSS feed you can subscribe to.