Nick vs Networking

Latest Posts

Australia’s East-West Microwave Link of the 1970s

On July 9, 1970, a $10 million program to link Australia from east to west via microwave was officially opened.
Spanning over 2,400 kilometres, it connected Northam (east of Perth) to Port Pirie (north of Adelaide), joining the automated telephone networks of Australia’s eastern and western states so that, for the first time, users could dial each other and share live video across the country.

In 1877, long before road and rail lines, the first telegraph line – a single iron wire – was strung across the Nullarbor to link Australia’s eastern states with Western Australia.

By 1930 an open-wire voice link had been established between the two sides of the continent.
This open-wire circuit was upgraded and rebuilt several times, finally topping out at 140 channels, but by the 1960s Australian Post Office (APO) engineers knew a higher-bandwidth (broadband carrier) system would be required if Subscriber Trunk Dialling (STD) was ever to be implemented, so that someone in Perth could dial someone in Sydney without going via an operator.

A few years earlier, Melbourne and Sydney had been linked via a 600 kilometre long coaxial cable route, so APO engineers spent months in the Nullarbor desert surveying soil conditions and concluded that a coaxial cable (like the recently opened Melbourne to Sydney one) was possible, but would be very difficult to achieve.

Instead, in 1966, Alan Hume, the Postmaster-General, announced that the decision had been made to construct a network of Microwave relay stations to span from South Australia to Western Australia.

In the 1930s microwave communications had spanned the English Channel, and by 1951 AT&T’s Long Lines microwave network had opened, spanning the continental United States. So by the 1960s microwave transmission networks were commonplace throughout Europe and the US, and the technology was thought to be fairly well understood.

But APO engineers soon realised that the unique terrain of the desert and the weather conditions of the Nullarbor had significant impacts on the transmission of radio waves. Again Research Labs staff went back to spend months in the desert measuring signal strength between test sites, to better understand how the harsh desert environment would impact transmission and to overcome these impediments.

The link was one of the longest ever attempted – longer than the distance from London to Moscow.

In the end it was decided that 59 towers, with heights from 22 metres to 76 metres, were to be built, topped off with 3.6 metre microwave dishes for relaying signals between towers.

The towers themselves were to be built in a zig-zag pattern, to prevent overshooting microwave signals from interfering with signals for the next station in the chain.

Due to the remote nature of the sites, 43 of the 59 repeater sites had to be fully self-sufficient in terms of power.

Initial planning limited the power requirements of the repeater sites to 500 watts. APO engineers looked at the prevailing wind patterns and determined that, combined with batteries, wind generators could keep these sites online year round without the need for additional power sources. Unfortunately this 500 watt power consumption target quickly tripled, and diesel generators were added to make up any shortfall on calm days.

The addition of the diesel gensets did not in any way reduce the need to conserve power – the more diesel consumed, the more trips across the desert to refuel the generators would be required – so the constant need to keep power consumption to a minimum was one of the key constraints of the project.

The designs of these huts were reused after the project for extreme-temperature equipment housings, including one reused by Broadcast Australia seen in Marble Bar – the hottest town in Australia.

Active cooling systems (like air conditioning) were out of the question as they were too power hungry. APO engineers knew that the more efficient the equipment, the less heat it would produce and the more efficient the system would be, so solid-state (transistorised) devices were selected for the 2 GHz transmission equipment instead of valves, which would have been more power-hungry and produced more heat.

The reduced power requirement of the fully transistorised radio equipment meant that wind-driven generators could provide satisfactory amounts of power, provided the wind characteristics of the site were suitable.


Forced to use passive cooling methods, the engineers on the project designed the repeater huts to cleverly use ventilation and orientation to keep them as cool as possible.

Construction was rough, but in just under 2 years the teams had constructed all 59 towers and the associated equipment huts to span the desert.

When the system first opened for service in July 1970, live TV programs could be simulcast on both sides of the country, for the first time, and someone in Perth could pick up the phone and call someone in Melbourne directly (previously this would have gone through an operator).

PMG engineers designed a case to transport the fragile equipment spares – it rode in the back of a Falcon XR station wagon

The system offered 1+1 redundancy and capacity for 600 circuits, split across up to 6 radio bearers; a bearer could at times be dedicated to TV transmission. These were carried on 5 watt (2 watt when modulated) carriers operating at 1.9 to 2.3 GHz.
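For a sense of the radio engineering involved, here is a quick free-space path loss estimate for an average hop. The hop length is an assumption (59 towers over ~2,400 km, treated as 58 roughly even hops), and a real link budget would also account for antenna gain, feeder loss and fade margin – this is just the textbook formula:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard Friis-derived formula)."""
    return 32.44 + 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz)

# Assumed: 59 towers -> 58 hops over ~2,400 km
hop_km = 2400 / 58                 # ~41 km average hop
loss = fspl_db(hop_km, 2000)       # 2 GHz carrier, mid-band

print(f"Average hop: {hop_km:.1f} km, free-space loss: {loss:.1f} dB")
```

Around 130 dB of free-space loss per hop is why large high-gain dishes on tall towers were needed even for a 5 watt carrier.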

By linking the two sides of Australia, a single time source could be distributed across the country: the time signal station VNG at Lyndhurst in Victoria generated a standard time signal, accurate to the order of 100 microseconds, that was carried across the link.

Looking down one of the towers

Unlike AT&T’s Long Lines network, which lasted until after MCI, deregulation and the breakup of the Bell System, the East-West link didn’t last all that long.

By 1981, Telecom Australia (no longer the APO) had installed its first experimental optic fibre cable between Clayton and Springvale, and fibre quickly became the preferred medium for broadband carrier circuits between exchanges.

By 1987, Melbourne and Sydney were linked by fibre, and the benefits of fibre were starting to be seen more broadly. By 1989, just under 20 years after the original East-West microwave system opened, Telecom Australia completed a 2,373 kilometre, 14-fibre cable from Perth to Adelaide; Optus followed in 1993.

This effectively made the microwave system redundant. Fibre provided a higher-bandwidth, more reliable service that was far cheaper to operate due to decreased power requirements. And so, piece by piece, microwave hops were replaced with fibre optic cables.

I’m not clear on which was the last link to be switched off (if you do know, please leave a comment or drop me a message), but at some point in the late 1980s or early 1990s the system was decommissioned.

Many of the towers still stand today and carry microwave equipment on them, but it is a far cry from what was installed in the late 1960s.

Advertisement from Andrew Antennas


East-west microwave link opening (Press Release)

Walkabout.Vol. 35 No. 6 (1 June 1969) – Communications Across the Nullabor

$8 Million Trans-continental link

ABC Goldfields-Esperance – Australia’s first live national television broadcast

APO – Newsletter ‘New East-West Trunks System’ 50 years since Project Australia

Whirlpool Post

TJA Article on spur to Lenora

NBNco’s FTTN – What’s in the box?

Note: All information contained here is sourced from photos provided by NBNco’s press pages, Googling part numbers from those photos, and public costing information.

This post covers the specifics and capabilities of NBNco’s FTTN solution, and is the result of some internet sleuthing.

If some of the info in here is now out of date, I’d love to know, let me know in the comments or drop me an email and I’ll update it.

FTTN in Numbers

A total of 24,544 nodes will have been deployed at the completion of the rollout. Each node is provisioned with 384 subscriber ports.

The hardware has 10 Gbps between the NT and each 48-port line card, equating to a theoretical maximum of 208 Mbps per subscriber.

Construction costs were $2.311 billion and hardware costs were $1.513 billion.

For the hardware this equates to $61,644 per node, or $160 per subscriber line connected (each node is provisioned with 384 ports).

The full cost per node, including hardware, construction and provisioning, is $244,150, which is $635 per port.

Operating the FTTN infrastructure costs $709 million per year (made up of costs such as power, equipment servicing and spares). This equates to $28k per node per annum, or $75 per subscriber. (This does not take into account other costs such as access to the copper, the transmission network, etc. – just the cost of having the unit powered on the footpath.)
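The per-node and per-port figures are straightforward division from the totals quoted above; a quick sanity check (all inputs are the publicly quoted numbers, nothing else assumed):

```python
nodes = 24_544
ports_per_node = 384

hw_total = 1.513e9                               # hardware spend ($)
hw_per_node = hw_total / nodes                   # ~ $61,644
hw_per_port = hw_per_node / ports_per_node       # ~ $160

full_per_node = 244_150                          # hardware + construction + provisioning ($)
full_per_port = full_per_node / ports_per_node   # ~ $635

opex_total = 709e6                               # operating cost ($/year)
opex_per_node = opex_total / nodes               # ~ $28.9k per annum
opex_per_port = opex_per_node / ports_per_node   # ~ $75

print(round(hw_per_node), round(hw_per_port),
      round(full_per_port), round(opex_per_node), round(opex_per_port))
```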


Inside the FTTN cabinets is an Alcatel-Lucent (now Nokia) ISAM 7330 mounted on its side.

On the inside left of the door is an optic fibre tray where the transmission links come into the cabinet.

On the extreme left is a custom panel. It contains I/Os that are fed to the 7330 – such as the door-open sensor and battery monitoring – along with the AC power input, surge protection device (SPD) and breakers.

Connection to subscriber lines happens on a frame at the end of the cabinet.

Alcatel Lucent ISAM 7330 FTTN

NBNco’s nodes are built around an Alcatel-Lucent (now Nokia) ISAM (Intelligent Services Access Manager) 7330 FTTN rack mounted on its side.

Slot 1 – GFC (General Facilities Card): power and alarm management
Slot 2 – NT slot (NANT-E card): main processing and transmission
Slot 3 – NTIO slot (NDPS-C card): VDSL vectoring number-crunching
Slot 4 – NT slot (free): optional (unused) backup main processing and transmission
Slots 5-12 – LT slots (NDLT-F): 48-port VDSL subscriber DSLAM interfaces

Slot numbering here just counts left to right; ALU documentation uses different numbering.

First up is the GFC (General Facilities Card), which handles alarm input/output and power distribution. This connects to the custom I/O panel on the far left of the cabinet, meaning the on-board I/O ports aren’t all populated, as those functions are handled by the panel. (More on that later.)

Next up is the first NT slot. There are two on the 7330, but in NBN’s case only one is used; the second can provide redundancy if one of the cards were to fail, but it seems this option has not been taken up. In the first and only populated NT slot is an NANT-E card (Combined NT Unit E), which handles transmission and main processing.

All the ISAM NANT cards support RIPv2! But only the NANT-E card also supports BGP – interestingly, BGP isn’t available on all the NANT cards.

To the right of that is the NTIO slot, which holds an NDPS-C card that handles the vector processing for VDSL.

Brief overview of vectoring: adding vectoring to DSL allows the crosstalk noise on subscriber loops to be modelled and then cancelled out with an anti-phase signal matching that of the noise.

The vectoring in VDSL relies on pretty complex number crunching, as the DSLAM has to constantly process the vectoring coefficients, which are different for each line and can change based on the conditions of the subscriber loop. To do this the NDPS-C has two roles:
The NDPS-C’s Vectoring Control Entity performs non-real-time calculation of vectoring coefficients and handles the joining and leaving of vectored VDSL2 lines.
The NDPS-C’s Vectoring Processor performs the real-time matrix calculations based on crosstalk correction samples for the VDSL symbols collected from the subscriber lines.
The NDPS-C has a Twinax connection to every second LT card.
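As a toy illustration of the idea (not ALU’s actual algorithm), here is a zero-forcing precoder cancelling crosstalk in a tiny simulated binder. The line count, crosstalk levels and channel matrix are all made up for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # toy number of vectored lines (a real binder has hundreds)

# Direct paths on the diagonal, weak random crosstalk off-diagonal
H = np.eye(n) + 0.05 * rng.standard_normal((n, n)) * (1 - np.eye(n))

s = rng.standard_normal(n)      # symbols intended for each line

y_plain = H @ s                 # no vectoring: lines leak into each other
x = np.linalg.solve(H, s)       # precompensate using the channel estimate
y_vectored = H @ x              # with vectoring: the crosstalk cancels out

print(np.max(np.abs(y_plain - s)))     # non-zero crosstalk error
print(np.max(np.abs(y_vectored - s)))  # effectively zero
```

The constant re-estimation of `H` as loop conditions drift is the real-time workload the Vectoring Processor exists to handle.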

After the NTIO slot is the unused NT slot.

Finally we have the 8 LT slots for line cards, which for FTTN are NDLT-F 48-port line cards.

The 8 card slots allow 384 subscriber lines per node.

These are the cards the subscriber lines ultimately connect to. With 10 Gbps available from the NT to each LT card, and 48 subscribers per card, that’s a maximum theoretical throughput of 208 Mbps per subscriber.
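The arithmetic behind that figure, using only the numbers quoted in this post:

```python
lt_cards = 8
ports_per_lt = 48
nt_to_lt_mbps = 10_000                       # 10 Gbps NT-to-LT link

ports_per_node = lt_cards * ports_per_lt     # 384 subscriber lines per node
per_sub_mbps = nt_to_lt_mbps / ports_per_lt  # shared by 48 lines on one card

print(ports_per_node, round(per_sub_mbps))   # 384 208
```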

POTS overlay is supported, which allowed VF (voice) services to coexist on the same copper during the rollout. M / X pairs are no longer added inline on new connections. (More on that under cabling.)

Power & Environment

The 7330’s 40 amp draw at -48 V means the unit can consume up to 1,920 W.

The -48 V supply is provided by two Eltek Flatpack2 rectifiers, each rated at 1 kW.

These can be configured to provide 1 kW with redundancy, protecting against the failure of one of the Flatpack2 units, or 2 kW with no redundancy, which is what is used here.
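Putting those numbers together shows why the non-redundant configuration was chosen – at worst-case draw the two rectifiers leave very little margin. A back-of-envelope check, ignoring rectifier efficiency and battery charging load:

```python
draw_amps = 40
bus_volts = 48
max_load_w = draw_amps * bus_volts          # 1,920 W worst-case load

rectifiers = 2
rectifier_w = 1_000
capacity_w = rectifiers * rectifier_w       # 2,000 W, no redundancy

print(max_load_w, capacity_w - max_load_w)  # 1920 80
```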


I did have some additional information on the batteries used and the power calculations, however NBNco’s security team have asked that this be removed.


Incoming transmission fibre comes in on NBNco’s green ribbon fibre, which terminates on a breakout tray on the left-hand wall of the cabinet. Spliced inside the tray is a duplex LC pigtail for connecting the SMF to the 7330. I don’t have the specifics on the optics used.

Subscriber lines come in via an IDC distribution frame (Quante IDS) at the right-hand end of the cabinet, accessed through a separate door.

This frame is referred to as the CCF – Cross Connect Frame.

There are two sets of blocks on the CCF, termination of ‘X’ and ‘C’ Pairs.

‘X’ pairs are the VF pairs (PSTN lines) connecting to the pillar, where they are jumpered to the ‘M’ pairs that run back to the serving exchange.

‘C’ pairs are the pairs carrying combined VDSL & VF services to the pillar, where they are jumpered to the ‘O’ pairs which run out to the customer’s premises.

Tiny Pillars in the CAN

On the rare occasions I’m not tied to my desk, I’m out for a long run along some back roads somewhere.

Every now and then I come across these tiny telecom pillars for cross-connection (and don’t shoot at them) – I mostly find them around the edges of distribution areas.
I had some recollection that these were originally for trunk lines between exchanges (maybe there was some truth to this?), but some digging in old docs shows these were just for interconnecting main or branch cables with distribution cables, in areas where the 600 and 1200 pair pillars / cabinets would be overkill.

They’re built like the 900/1800 pair cabinets, just scaled down, supporting 1x 100 pair main cable, 1x 100 pair distribution cable and 2x 50 pair distribution cables.

It seems these were largely decommissioned when NBN took over, leaving most with a big X sprayed on them.

While I was looking through the docs I also found reference to a 180 pair pillar, which looked very similar, but I’ve yet to see any of them left in the wild. Better keep running ’till I find one!

Want more telecom goodness?

I have a good old fashioned RSS feed you can subscribe to.

You can get the latest posts dropped into your inbox by subscribing to our mailing list

I cross post some of this content to LinkedIn and Twitter.