Friday the 13th, commonly known as Black Friday, is believed to bring bad luck. While I’m not a superstitious person, nor do I have any fear of the number 13, it seems there was something special about this Friday.
Recently, Sydney has been hit with many stormy and showery days. Just a week earlier, on the 6th, a shelf cloud rolled over our area, bringing a very ominous, deep dark sky in the middle of an otherwise bright day.
It didn’t pour down on us then, but with such weather around, it’s understandable if something were to eventually happen. It’s a matter of statistics – in fact, if you look at the MV lines in the photo, the leftmost seems to have a kink in it, which may suggest it was once hit by lightning.
Given how much I’m into technology, and how much we rely on it, having reliable power is very important. In fact, right now I have a number of servers operating 24/7 for various tasks, including experiments for my PhD that need full-time operation and a quick response if anything goes wrong.
Luckily for me, power outages are fairly rare where I am. If I had to guess, we’d experience a power outage longer than a minute only once every five years or so. Part of the reason is that the LV distribution is completely underground where I am, as our complex is served from a dedicated padmount, so outages generally only happen when there is a fault on the incoming MV lines. That being said, our MV lines are not without incident, especially around stormy days, as they seem to have been struck or damaged from time to time, but so rarely as not to be a worry. Not much will protect you from a stray car crashing into a pole or substation, for example.
In fact, they are so rare that I had given up two of my UPS units, as they were never really “saving” any equipment from an unexpected power outage – they’d often fry their sealed lead-acid cells before even being called on for the first time. Should anything serious arise, the chances a UPS would tide things over for long enough to matter are slim to none. In fact, I’ve had outages arise from UPS faults more often than from actual power outages, namely when a self-test goes wrong and the changeover back to mains is not performed fast enough to stop the attached equipment from being unceremoniously rebooted.
In the afternoon of the aforementioned Friday, I was home, and a storm was thundering overhead. In a bit of a joking manner, I sent one of my friends an instant message saying …
I then went for a short nap after the storm had passed, and I was quite pleased nothing had actually happened. I didn’t expect anything to happen, as it didn’t sound very ferocious, and more dangerous storms had passed earlier in the week with little effect.
I woke up from my nap with a little unease at about 4:30pm. The nagging thought that the power might go out lingered in my head … but we settled in for dinner at 6:00pm, unaware that we would literally be experiencing a black(out) Friday night.
The Power Goes Out
The sky was still somewhat bright, clouds diffusing the sunlight, and the weather was mostly calm with some sprinkles at that time. We had almost finished dinner when, around 6:30pm …
… flicker … silence (as the machines, and TV playing loudly at the dinner table shut down) … then flicker again … a whirring comes from the laser printers … and then another flicker … silence. The CFL globe above the dinner table strobes slightly, but is running severely dimmed.
We are in a brownout. The undervoltage condition is so severe that most equipment ceases to operate. The Wi-Fi APs? Down. The servers? Gone. The ADSL modem? Nup. Network switches? Dark with an occasional flash.
For a second, I was confused. I looked towards the upstairs stairwell light, and it seemed just as bright as before. The microwave downstairs, however, had a blank display. Running upstairs, I found my PhD experiment, powered from a Manson HCS-3102, still running, but the computer and multimeter were down. Was this a problem with our cabling? Had we lost a phase?
This was quickly ruled out when I booted up the Tektronix PA1000 Power Analyzer (which uses an XP Power universal-input switchmode module as its power supply), which told me we had 55 V RMS across active and neutral. We only have one phase, so we could rule out the loss of a phase (at least, to our premises). I grabbed a USB key hoping to hit the log button on the PA1000 and get some readings down, but as soon as I hit the button, the voltage fell enough that the analyzer turned off for good without so much as writing anything to the stick. Now I was really “in the dark”, with all the remaining lights gone.
Quickly, I used the dim light of the sky to gather the torches, radio and batteries, ready for a potentially prolonged outage, and shut off the most sensitive equipment in case it would be affected by any fluctuations in the line condition.
The Engineer Seizes the Opportunity to Troubleshoot
The last time we had a power outage a few years back, I noticed that it was not a complete blackout but instead a severe undervoltage condition. I didn’t have enough gear to safely monitor the mains during an outage at the time, but it seemed there was at least a good 40–50 V across active and neutral, to the point that some of the hardier switchmode supplies were noisy (low switching frequency, high duty cycles) but operated as if nothing were wrong. This seemed to be the case early on this time too, but now, I’m not so sure.
Luckily, I was better equipped this time around, and I decided to make this an experiment in itself. But how does one run an experiment when the mains is out? Well, I had the battery-operated Keysight U1461A DMM and U1117A Bluetooth link handy, which I hooked to the mains pretty much immediately. I also grabbed my phone, set it to tether, and ran the Keysight Mobile Logger app, with a Xiaomi power bank keeping it happy. I also grabbed the HP Stream 8 tablet to get online and send out my obligatory social media posting.
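The real logging was done by the Keysight Mobile Logger app, but for anyone curious, a homebrew timestamped-CSV logger of the same flavour is trivial to sketch. Note that `read_rms_voltage()` below is a hypothetical stand-in that simulates the residual voltage, since I don’t have a programmatic interface to the DMM to show here:

```python
import csv
import random
import time
from datetime import datetime


def read_rms_voltage() -> float:
    """Hypothetical stand-in for querying the DMM; the real readings
    came from the Keysight Mobile Logger app. Here we simulate the
    ~15 V residual seen during the outage."""
    return max(0.0, 15.0 + random.uniform(-2.0, 2.0))


def log_voltage(path: str, samples: int, interval_s: float = 1.0) -> None:
    """Write timestamped RMS voltage readings to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "v_rms"])
        for _ in range(samples):
            writer.writerow([datetime.now().isoformat(),
                             f"{read_rms_voltage():.2f}"])
            time.sleep(interval_s)


log_voltage("outage_log.csv", samples=3, interval_s=0.0)
```

In practice, one reading per second or so is plenty for this sort of event, and a CSV makes for easy graphing afterwards.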
Thanks to this set of equipment, I was able to finish my dinner while watching the line voltage, which made for a much more thrilling and interesting dinner than it would otherwise have been. It did feel rather strange, because the power went out in several stages while the weather was fairly calm. It likely meant that some storm-weakened tree had fallen onto the lines, conducted current where it shouldn’t go, compromised insulation, etc. I would have expected even that to be mostly transient, resulting in either quick restoration or complete loss, rather than the 50 V RMS seen initially and the 15 V RMS remaining afterwards. Was there a breaker somewhere that didn’t open as it should? Or a stable-ish high-resistance path somewhere? Not being in the industry, what I was seeing was slightly cryptic.
Very soon, half an hour had passed and the power was still out. Strangely, the line voltage continued to hover at 15 V RMS, which wasn’t entirely “dead”, and allowed some hardy switchmode supplies to “strobe” on and off and make awful “dying cat” noises. Technically, according to the definitions set by Ausgrid, anything below 10% of nominal (i.e. 23 V RMS) is considered a loss of supply, but I was expecting to see something much closer to zero. I wondered if this was the effect of local grid-tie PV inverters testing the line, but of course, they shouldn’t be pumping power into a dead grid.
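Putting that threshold into code makes the classification concrete. The &lt;10% “loss of supply” figure is the one quoted above; the 230 V nominal and the 90% undervoltage boundary are my own assumptions for illustration, not official Ausgrid or Endeavour Energy definitions:

```python
NOMINAL_V = 230.0  # assumed Australian nominal mains voltage (AS 60038)


def classify_supply(v_rms: float) -> str:
    """Rough classification of a single-phase RMS voltage reading.
    The <10% "loss of supply" threshold is from the text; the 90%
    undervoltage boundary is illustrative only."""
    if v_rms < 0.10 * NOMINAL_V:   # below 23 V RMS
        return "loss of supply"
    if v_rms < 0.90 * NOMINAL_V:
        return "undervoltage/brownout"
    return "normal"


print(classify_supply(15.0))   # residual voltage during the outage
print(classify_supply(55.0))   # roughly what the PA1000 measured
print(classify_supply(244.0))  # typical voltage in my area
```

So by the book, our lingering 15 V RMS still counted as a loss of supply, however odd it looked on the meter.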
Needless to say, I kept the Mobile Logger app logging while I sought to investigate. Because the logger was running on my phone, I left the house without it and took a walk around. Sure enough, the whole complex and a few streets down, in a wedge shape, were blacked out. The sad part was that barely 200m to either side of the complex, houses had their lights running just fine – they were on different MV circuits from what I can tell, and it was only a long wedge-shaped sliver in the middle that was out.
While wandering around, I saw an Endeavour Energy ute pull up to a padmount substation just up the street, probe a few points with a meter, and then hastily drive off further down the road. He saw me looking at him with piercing eyes, but he didn’t say anything and kept speeding away from me. At least, by that point, I knew that someone was on the case.
Thanks to a friend, I learned that Ausgrid had restored power in a neighbouring suburb just an hour earlier. As it turns out, it wasn’t long after my premonition posting that Ausgrid had alerted that outages had happened in Bass Hill (just a few km away), with most customers restored an hour before we went out. As we were not in an Ausgrid area, we instead had to visit Endeavour Energy’s pages to find out what was going on.
Interestingly, even doing this, I ran into a technical fault – namely that Endeavour Energy’s main pages are served over HTTPS, but the frame for their Outage Management System is served over plain HTTP, so Firefox refused to even show it; only by using IE did I see the mixed-content warning and realize what was happening.
Sure enough, after a while, the details turned up with an estimated restoration time of 11:30pm – an estimated down-time of five hours. Hardly the most convenient result, but at least there was some information … and I suppose an understandable outcome where there are multiple outages to attend to after-hours.
Needless to say, the lack of power made for a relatively boring evening for most. I was going to have a relaxing evening blogging, but without my desktop full of media assets, it wasn’t going to happen. The 3G wasn’t holding up that well either, and the bandwidth quota was always a worry, so that was a “no” on cloud storage and streaming entertainment. I wished I had loaded some content onto my tablet or laptop – I’ve been living with the luxury of NAS storage, but it’s no use when there’s no network!
It was a warm and relatively uncomfortable evening. We couldn’t even take a shower, since we relied on gas instantaneous hot water, which has an electric ignition and control system. In the end, I took out a DAB+ radio to listen to some music … and waited the time out lying on my bed, thinking of how to plan better for the future, while simultaneously realizing that outages are so infrequent it’s pointless to worry. Instead, I was very happy to have bright LED flashlights and decent rechargeable batteries. Imagine waiting it out with an incandescent flashlight or … a candle.
Luckily for us, at 9:14pm, the power came back to loud cheers from our neighbours. Around two and three-quarter hours of black-out felt like an eternity, but hey, I had to wait it out so that I could restart all my services! Only after that did I have some time to look at the data collected.
The Experimental Results
Looking at a full overview of the collected data, it’s a little regrettable that I didn’t spring into action a little quicker to capture the initial brown-out. I suppose that’s how things go when you plug in your gear after an event. The capture shows a period of severe undervoltage, followed by a “pure” blackout segment, then the restoration itself. It doesn’t look that special at full scale, so let’s focus on the individual sections.
Prior to the recording, there was a severe undervoltage condition, with about 50 V RMS seen on the PA1000 before it completely shut down. The mystery of the lights becomes very obvious when we compare the wattage profiles of a CFL and an LED lamp with the voltage swept downward by a variac:
The wattage is represented in blue. The CFL on the left has fairly primitive regulation, so its output power is almost linear with voltage down to about 50 V RMS, where some arc instability appears. The LED, in contrast, has an almost constant power profile from 100 V RMS upwards, and a linear profile from about 30 V RMS upwards. With this information, it’s clear why the brown-out dimming was much more obvious on the CFL: it is less well regulated, whereas the LED is actively regulated, compensates for wide voltage changes, and can seem perfectly bright even at 50% of nominal voltage.
This is exciting but also distressing, as a constant-power load increases its current draw to compensate for falling input voltage, placing a higher load on the grid precisely when an undervoltage condition exists – which itself normally occurs due to heavy loading. I suppose as more devices become switching-regulator based, the worse the power quality they can tolerate thanks to active regulation, and the less effect brown-outs will have.
In the deep undervoltage section, we can see the voltage had a mostly slow downward trend, but there is a “notch” lasting almost five minutes, which suggests something may have happened – maybe a few branches burnt out and other branches fell on top, or something moved somewhere. This deep undervoltage section lasted for the first hour or so of the outage, suggesting no real work was done during that period, which was probably mostly reporting lag, response time and investigation.
The downward slope seems interesting as well – is it a sign that voltage is being resistively coupled into the line when it shouldn’t be, with the resistance increasing over time? Maybe a pole conducting current because it was wet, then drying out from the heating? A quick check of the sunset time tells me the sun had set at 7:34pm, so it isn’t likely to be the grid-tie inverters around the area.
Regardless, it seems likely that whatever was providing this residual voltage could source a decent amount of current, as many of my switching supplies were squealing and trying to run …
Once whatever was supplying the 15 V RMS was removed from the line, the line voltage fell to something much closer to what I expected for a blackout, now hovering around 45 mV RMS – a very low level. Even there, some work is evident: at about 8:05pm, a small transient spike is seen, then the voltage falls a little, followed by another drop a minute later. I suspect these were a few isolation events – linesmen may have opened a pole-top switch here and there, or isolated a neighbouring phase, reducing the inductive pick-up. Maybe at the end they installed a shorting jumper to discharge the line as well, resulting in the roughly 5 mV background, which is likely induced EMF/RF (basically the lines acting like radio antennas).
Then, at 9:14pm, power was restored in a single action without any further issue. I continued monitoring in case there was another short interruption or transient problem, but there wasn’t. As is usual for my area, the voltage hovers around 244 V RMS.
A power outage is very annoying, especially in our age of technology reliance. Almost all conveniences are gone without electricity, but thankfully, it is a rare occurrence. That being said, I decided to turn it into an opportunity to see what was happening using some test equipment, and while I didn’t expect to see anything interesting, the data seems to imply certain remedial actions were being performed on the grid itself, as seen from my power point at home.
Now hopefully it won’t go out again anytime soon.