At last! Another random post … code-word for “frustrations of the past few weeks that don’t quite deserve their own posting”. This time, it’s crap cables (again), repairing a problematic USB charger, building my own USB “condom” and being subject to Microsoft’s idea of an “upgrade”. I bet you can already smell the frustration …
More Crap Cables: DisplayPort and Network
I was in need of a pair of decent mini DisplayPort to DisplayPort cables, so I thought I’d chance it on eBay to grab something at a decent price. I came across a listing priced at around AU$4.50 per cable, which didn’t seem entirely unreasonable, with images that matched much more expensive listings. The claim was that the cables were “DisplayPort 1.2” capable – never mind that cables aren’t really supposed to be sold by version number. That being said, the claim implies the cable is HBR2-rate capable, as most standard DisplayPort cables should be.
When it arrived, I was still optimistic about my chances, but handling the package dampened that optimism – the cable felt surprisingly thin and light-weight.
Looking at it from the outside, everything looks rather swell. Gold plated ends, well-formed plugs – almost everything you might want it to be. That is, except for any markings of certifications on the cable and the thickness!
It had the right ends, so I decided to give it a try. Oh boy, was it a disappointment. The seller claimed it would do 2K, but in truth it couldn’t even provide a stable image at lower resolutions and refresh rates, instead occasionally throwing up a noisy patch or blanking out/freezing entirely. Stepping the refresh rate up resulted in a frozen image with occasional blacking out, and trying the full claimed resolution was entirely futile as the monitor would not even sync.
Contacting the seller, they claimed they had made a mistake and the cable was only good for 1080p (yeah, right), but I obtained a refund immediately. Let’s take a look at what you get inside …
Trying to attack the cable at the connector ends proved fruitless – the connectors are decent and the cables are moulded into the connectors which ensures better connections.
Undressing the cable, we find the outer sheath along with a thin foil shield. No braiding, nor drain wire that I could see.
Inside were six wires for the various auxiliary channel and power connections, along with four individually foil-screened differential pairs which make the four data lanes.
The whole assembly is lightly twisted, and doesn’t seem too bad at first, but maybe they don’t have the right impedance or sufficient shielding so the signal integrity is poor even at the lower resolutions. So while they may have used the right materials (to a degree), made a cable and sent it all the way to my house, it’s absolute rubbish that will just end up in a bin. What a waste and likely all because someone in China decided to save a few bucks.
Later on, I was a bit short of network cables so I grabbed some off eBay – just simple 1.8m straight leads. It’s quite hard to avoid this kind of “flat” cable – what happened last time was the abomination that was a notwork cable. This time, I received a flat cable that claimed it was a Cat.6 Flat Cable.
In truth, I find this claim hard to believe. The cable itself is indeed flat and thin. If you dare to plug it in, it does achieve a 1Gbit/s link and seems to work just fine.
But the truth is that this cable is very unlikely to meet the standard at all. A proper Cat 6 cable has strict dimensional requirements, and even incorporates a separator between the pairs, to ensure cross-talk is well controlled. In a flat cable like this, that doesn’t seem achievable. More than that, the thinner conductors used are likely to cause more signal loss. However, if the loss and cross-talk are not too bad, devices may continue to operate, albeit poorly.
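To put a rough number on the thinner-conductor point, here’s a back-of-envelope DC resistance comparison between a standard solid Cat 6 conductor and the sort of thin wire a flat cable might use. The gauge choices are my assumptions for illustration, not measurements of this particular cable:

```python
# Back-of-envelope DC resistance: standard Cat 6 conductor vs. a thin
# flat-cable conductor. The AWG sizes are assumptions for illustration.
import math

RHO_CU = 1.724e-8  # resistivity of copper, ohm-metres


def resistance_per_metre(diameter_mm: float) -> float:
    """DC resistance of a solid copper conductor, in ohms per metre."""
    area = math.pi * (diameter_mm * 1e-3) ** 2 / 4.0  # cross-section, m^2
    return RHO_CU / area


r_24awg = resistance_per_metre(0.511)  # typical solid Cat 6 conductor
r_32awg = resistance_per_metre(0.202)  # plausible thin flat-cable conductor

print(f"24 AWG: {r_24awg * 1000:.1f} mohm/m")
print(f"32 AWG: {r_32awg * 1000:.1f} mohm/m")
print(f"Ratio:  {r_32awg / r_24awg:.1f}x")
```

Even at DC, the thin conductor is over six times more resistive per metre; at the frequencies gigabit signalling uses, skin effect and poorer pair geometry only make matters worse.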
In my case, I noticed that every few minutes, the NIC’s link would drop for a fraction of a second and re-establish at gigabit speed, repeating throughout the day and causing stutters and interruptions to real-time streams. If I hadn’t been watching the link lights on the switch flicker off and on, or checked the event log, I wouldn’t have known.
Firing up the Intel NIC diagnostics, the cable quality was rated as faulty with a fault at 0m – but the NIC continued to establish a link as best as it could. If you have these cables, maybe they’re good enough for 100Mbit/s, but they could be quite marginal at 1Gbit/s. Even my old round Cat5 cables passed this test with excellent quality even at moderate lengths, so this particular flat cable had to be pretty bad to fail.
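If you suspect a flaky cable but don’t want to stare at link lights all day, a few lines of code can tally the drops for you. This is just an illustrative sketch: the helper counts up-to-down transitions in a series of sampled link states, which on a Linux box could be polled from /sys/class/net/<interface>/carrier (the sample data here is made up):

```python
# Count link "flaps" (up -> down transitions) from a series of sampled
# link states. On Linux the state could be read once a second from
# /sys/class/net/<interface>/carrier; the sample below is fabricated.

def count_flaps(samples):
    """Count up->down transitions in a sequence of link states (1=up, 0=down)."""
    flaps = 0
    for prev, cur in zip(samples, samples[1:]):
        if prev == 1 and cur == 0:
            flaps += 1
    return flaps


# Hypothetical once-per-second poll showing two brief drops:
states = [1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1]
print(count_flaps(states))
```

Logging a count like this over a day makes it much easier to correlate the drops with stream stutters than watching the switch by eye.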
The Solution: A Good DisplayPort Cable (at a Cost!)
As I desperately needed a decent mDP to DP cable, I went down to a local computer shop to grab one for AU$12. It was more expensive, but at least it should work.
This one claimed to be 4K capable, but superficially, I couldn’t really tell aside from the writing on the packet.
The cable did come with a guarantee, so at least I was covered there.
Suffice it to say, the cable looked pretty similar and was only marginally thicker than the other. The big difference was that this cable had printing claiming it was a DisplayPort cable – but anyone can pretend, right?
The connector ends are similar, but there was a bit of glue residue on the mDP side which didn’t inspire confidence.
The part number did reinforce that this cable was intended to be 4K capable – and indeed, when I tried it out, it worked just fine at 4K. So I guess spending a few extra dollars really did make the difference.
USB Charger Repair
Just the other day, I needed to access the internet through my backup WAN connection when I discovered it had failed. It was running on a Raspberry Pi and had over 68 days of uptime by then, so why was it offline?
I walked up to the board to find the power LED slightly dim and all the other lights out. The power supply was making a bit of a high-pitched squeak. Uh oh, USB charger failure!
Since the EU mandated the use of USB chargers, it’s saved a lot of waste in incompatible chargers going to landfill and has resulted in more efficient chargers being widely used. The downside is that switch-mode supplies do have a finite lifetime, with their cost-cutting design approaches and high stress on components being a contributor to their eventual failure. As a result, sometimes charging issues are caused not by cables but the chargers themselves – hence the proliferation of “charger doctor” style devices.
But more than that, I suspect that some chargers were not made for a heavy duty cycle application – charging often happens only for a few hours each day, whereas repurposing them for running SBCs results in high loads around the clock.
Regardless, this Kogan-branded KAUSBXXADPB 5V/2A USB charger has been with me for about three years and has served me well. Its compact size has been quite convenient, and the shell makes it look like a Ktec supply (which have been quite good in general).
This one was noticeably sick – at a draw of just 50mA, it was only producing about 4V. No prizes for guessing the cause – bad capacitors. Opening this thing up was not pleasant, as it was not designed to be serviced and required a good squeeze in the “vice of knowledge” (as is termed by bigclivedotcom).
The charger is built on a single-sided paper-type PCB. The PCB is marked with a date of 22nd February 2013, PCBSam0076 Rev.A and e-mail [email protected] The design uses a Shenzhen Strong Link Electronics SL2128C controller with optoisolated feedback. The input is fused, going through four separate diodes as a bridge, with an inductor for some filtering.
The underside shows good separation between primary and secondary, and is marked 145P1R3250A HH.
The back-plate of the case where the power connections reside is insulated by a piece of plastic-coated paper. Surprisingly, there are holes which almost-align with the soldered mains connections, seemingly defeating the purpose. I suppose it was designed to have the insulated wires pass through the holes, but then the mains connections loop around the top of this PCB instead.
The obvious bad capacitor is a Jwco 1000uF 10V capacitor on the output. Close inspection shows that the top is bulged but did not vent; instead, the rubber bung at the base has blown out.
The other capacitors used are not particularly reputable either. The primary side has two parallel 6.8uF 400V ChengX capacitors, with a KSJ 470uF capacitor and another Jwco 47uF capacitor to round out the electrolytics.
The inside of the top case has some oily residue from electrolyte that must have vented over the years. Surprisingly, the failure was rather abrupt, with the Pi rebooting unexpectedly once, then going down about 1.5 hours later never to be heard from again. As a result, thankfully, the SD card did not get corrupted.
Most people would take one look at a dead charger and just chuck it out – probably a fair call given the low price. However, there are many shoddy chargers out there as well, so buying a cheap eBay replacement almost never bodes well. Instead, I decided not to waste this one and gave it a repair, replacing the capacitors with spare Panasonic, Rubycon and Nichicon parts. The only capacitors I didn’t replace were the primary-side ones – I don’t normally hold stock of higher voltage-rated capacitors, so I just left them as-is.
The clearance was a bit of a problem for one capacitor, so I decided to bend it over the USB port.
As the case was not designed to be serviced, I wrapped it up tightly in electrical tape as a way of reassembly. It’s probably not as safe as it was before – but as only I will be using it in a fixed location out of reach – I didn’t feel this was a big issue.
After the repair, the voltage is stable at a load of 200mA, provided by a USB LED keyring device. That’s good enough to pass.
As for the removed capacitors, the two Jwco parts fared poorly – the 1000uF measured just 25.48uF and the 47uF just 35.12uF, both out of tolerance. The unknown KSJ capacitor was quite acceptable, measuring 488.6uF against its 470uF rating, which was rather unexpected. At least this fix got me back online in the space of about an hour.
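As a sanity check on those readings, here’s a quick script comparing the measured values against a typical ±20% tolerance for general-purpose electrolytics (the tolerance figure is my assumption, as these parts carry no obvious tolerance marking):

```python
# Check the measured capacitances from the post against an assumed
# +/-20% tolerance for general-purpose aluminium electrolytics.

def within_tolerance(rated_uF, measured_uF, tol=0.20):
    """True if the measured value lies within +/-tol of the rated value."""
    return abs(measured_uF - rated_uF) <= tol * rated_uF


caps = [
    ("Jwco 1000uF", 1000.0, 25.48),
    ("Jwco 47uF",     47.0, 35.12),
    ("KSJ 470uF",    470.0, 488.6),
]

for name, rated, measured in caps:
    deviation = (measured - rated) / rated * 100.0
    verdict = "OK" if within_tolerance(rated, measured) else "FAIL"
    print(f"{name}: {measured}uF ({deviation:+.1f}%) -> {verdict}")
```

The 1000uF part is down some 97% – essentially an open circuit as far as ripple filtering goes – while the 47uF is about 25% low, also a fail; only the KSJ squeaks by at about 4% high.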
DIY: USB Condom
I’ve not been one to advocate plugging mobile devices into random USB ports – it’s a bit of a risk, and it could even result in damage if the port is miswired or the power is not to specification. There have been other concerns too, such as unintended malware installation, compromise of the operating system on your mobile device, and data exfiltration.
More than this, I have found it rather frustrating that some of my devices just won’t pull high charging currents without confirming they are plugged into a charger of the right sort – e.g. the USB Battery Charging standard calls for shorting the D+ and D- pins together, whereas some of my other chargers do other things.
As a result, I decided to kill two birds with one stone by building what is often termed a “USB condom” – this one was done on the spur of the moment as a five-minute project.
The first step is to grab some connectors off the shelf – luckily for me, I’ve got a few Molex products from previous orders lying about. I found that on the A-side, it’s easy to grab some pliers to pull out the D+ and D- pins from the shell, thus guaranteeing no data connectivity on the upstream A connector. This will, however, also disable the ability for Qualcomm Quick Charge to negotiate charging voltage.
Then, making sure I had the right orientations to ensure correct polarity, I bent the D+ and D- pins of the receptacle together and soldered them. I tinned the 5V and GND leads, shoved them into the plug and reflowed the solder with a copious application of the hot air gun (also melting some plastic in the process).
I then soldered the metal shells together on both sides – this is for mechanical support to create a “rigid” adapter which won’t stress the thin contacts. This construction also means no PCB – saving parts and reducing resistance, important for maintaining voltage and charge speed.
From there, it’s a simple case of wrapping the exposed bits in electrical tape and it’s done – just be sure you have the connectors the right way around otherwise you will end up supplying reverse-polarity to a device which may ultimately kill it.
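The wiring above effectively hard-wires the “dedicated charger” signature. As a toy model of the detection logic – a simplification of the USB Battery Charging 1.2 behaviour, not the full state machine – the device-side decision might look like this:

```python
# Toy model of a device's charging-current decision under USB Battery
# Charging 1.2: a Dedicated Charging Port (DCP) shorts D+ to D-, while a
# normal data port leaves them separate. This is a simplification for
# illustration, not the full BC1.2 detection state machine.

def max_draw_mA(dplus_shorted_to_dminus: bool, enumerated: bool) -> int:
    """Very rough maximum current (mA) a device might allow itself."""
    if dplus_shorted_to_dminus:
        return 1500   # DCP detected: BC1.2 permits up to 1.5 A
    if enumerated:
        return 500    # configured on a standard USB 2.0 host port
    return 100        # unconfigured: stick to the USB 2.0 minimum

print(max_draw_mA(True, False))   # charger (or this adapter) with D+/D- shorted
print(max_draw_mA(False, True))   # ordinary PC port after enumeration
```

With D+ and D- tied together inside the adapter, the downstream device always sees the charger signature, which is exactly why it will now pull full charging current from any port while the severed data pins keep that port from ever talking to it.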
Windows 10 October Update
For Christmas, Microsoft wisely decided to let me have the October update just this morning. This update was fraught with negative press, namely that it would delete your files or cause havoc with display drivers. However, receiving the update this late in the cycle, I thought Microsoft would have had the issues fixed. Sort of.
As usual, a “feature update” is always a large download – consuming about a quarter of my LTE monthly quota unannounced. Thanks Microsoft. It also takes a while to install, as it’s like installing a whole new version of Windows. After around four or five reboots, we’re back in … except … it’s not quite the same Windows.
After the update, I found the boot times to be significantly slower than before. Bummer. Some file associations which were just fine before the update now ask whether I want to keep using that particular app – why? It’s not like I’ve changed my mind.
But more than that, I found myself without any network connection at all. I thought I found the solution to my networking issues in VLAN operation through Intel Advanced Network Services (ANS). Instead, I rebooted to find all my VLAN configurations were lost.
Not the end of the world – I would just reconfigure them, right? Wrong. As per this article, I was out of luck. I could add the VLANs again, but all adapters were then disabled, including the untagged one, and no amount of binding/unbinding/uninstalling/reinstalling protocols or drivers would fix it. Only by removing all VLANs and returning the adapter to its basic configuration could I get any communication on the NIC at all.
I thought to myself that I might just need a driver update – this might just be a compatibility issue with the new Windows release. I grabbed the latest drivers, published just this month, and installed them including ANS.
The result was no ANS tabs in the adapter properties at all – no more advanced configuration available. I even checked the PROSet Adapter Configuration Utility, and there is no VLAN support at all.
So, this upgrade is practically a downgrade – how could it break something that was working fine for months prior? Something so basic as VLAN operation?
Right now, I’ve had to resort to using three NIC ports, connected to three ports of my TL-SG105E, to perform “hardware” VLAN tagging and untagging independently of the OS in a way that could be considered reliable. It’s a mess of cables and a needless complication, but my VLANs are a way of life now and I can’t exactly let them go. I did consider spawning a VMware instance with a few virtual network adapters, but I couldn’t fathom losing access to the desktop if the VM OS didn’t boot.
My other option would be to roll back the update, but as we all know, Microsoft only ever looks forward. Eventually, I would still have to apply it. Rolling back now would only mean eating up my LTE quota again a few months down the line, when either another update comes out or it’s decided that I must have this update to get future ones.
So thanks Microsoft … for breaking my machine today. Just what I needed for Christmas.
Phew. Another post of random is over and I feel like I’ve been able to vent my frustration, which is a healthy thing to do. Hope you enjoyed this random post and hope to get a few more posts out over the holiday season when I might (finally) get some time!