Teardown: Motorola CableComm Series II Cable Access Unit (MM1012A)

I remember when I was very young, peering out the window and seeing a bunch of workmen and trucks pull up in front of the house. In short order, they rigged a steel wire between the power poles, pulled up a thick insulated cable, and used a rotating lashing machine to bind the cable to the support wire. Before long, I heard on the news that this was the roll-out of the hybrid fibre-coax network, HFC for short, which was to carry pay TV to consumers. This was around the mid-90s, and there was frantic competition between Telstra and Optus to build the largest HFC network.

Before long, there were consumer complaints about the large number of unsightly cables in the air and the damage to trees, which had holes cut through them to let the wires pass. In the end, I believe it was agreed that Optus and Telstra would not overbuild each other's coverage footprint. With that, HFC seemed relegated to being a technology for the privileged few – much like Galaxy TV (a pay TV microwave broadcaster) was.

I never had any pay TV service myself, so things were simple and unchanged. The phone ran over Telstra copper lines (although resold under the Optus brand). Internet, which came later, was via dial-up access. Nothing complicated.

But when I passed by some of my friends' houses, I noticed something high up on their wall near the eaves. We didn't have one of these – it looked a bit like a kick-board from swim school, but had cables running in and out of the bottom. I took the time to ask, and the answers I got ranged from "I don't know" to a less-than-satisfactory "that's our telephone line". For years, I just assumed it was an oversized demarcation box, but then I noticed some homes had a second square grey box next to it … that could also have been a demarcation box. In which case, why two boxes?

This mystery has been in the back of my mind since I entered primary school, and was not resolved for me until just recently. Just the other week, our local area council had its final scheduled clean-up, which meant one last chance to go salvaging. To my luck, I found one of these units sitting kerbside. It was quite covered in dust, grout, spiderwebs and even came complete with some live beetles inside. I took it home, cleaned it up, and that’s what is pictured above.

Up close, we could see the Motorola branding, so it was probably going to be a piece of radio equipment – aha, HFC is RF-based. On the rear, we see it's an SII/CAU 1LN 65MHZ with a model number of MM1012A. The unit appears to have been refurbished in September 2003, which isn't that long ago. It was secured with security Torx screws which took a lot of work to remove, but I did so to clear out the scraps of cut wire and evict the wildlife.

With that information alone, I didn't find much more about it. I couldn't even find a picture of it online, even though every third house down some streets around here still has one hanging on the wall. I was still somewhat in doubt as to what it was – only once I opened it up did I fully realize what this unit was.

HFC Cable Telephony

After a bit of looking around, I resolved the mystery box to be a Motorola CableComm Series II Cable Access Unit. The product code seems to indicate it is a one-line telephony interface, with an operating frequency of 65MHz (likely a 6MHz-wide cable channel). But let's take a step back and recognize this for what it is.

In the early 1990s, competition in the telephony market was difficult. Telecom Australia (later Telstra) owned the copper line infrastructure which every reseller used. It was hard to offer more competitive prices when your minimum costs were set by your rival. Optus pinned its hopes on HFC to break this deadlock.

Over in the USA, where HFC is the primary telecommunications medium in some areas, the desire to offer "triple-play" services over HFC resulted in cable modems using various proprietary protocols. Systems designed to provide a voice port over HFC included Motorola's CableComm, Tellabs CableSpan 2300, Unisys DCSS, ADC HomeWorx, General Instrument's Mediaspan, Scientific-Atlanta's CoAxiom and Arris Cornerstone.

Motorola was a late entrant into this rather crowded market, but leveraging their expertise in trunked radio systems, they announced the CableComm system in 1994. Early trials were run with TCI and Teleport Communications Group in Arlington Heights, Illinois, with some tests between the houses of 25 selected employees. Full-scale commercial roll-out was planned for the first quarter of 1996 at a per-line cost of US$350 to US$550. The system was claimed to use TDMA, allowing between 500 and 1000 calls in a 6MHz channel.
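
As a sanity check on that claim, here's a back-of-envelope calculation. The modulation, overhead and codec figures below are entirely my own assumptions for illustration – Motorola's actual cable interface parameters aren't published anywhere I could find:

```python
# Rough check of the claimed 500-1000 calls per 6MHz channel.
# Spectral efficiency, overhead and codec rates are assumed values.

channel_bw_hz = 6e6       # one cable channel
spectral_eff = 2.0        # bits/s/Hz, e.g. QPSK with modest filtering
overhead = 0.20           # guessed TDMA framing/FEC/guard-time overhead

payload_bps = channel_bw_hz * spectral_eff * (1 - overhead)

for codec_bps in (64_000, 32_000, 16_000):   # PCM, ADPCM, low-rate codec
    print(f"{codec_bps // 1000:>2} kbit/s voice -> ~{payload_bps / codec_bps:.0f} calls")
```

Under these assumptions, 64kbit/s PCM only yields around 150 calls, 32kbit/s ADPCM around 300, and 16kbit/s around 600 – so reaching 500-1000 calls would imply compressed voice, denser modulation, or both.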

In November 1996, Coherent Communications Systems provided an echo canceller solution for Motorola CableComm, which appears to operate in the digital domain. The same article indicates that aggressive expansion would commence "during the remainder of 1996", which suggests things took a little longer to get off the ground than expected.

But Motorola definitely did bet big on chipping away at the grip that phone companies had on telephony – in the Chicago Business article, the boxes are described as "looking like paper towel dispensers". Apparently, Motorola rented the Lake Zurich factory for four years to produce these devices.

In early 1996, Optus chose the CableComm system to provide its competing telephone service. Even in 2002, they proudly claimed to have secured a long-term supply of this equipment. Note that they misspelled CableComm in the press release! A search for CableComm online shows that some people were locally employed to keep these units going past their discontinuation by Motorola.

However, as with all technology, it has a lifetime. Time was up for Motorola CableComm even before 2008 hit, as an expert report from Michael G. Harris of Harris Communications Consulting LLC points out. In this report, Optus is criticised for severe underinvestment in their HFC network and for oversubscription. They are also criticised for relying on Motorola CableComm – a discontinued proprietary system for which a multi-dwelling-unit-compatible telephony adapter was never developed. The older circuit-switched nature of the system also lagged behind the technology used in the US, where it had been, or was in the process of being, replaced with packet-switched solutions. The report points to Optus' lack of confidence in their own network – preferring not to hook up customers where other providers would have done so, and avoiding investment in the belief that HFC was an inferior technology. This apparently stemmed from Telstra's dispute with Optus regarding how customers were not connected to Optus' own network even where it was available, instead being served over Telstra's copper network.

In fact, as CableComm itself was a proprietary standard also used by their early internet cable modems, it was soon supplanted by the Data Over Cable Service Interface Specification (DOCSIS), an industry standard involving a large number of industry players. As IP-converged technologies became more popular, services were run over the "internet" data connection rather than occupying their own discrete channel spectrum, hence the move from circuit-switched to packet-switched networking.

In 2011, NBN Co cut a deal with Optus to shutter its HFC network, noting that it may use sections of it to run the NBN. By 2012, Optus had publicly announced it would turn off its HFC network in 2014, with complete dismantling by 2018, and customers would be migrated to the NBN. Unfortunately, and not unexpectedly in light of the previous critiques, the network was not up to scratch, with underinvestment leaving equipment at end-of-life and a rather complicated network architecture. Ensuring quality service would have required overbuilding. In the end, it appears the cable was abandoned instead, so the whole network had a lifetime of about 20 years. Despite the NBN choosing a multi-technology mix which should have accelerated deployment and reduced cost, the use of HFC seems to have caused its own headaches.

Telephony Thoughts

In researching this, I have come to understand a lot more about HFC telephony. What I discovered was somewhat fascinating, since it seems that some customers may not have had cable TV at all, with the HFC used solely to provide them telephony service.

Unfortunately, as I was never served by the CableComm system, I don't know what its characteristics were. Did it really provide carrier-grade service indistinguishable from a POTS line? Did data services work reliably? Even V.90? I have no idea. But it's important to recognize that, in some way, these "early" HFC telephony services were a first step towards competing against traditional telecommunications companies, and were the "step" in-between POTS and full-blown packet-switched VoIP (whether through a softphone or ATA) or even VoLTE.

That being said, as cable is a shared medium, security would have been a potential issue. While it is presumed to be a fully digital codec system, did it utilize encryption? Was there the potential for calls to be "eavesdropped on" by others served by the same node? Was there any encryption on system metadata? Were there actual vulnerabilities in the system which might allow others to place calls on another subscriber's account? In modern SIP VoIP telephony, we took a big step backward, with most calls going out in the clear using G.711/G.729a. Now that we have a security focus, maybe such old systems would not pass muster anymore.

Even on the RF side, it would be fascinating to know – did it really operate at 65MHz on the cable with a 6MHz bandwidth? How were the TDMA and return channel achieved? What was the physical-layer coding? Was FEC involved? Is it still operating for the last few people that might need it before the NBN comes in?

To be honest, circuit-switched technology may be outmoded, but it does offer a more consistent, guaranteed quality compared to packet-switched technology, which can be affected by latency, jitter, packet loss and contention – issues that established circuit-switched calls do not have to worry about. When it comes to running voice-band modem data services, circuit-switched technology generally works better.

Unfortunately, information seems to be scant. It's probably moot anyway, but it's always intriguing to know. As I don't have any equipment to use this unit with, it's not something I can determine.

However, one disadvantage is already apparent – as it's not a traditional copper line, you can't get ADSL2+ unless you reconnect a traditional copper line. With a "CableComm"-provided line, you can only get cable internet at the price Optus charges you. Cable internet competition is pretty low compared to ADSL, so it can be more expensive as a result. Maybe this was "built-in inertia" to stop bundled Optus customers from going back to a copper line. At least the newer Optus Cable services use modems with an integrated packet-switched VoIP service instead.

Teardown

We saw the unit front and back earlier, so continuing on, there is a mounting hole at the top.

At the bottom, there are three cable entry positions, with two spring-loaded security Torx screws securing the covers. The covers can be slid down to reveal the connection terminal area. There is a recessed area next to the “Manufactured by Motorola” text which may have been used by service providers to apply their branding – but I’ve never seen it used around here.

Sliding off the larger outer cover reveals the parts which are "customer responsibility" – namely, the customer's own internal telephone wiring pair and an F-connector for the cable "loop-out". This is terminated with a 75-ohm terminator, suggesting this subscriber may not have had a pay TV or cable internet service. The cable entry boot was inverted in the photo after I cleaned it.

A test jack is supplied to allow testing of the line output from the CAU without interrupting the wiring.

The wiring attaches to a plastic block which snaps down onto IDC fingers.

Removing the service provider cover reveals the F-connector for the incoming HFC feeder line. This is also where the unit derives its power – up to 100V carried over the coax. A grounding bar is provided as well. The connection on the left appears to provide an alternative power connection, a covered diagnostic port and a socket which may allow for a failover service.

The port is similar to an 8P8C modular connector, as used for LAN.

The IDC blocks and test jacks themselves are a module that can be removed. The part number is 0104080X10, with this unit being Revision B, made Week 44 of 1999. Vendor is 912, Made in the USA.

On the inside, the CAU is further identified with its MSN/ESN. Apparently it was Assembled in Singapore. Given the numbers, I wonder if it is somehow related to GSM just running over coax, in which case encryption is part of the standard. Maybe it also runs voice compression?


The top cover is surrounded by a rubber gasket and is supposed to stay nice and “dry” as that’s where the brains are. This board showed only very slight signs of corrosion at some points. We’ll take a more detailed look at the board once it’s freed from the base.

The line interface module, however, once removed, was found to be entirely potted and thus not serviceable. Is it just a bunch of wires? Or are there some surge-protective elements inside?

In the interests of complete disassembly, I removed the screw holding the can over the radio section of the CAU. Internally, part numbers of 2604550X03/2604550X04 are printed, along with Issue A Vendor 912. Date is coded as 2228 and 1758 respectively. The cans were Made in Mexico.

The top side of the board is where most of the action happens. We will break it down into sections, but overall, the board has a code of 8404629X01 Issue-0. Printed in white is 21399-03-8 with a date of 12th March 1998. It clearly identifies as Made in USA, Motorola CableComm and CAU Series 2. The barcoded label in the centre has the text “MMLN5061B 606VLW 01/11/99 MM1012A” which seems to suggest a factory alignment date of 11th January 1999 along with the product code of MM1012A. The board is a fairly complex six layer PCB. It definitely looks like Motorola quality at a glance.

Starting in the lower-left, we can see two symmetrical line interfaces. The left side is Line 1 and is populated, with Line 2 unpopulated, as this is a single-line unit. Lines are protected by fuses on both tip and ring. The line interface is handled by an Ericsson PBL 387 10/1 SLIC, which takes the place of a traditional hybrid transformer. The inductors may be responsible for ring voltage generation. The resultant audio is handled by a Motorola MC14LC5480DW PCM CODEC-Filter. A row of test pads is brought out as well. The fingers at the bottom connect to the potted module to make connections to outside lines.
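
For context, a PCM codec-filter of this sort digitises the analogue line audio into 64kbit/s companded PCM. The sketch below shows G.711 μ-law encoding as an example of the companding involved – whether this system actually used μ-law, A-law or something else is my assumption for illustration, not confirmed:

```python
# Minimal sketch of G.711 mu-law encoding: compress a 14-bit signed
# linear sample into an 8-bit code, as a PCM codec-filter would when
# producing a 64 kbit/s (8000 samples/s x 8 bits) voice channel.

def mulaw_encode(sample: int) -> int:
    BIAS, CLIP = 33, 8158
    mask = 0xFF if sample >= 0 else 0x7F        # sign handling per G.711
    magnitude = min(abs(sample), CLIP) + BIAS   # biased into [33, 8191]
    exponent = magnitude.bit_length() - 6       # segment number 0..7
    mantissa = (magnitude >> (exponent + 1)) & 0x0F
    return ((exponent << 4) | mantissa) ^ mask

print(hex(mulaw_encode(0)))      # 0xff - silence
print(hex(mulaw_encode(8000)))   # 0x80 - near full-scale positive
print(hex(mulaw_encode(-512)))   # 0x3e
```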

The right half of the image is the power conditioning portion of the unit consisting of an assortment of solid electrolytic capacitors (good choice), inductors and transformers. A 2A fuse appears to protect the unit. The power is controlled by a TOP224G with feedback via an MOC8106 optoisolator.

The top left corner has a large Motorola 5104545X01 IC, which may be an ASIC, along with a 4.608MHz crystal. The firmware is stored on a pair of flash chips.

The left one has a Motorola label with 5170806C07 120398 printed on it. Assuming they're both identical, the chips are AMD AM29F200AB 2Mbit flash chips, thus the unit contains at most 512kiB of firmware. Complementing these are two Samsung KM62256DLGI-7 32k x 8 SRAM chips for a total of 64kiB of RAM. A Motorola XC68LC302CPU20B 68k CPU is seen in the top corner – the part number suffix suggests 20MHz operation. Interestingly, the CPU is pretty beefy – a relative of the 68k CPUs that powered whole Macintosh LC desktop computers. The top corner also has some edge contact fingers, probably for factory alignment.

The radio section has some Motorola custom ICs, a filter, a few capacitors, an adjustable oscillator and a bypass relay (RK1-L2-4.5V-H33). The filter networks are pretty elegant to behold, but way above my ability to design or comprehend.

The underside of the board has fewer components in comparison.

The radio section predominantly features small surface-mount components. U203 is an RF Micro Devices RF2317 linear CATV amplifier (DC to 3GHz). Aside from that, the other components are not very notable, with the exception of C441, which appears to be a discoloured tantalum surface-mount capacitor. This unit may have suffered some damage at some point (e.g. surge? reverse polarity?) and may not be functional.

The remainder is not particularly notable. But at least, now I know what the unit is for, and what’s inside.

Optus’ HFC Network

Prior to all of this, on a previous salvage walk, I decided to look at the HFC cables in my neighbourhood out of interest. I had an expectation that Optus' HFC network would not be usable, based on observations of the equipment in the air – but since we're discussing HFC, I might as well put up some images. I'll first say that I'm not an expert in HFC technology by any means, nor am I familiar with the equipment used, but I present my observations even if some interpretations are incorrect.

At this time, I’m living in a townhouse complex with something like 28 units. Our HFC feed is this cable that crosses the road. It’s not well bound to its support wire, thus the snaky appearance, but it has just a two-way splitter at the end. No way will it serve the “population” especially if all users are connected – such equipment has to be re-planned/replaced.

The splitter itself is Scientific Atlanta branded, a brand acquired by Cisco in 2005. As this is the pre-acquisition logo, the equipment is at least 12 years old now. Such equipment has a useful life of roughly 15 years, so replacement would not be unexpected.

The nearest trunk amplifier seems to be this unit, with the incoming trunk passing through a Regal combiner with surge protection. It definitely looks dated, but the cabling itself is slightly confusing. It seems there are two trunks coming in on the left, with the combiner used to split the signal so the trunk continues through to the right along with the other trunk, and one output from the amplifier making up the three cables on the right. But two of the trunk cables have an end-to-end join under shrink-wrap. Maybe it's the opposite direction, but I'm not familiar enough with the equipment to know.

That being said, the physical condition of the aerial cabling “needs attention” – the fine wrapping wire that secures the lines to the support wire has come free and “unwrapped” along a section.

The support wire is held to the poles with simple hardware – interestingly, in our area, there is support wiring with no HFC cable bound to it. The run may have initially been planned, but then abandoned due to a lack of potential customers.

Not knowing much about the specification of the cables used, I noticed that some of the cables in the area seem to be a darker colour, whereas others are more “grey”. Maybe it’s related to sun exposure, or different batches of cables, but here’s another set of joints with some “cable tie” bundling.

Sometimes, along the road, we see such boxes, and sometimes they even hum loudly to signal their presence. I don't think this is an HFC node, but more likely a power injector, as it has a mains input and its output goes up the pole into a coupler.

The couplers/splitters in question all seem to be Regal branded in my area. The marked frequency response is 5-1000MHz, and the model is RDLC10-12S290V.

The splitters in the area also come in some other varieties – in the case of larger splits, it seems to be a powered splitter, but where the two-core power feed comes from is not obvious to me. It's not just mains, is it?

We can see the varying penetration levels, but rarely are all existing ports taken, as the popularity of pay TV is relatively limited. The second splitter seems to have one port where a former customer's drop was merely "cut off". An improperly terminated F-connector port may well result in signal reflections and distortions, or noise ingress into the cable network, as well as potential damage to the splitter from water entering through the stub of cabling, helped along by the capillary action of the coax braid.
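
To illustrate why that cut-off stub matters, here's a quick calculation of the reflection coefficient at an impedance discontinuity. The "water-degraded" impedance is purely an assumed value for illustration:

```python
import math

# Gamma = (ZL - Z0) / (ZL + Z0): the fraction of the incident wave
# reflected back into the network at an impedance discontinuity.

def reflection_coefficient(z_load: float, z0: float = 75.0) -> float:
    return (z_load - z0) / (z_load + z0)

for label, zl in [("75-ohm terminator (matched)", 75.0),
                  ("cut-off stub (approximately open)", 1e9),
                  ("water-degraded port (say, 30 ohm)", 30.0)]:
    gamma = abs(reflection_coefficient(zl))
    rl_db = float("inf") if gamma == 0 else -20 * math.log10(gamma)
    print(f"{label:34s} |Gamma| = {gamma:.3f}, return loss = {rl_db:.1f} dB")
```

A matched terminator reflects essentially nothing, while an open stub reflects almost everything back into the network – which is exactly why unused ports should be capped with 75-ohm terminators.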

There are un-powered variants as well, which seem more “expected”.

This is an anchor point that’s attached to the support wire to secure the drop to the customer.

As for amplifiers, around our area, there’s the occasional “flat” one that looks like this. From what I can gather, this appears to be a line extension amplifier, to help push the signal further down the line. I’m not sure the customers connected at the end of this would enjoy the best signal-to-noise ratios.

The other amplifiers look as above, with varying port-position configurations and numbers of cables leading in and out. Just looking at the PDFs about related units on Cisco's website suggests they're pretty complex.

While it does seem somewhat complicated, this is relatively anaemic compared to the networks I witnessed overseas in South Korea and Taiwan, where HFC is one of the core technologies for delivering high-speed broadband. I might comment more on this once I get some holiday photos up – but needless to say, HFC outages and service trucks were constantly attending to the network in South Korea, and customers were being migrated to FTTH services, so the lifetime of HFC is limited at best.

Conclusion

At long last, a question I had from childhood has been resolved – namely, what’s that box on the wall? In examining this, I learned a lot about the history of telephony over HFC in terms of the alternative technologies available, the hopes of the companies, and its ultimate fate.

Looking at the Motorola CableComm CAU, I’ve managed to get some good shots of its internals which I couldn’t find online, so that’s a pretty unique opportunity in my eyes. While I’m not able to ascertain the operational status of the unit, or run it, it was still good to know what was used internally.

It also gave me a chance to share some of the photos of the Optus Vision Cable HFC network equipment in my area. In the end, it seems I’m still left waiting for NBN, but it looks like it will be VDSL2 when it arrives … late and underwhelming.


Tested: “Compatible” Nikon EN-EL14 Batteries vs Genuine

Amongst the published articles on the site, it seems there is a fair amount of interest in the ones dealing with “compatible” and “fake” camera accessories, specifically Nikon EN-EL14 and EN-EL14a batteries.

Having just recently returned from my first extended holiday, and now armed with the B&K Precision Model 8600 DC Electronic Load, I thought it would be a good idea to see just what the capacity of my surviving genuine and compatible EN-EL14/a series batteries was, to see if they are even worth bringing with me. After all, luggage allowances are like gold.

Methodology

I rounded up all of my surviving EN-EL14/a series batteries of which there were five. Two of the batteries are genuine batteries as supplied with a Nikon D3200 and D3300 body. The remainder were “compatible” aftermarket batteries. Their characteristics are noted in the table below along with their weights.

Specifically, the compatible EN-EL14a is identical to the failed unit in this prior teardown. The first compatible EN-EL14 is the surviving "brother" of this failed unit. The second EN-EL14 has not yet otherwise featured on the site. Unfortunately, no living samples of this particular variant were available.

The batteries were connected to the tester using a short length of AWG18 wire and a spade crimp, which was "compressed" using the crimp tool so as to fit within the genuine Nikon battery sockets. This was not necessary for the counterfeit batteries, which had wider terminals that could accommodate the spade terminal without modification.

The “compatible” battery is on the left, with a genuine battery on the right. The wider terminal shroud seems to be a common factor amongst all compatible batteries tested to date.

Regardless, testing using such spade terminals is not recommended for end users, as it risks contact fatigue or damage to the plating – the terminal block was not designed for mating with spade terminals. I decided the risk was worth the reward (for me) and completed the testing this way with no ill effect. However, if you choose to replicate this set-up, you do so at your own risk.

Testing was simplified to a 300mA constant-current load, despite the camera load profile being more "peaky" due to the mechanical loads of a DSLR. The obtained results may not perfectly reflect the true "in-service" capacity of the battery as a result, especially if live view or the pop-up flash are used. The loading was chosen as a "safe" value below the (anticipated) 1C rating of the worst battery, which should not cause any dangerous situations to arise. The test termination voltage was set to 5V (2.5V per cell) so as to reduce the risk of any permanent cell damage.
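
For the curious, the headline mAh and mWh figures fall out of a simple integration of the logged discharge. This is a minimal sketch of the arithmetic, assuming a hypothetical CSV log with time_s and voltage_v columns – not the actual B&K 8600 export format:

```python
import csv

def capacity_from_log(path: str, current_a: float = 0.300):
    """Integrate a constant-current discharge log into mAh and mWh."""
    mah = mwh = 0.0
    last_t = None
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t, v = float(row["time_s"]), float(row["voltage_v"])
            if last_t is not None:
                dt_h = (t - last_t) / 3600.0          # sample interval in hours
                mah += current_a * 1000 * dt_h        # charge delivered
                mwh += current_a * v * 1000 * dt_h    # energy delivered
            last_t = t
    return mah, mwh

# e.g. mah, mwh = capacity_from_log("en-el14a_discharge.csv")
```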

As the batteries were not obtained new and are significantly aged, the results are not representative of fresh batteries. Of note is that failed samples of the compatible batteries have been excluded by the very nature of their failure, and thus one cannot infer the reliability of the batteries from the fact that a certain capacity remains after a number of years of use.

Results

If you haven’t read the methodology section, please do so to ensure you are aware of the caveats of the testing involved and significance of the results.

A summary of the results is presented in the table above. Both genuine Nikon batteries achieved just above 90% of their claimed mAh capacity, and above 88% of their claimed energy capacity. The EN-EL14 achieved 930mAh and the EN-EL14a achieved 1116.9mAh – fairly good results considering their ages of 4.5 and 2 years respectively.

The compatible cells varied somewhat. The compatible EN-EL14a, which should boast a greater capacity than the older EN-EL14, reported a result of 709.7mAh, which is 57.7% of the claimed capacity.

The first compatible EN-EL14 bested this by offering 837.3mAh, which is within 100mAh of the result of the genuine cell. However, it boasts an inflated capacity claim on the label, and as such could only achieve 59.8% of its claimed capacity. In energy terms, it achieved 59.7% of its inflated energy capacity claim.

The second compatible EN-EL14 only delivered 220.8mAh in this test, just 21.4% of its claimed mAh capacity. It fared no better in the energy department, with 20.2% of its claim. This battery appears to be showing signs of impending failure, namely the gross cell-capacity imbalance observed in previous failures.

Looking at the voltage trends, most of the batteries (with the exception of the final one) show a reasonably traditional Li-Ion discharge characteristic. The results had some noteworthy features. Neither of the genuine Nikon batteries cut out at 5V; instead, the test software stopped the test once the voltage reached 5V during discharge. This implies the cell-protection cut-off threshold of genuine cells is likely below 2.5V per cell, maximising capacity and reducing the risk of inadvertent power-down during a card write. It may also put more onus on the camera software to finish its tasks before battery depletion and to prevent user operations that would stress the battery.

The other compatible batteries "dropped out" at voltages above 5V, although the threshold is not constant, which implies one of the two cells in the battery has hit its minimum voltage and the battery is "disconnected" to protect that cell from damage. This behaviour could (in the worst case) result in power being removed from the camera unexpectedly.

The failing battery has a discharge curve that is essentially linear, which appears unusual but may indicate internal resistance issues along with cell-capacity imbalance. This battery is almost worthless to me, and thus it now participates in a teardown in the name of science.

Teardown

This “failing” specimen is black in colour with laser-etched text on one side and the top near the battery terminal. No other distinguishing features appear on the casing. It is very well sealed, and opening it required snipping at the case with side-cutters.

Inside, double-sided adhesive tape and foam rubber strips are used to position the cells. The cells are unmarked.

On the reverse, white silicone adhesive is used to secure the battery to its shell. As is common with other clones, the mid-point cell connection is provided for cell balancing.

The board proves to be identical to this earlier failed compatible battery, with the same markings but different cells used.

As I was curious as to the state of the batteries, I charged this EN-EL14 to full. This took a normal amount of time (~2hrs) despite its reduced capacity, potentially because the BMS is balancing the cells at the top of charge or due to internal resistance issues. I then repeated the CC 300mA discharge, but addressing each cell individually with a cut-off of 2.5V.

A gross capacity imbalance is seen, with one cell registering almost twice the capacity of the other under the same test conditions. This suggests that Cell 1 was limiting the battery capacity. Even more significant is the shape of the curve, which lacks the traditional sharp "drop" at the end of discharge.

While the discharge terminated at an on-load voltage of 2.5V, after 12 hours unloaded, the voltage rose to 3.6V for Cell 1 and 3.7V for Cell 2. I suspect this indicates the battery may indeed hold more capacity, but cannot deliver it at the requested current due to internal resistance.
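
Using the figures above, a crude single-point internal resistance estimate can be made from the rebound: R = (V_rested - V_loaded) / I. It ignores normal post-discharge relaxation, so treat it as a rough guide rather than a proper measurement:

```python
# Crude internal-resistance estimate from voltage rebound after load removal.

def internal_resistance_ohm(v_loaded: float, v_rested: float, current_a: float) -> float:
    return (v_rested - v_loaded) / current_a

# Cell 1: cut off at 2.5V under the 300mA load, rebounded to 3.6V unloaded.
print(f"~{internal_resistance_ohm(2.5, 3.6, 0.300):.1f} ohm")   # ~3.7 ohm
```

A few ohms per cell is very high for a battery expected to source an amp or more, which is consistent with the live-view behaviour described below.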

This is relevant, as one observation I have made is that the compatible cells rarely achieve the same level of performance in live view mode, which is demanding current-wise. Engaging live view for non-trivial amounts of time brings up the low-battery indication more quickly than the capacity figures alone would suggest.

Conclusion

Compatible batteries have been a bit of a mixed bag, especially recently. They feature inflated ratings at times, and almost all are now built with low-quality domestically-produced Li-Ion cells, which result in gross cell imbalances and battery failures within a relatively short time. The ones that have not failed still do not match the genuine cells, nor meet their ratings by any stretch; however, they may not be completely worthless.

Unfortunately, this is not a simple pickle to resolve, as the cost of compatible batteries is much lower than the genuine articles, and for most people they do not cause any damage and carry only a small level of risk. However, if you end up with a bad batch, replacing batteries time and time again, queueing them for charge and waiting, along with the increase in bulk carried, is also a "cost". To date, while the clones haven't cost me any photos, they have limited live-view video opportunities and occasionally caused a shutter error to appear (resolved by pressing the shutter button again). I suspect this is because internal resistance issues result in a voltage "dip".

Genuine batteries are not that easy to come by and can be difficult to differentiate as well. That being said, it seems all compatible EN-EL14/a batteries tested have wider terminal-shroud spacing and cut off before reaching 5V under load – simple characteristics that could help differentiate "counterfeit" batteries from genuine ones.

Regardless, a compatible battery in the process of failing was torn down and examined – which brings the number of different EN-EL14/a batteries torn down to four types across two families of PCBs.


Tech Flashback: External IDE HDD – USB 2.0 vs Firewire 400 (IEEE 1394)

When I originally decided to repair the power supply for the Cooler Master Xcraft external enclosure, my first thought was just to do it because of the sheer utility value of having such an enclosure around. But then I realized, this was going to be a good opportunity to revisit one of the small battles from the past – namely USB 2.0 versus Firewire.

A Brief Technical Background of the Interfaces

Universal Serial Bus, or USB for short, was first standardized in 1996 by a consortium of seven companies, mostly computer-related. The original standard provided for Full-Speed 12Mbit/s and Low-Speed 1.5Mbit/s transfer rates.

It wasn’t until 2000 that USB 2.0 became available. It was an upgrade to the original USB 1.x series of standards, offering higher data rates of 480Mbit/s (high-speed). This was capable of operating over the original four-conductor cable as used by USB 1.x, and peripherals were backwards and forwards compatible – operating at the maximum speed mutually supported by both host and device. Cables and plugs were mostly standardized, with some manufacturer-specific deviations. However, it gained significant support on both PC and Apple platforms and replaced a number of legacy ports.

USB operates as a strict master-and-device hierarchy, with all transactions initiated by the bus master, usually the computer. This served to simplify and reduce the cost of peripherals, which may have been part of the reason for its success and ubiquity. USB offers both power and data connections, and is hot-pluggable. Several device classes were defined for easier device configuration. Since then, the faster "SuperSpeed" variants, USB 3.0 and 3.1, have become popular, while maintaining backwards compatibility for the most part.

IEEE 1394, also known as Firewire, i.LINK or Lynx, was an alternative interface which offered similar features. Firewire development pre-dates USB, beginning in the late 1980s and completing in 1996, driven primarily by Apple with contributions from semiconductor manufacturers and computer-related companies. Notably, Intel was not one of the parties, and this was frequently cited as the reason for its lack of market penetration.

Firewire contrasted with USB somewhat by operating at rates of 100, 200 and 400Mbit/s (marketing rates, with actual physical rates marginally lower) from the outset. Later improvements boosted the rate to 800Mbit/s and higher, although these used different connectors and signalling, and were relatively unpopular and short-lived by comparison.

Unlike USB, Firewire peripherals were more complex, owing to the need for any device on the bus to be able to perform as bus master. This increased the cost of Firewire peripherals (in general). The bus also allowed for various abilities not catered for in USB with its strict master-device relationships (although this changed later with the introduction of USB On-The-Go (OTG)). Daisy-chaining of devices was common on Firewire buses in a way it never was with USB, and it was even possible to use Firewire for host-to-host connections, as a "network" link or to access another computer's hard disk (e.g. MacOS Target Disk Mode).

Firewire proved to be more efficient by supporting direct memory access (DMA), which, unfortunately, also made it a (potential) channel by which attacks could be launched on target computers simply by plugging in malicious peripherals.

Similar to USB, the original Firewire 400 standard had a number of standardized connectors – a six-pin which carries power, and a four-pin which carries data only. While it carries power in a similar way to USB, the voltage was not standardized and was "nominally" around 25V, although it could be much lower depending on the design of the port itself. It is also capable of hot-plug operation, with standardized device classes for ease of device configuration. However, USB touted 127 possible devices per bus, with Firewire topping out at 63 – rarely an issue in ordinary usage.

Despite the superiority of Firewire at its introduction when compared with USB 1.1, it remained relegated to niche roles such as transporting video and audio (e.g. for MiniDV camcorders). Lack of support from Intel meant that Firewire interfaces were often an "add-on" extra for PC users, and the introduction of USB 2.0 brought comparable performance, reducing the incentive for some users to install such an interface.

The Motivation

One major use of high-speed external interfaces is connecting external storage to a computer. Firewire was initially envisaged as a way to replace the parallel SCSI bus for hard drive connections, for example. Prior to the advent of USB 2.0 and Firewire 400, high-speed interfaces were at a premium; their arrival allowed for much-improved external drive performance, enabling multimedia and more demanding applications on external devices. As a result, manufacturers of more premium high-end external enclosures and devices began to offer both interfaces, catering for users who may have had access to only one or the other (e.g. early iMacs with USB 1.1 and Firewire 400).

Despite the similarities in the advertised physical-layer data rates of 480Mbit/s and 400Mbit/s, users noticed differences in the performance. Where both interfaces were available, it wasn’t as simple as choosing USB 2.0.

To see just how different these interfaces performed, I decided to test it myself with some semi-modern hardware so that we have a good indication of just how different the performance was. As it turns out, my everyday AMD Phenom II x6 1090T BE machine with a Gigabyte 890FXA-UD7 motherboard features both USB 2.0 ports hosted by the chipset and Firewire 400 ports hosted by a Texas Instruments IEEE 1394 controller.

The Enclosure and Drive

The test was conducted with the Cooler Master Xcraft enclosure. It has a glossy metal finish on the external cover.

Vented grilles are provided to allow for the drive to cool, and rubber feet stop the drive from sliding around. A vertical stand is also provided, but not pictured.

The front has a logo badge which serves as a one-touch back-up button (for the software which it was initially bundled with) as well as being an indicator light of power and activity (blue and red respectively).

As this is a more premium unit, we can see a female USB Type-B socket to interface with the host PC, but also two female USB Type-A sockets to allow additional drives to be connected. This indicates the controller board has a USB hub IC on board; note, however, that USB topology is limited to seven tiers – at most five hubs between host and device – and reliability can suffer as nesting deepens. In general, many enclosures of the day, even dual-standard ones, did not offer this facility.

There are also two six-pin Firewire sockets, and both are "equal" – either one can be used to connect to the host, and the other to daisy-chain. Some earlier models may have had three of these ports rather than two; more rarely, drives with just one port were also made. The advantage of this set-up is that no Firewire hubs are necessary when connecting a number of these units together – Firewire hubs were rarely encountered and much more expensive than USB hubs. On the downside, there are potential reliability issues, especially with long cables and the potential for bad connections mid-bus.

But of real importance is the internal chipset that powers it, as different chipsets and firmware can affect performance and compatibility somewhat. The PCB shows that the solution is made by SKYCable Co. Ltd. On the top side, there is an SMSC USB2504A-JT USB 2.0 four-port hub IC, along with the firmware flash chip soldered to the board, a PMC Pm39LV512-70JCE.

The chipset is buried underneath and is an Initio INIC-1530s. This chipset claims to be IEEE 1394a-2000 compliant, with asynchronous transfers up to 400Mbit/s. It supports the SBP-2 protocol over Firewire for maximum performance, and USB bulk-only transport. The IDE side supports up to UDMA 100, making it a fairly high-performance solution. Next to it is an Agere L-FW802C, which I couldn't find a datasheet for, but presume is a Firewire PHY.

To test the unit, I grabbed a random IDE drive I had lying around in my room that needed some exploring. The drive in question is a Western Digital Caviar SE (WD2000JB-00GVC0) 200GB 7200rpm hard drive with 8MB of cache. While it wasn't the fastest, largest or densest drive in my possession (that would be the 320GB IDE drives, which are still employed in legacy-computer service), it should be enough for demonstration purposes. As a note, this drive was manufactured in 2005 and would have been taken out of service around 2011. It has rested for six years … and it still works with all its data intact.

The Results

Testing was performed with HDTune Pro and ATTO.

HDTune Pro Sequential Read

On the USB 2.0 interface, with no other devices connected to the controller, the chipset managed a steady 31.1MB/s transfer rate. This is considered fairly good for USB 2.0, as the highest results normally sit at about 30-32MB/s, with common user rates varying from about 17MB/s to 28MB/s depending on the host controller and the devices connected to the bus (as bandwidth is shared).

However, when connected via Firewire, the drive was able to achieve a very solid 40MB/s almost throughout the transfer. Towards the end, the limitation became the drive's own media transfer rate in the inner zones of the platters. A faster drive would have been able to "fill in" this section and average about 40MB/s.

As a result, Firewire wins when it comes to sequential reads, with 40MB/s compared to 31.1MB/s on USB 2.0. So despite Firewire's physical-layer rate being lower than USB 2.0's, its reduced overheads result in a substantial (28.6%) advantage.
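
Put another way, the interesting number is payload efficiency relative to each bus's advertised physical-layer rate. A quick calculation using the measured figures (ignoring encoding and protocol detail, which this simple ratio lumps together as "overhead"):

```python
# Payload efficiency = measured throughput / theoretical raw throughput.

tests = {"USB 2.0": (480e6, 31.1e6), "Firewire 400": (400e6, 40.0e6)}

for bus, (phy_bps, payload_bytes_per_s) in tests.items():
    raw_bytes_per_s = phy_bps / 8                    # advertised rate in bytes/s
    eff = payload_bytes_per_s / raw_bytes_per_s
    print(f"{bus:13s} raw {raw_bytes_per_s / 1e6:4.0f} MB/s -> "
          f"measured {payload_bytes_per_s / 1e6:4.1f} MB/s ({eff:.0%})")
```

USB 2.0 delivers about 52% of its 60MB/s raw rate, while Firewire delivers about 80% of its 50MB/s – hence the win despite the lower headline figure.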

Of note is that, because bus bandwidth is shared, external drive-to-drive copies are fastest when copying from one bus to another – one drive on Firewire and another on USB 2.0, for example. If you have multiple independent USB controllers (as I had previously chosen to configure my machines with), then having one drive on a PCI card and the other on the onboard chipset should see full performance. Hooking them up to the same bus often results in half the performance due to the scarcity of bus bandwidth – note how the hard drive is faster than the interface and is essentially bottlenecked by it.

HDTune Pro Sequential Write

When writing, a virtually identical result is seen for USB 2.0 with an average of 31MB/s transfer rate.

However, Firewire encounters some limitations here with a reported rate of 25MB/s. This is an unexpected finding and may be chipset/firmware related or OS-related due to Windows 7’s handling of caching for “SCSI” devices versus USB devices.

So, unexpectedly, when it comes to writes, USB 2.0 wins. But this is likely not because it is technically superior, but because of the way Windows 7 handles write caching, command queueing (not applicable to USB BOT) and buffer flushing (even when best performance is selected).

ATTO

ATTO shows USB having parity between read and write performance, as implied by HDTune Pro. Best performance is reached at 64kB accesses and above.

Firewire’s performance discrepancy is reflected in ATTO as well. The performance does not increase as sharply with block size as with USB 2.0, but overall absolute transfer rates for small block accesses are much improved over USB 2.0 showing another advantage. This may be related to command queueing and DMA access.

Conclusion

While Firewire was never as popular as USB 2.0 in its heyday, and is all but dead now, anyone who used both interfaces probably already knew that Firewire was mostly superior for external hard drives despite its lower "advertised" physical-layer rate (400Mbit/s vs 480Mbit/s for USB 2.0), thanks to its more efficient use of bus bandwidth.

The differences were quantified on a modern machine with sequential read transfer rates in favour of Firewire by 28.6%. Sequential writes showed an unexpected disadvantage for Firewire, possibly due to OS or chipset limitations, with USB 2.0 besting Firewire by 23.5%. However, when small block accesses are involved, absolute figures from ATTO show 4kB read accesses on Firewire are 2.6x faster, and writes are 2.4x faster.

Generally speaking, Firewire appeared to be the superior bus for external hard disk enclosures. Aside from the write anomaly, it "tunnels" SCSI disk commands over the SBP-2 protocol, and certain chipsets could even support multiple LUNs (e.g. a Master and Slave device) on the IDE bridge board – I did exactly that with a Prolific PL3507 in the past. Some difficult ATAPI devices seemed to work better as well. Such features are not often supported over USB.
