Video Codec Round-Up 2023 – Part 17: librav1e (“Fastest & Safest” AV1)

At last, we come to the final codec in this round-up: librav1e, the third software option for AV1, produced by the Xiph.Org Foundation and marketed as the “fastest and safest” AV1 encoder. The project makes no particular claims about quality and has generally not had a great reputation in that regard, so it makes for an interesting test of where the low-water mark lies for AV1 encoding.

For this codec, encoding was performed on Windows, using my Intel Core i7-3630QM-based system for the majority of speeds, with the exception of some encodes at speed 0 and speed 1, which were performed in parallel on an i7-1370P due to time constraints. To its credit, for many of the speed settings, the encoding speed was not so slow as to rule out encoding on a 3rd-generation Intel CPU. The encoding flags were as follows:

-qp [qp] -speed [speed]

Encoding speeds from 0 to 10 were trialled, with QP values ranging from 0 to 248, as the codec does not support CRF or CQ rate control. Options for row/tile-based multithreading were not configured, as these options may have a slightly adverse effect on output quality. As a result, encoding was unbearably slow for higher-bitrate, higher-quality outputs, since the codec was essentially “single-threaded” and utilised the CPU poorly. Understanding this, many encode processes were dispatched in parallel where possible, subject to RAM constraints, to ensure maximal CPU usage without affecting encoder quality.
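To give a concrete picture of what such a parallel dispatch might look like, here is a minimal Python sketch. The input file name, the output naming and the QP step are my own assumptions for illustration, not the actual commands used, and the dispatch is left as a dry-run.

```python
# Hedged sketch of dispatching a QP x speed sweep in parallel.
# SOURCE, the output naming and the QP step are assumptions for
# illustration, not the actual commands used in this round-up.
import itertools
from concurrent.futures import ThreadPoolExecutor

SOURCE = "source.y4m"      # hypothetical input file
QPS = range(0, 249, 8)     # QP 0..248, step chosen for illustration
SPEEDS = range(0, 11)      # speeds 0..10 as tested

def build_command(qp, speed):
    out = "librav1e_qp%d_s%d.mkv" % (qp, speed)
    return ["ffmpeg", "-i", SOURCE, "-c:v", "librav1e",
            "-qp", str(qp), "-speed", str(speed), out]

def run(cmd, dry_run=True):
    if dry_run:
        return " ".join(cmd)  # report what would be executed
    # subprocess.run(cmd, check=True)  # real dispatch would go here
    return " ".join(cmd)

# Cap the worker count to respect RAM/CPU constraints.
with ThreadPoolExecutor(max_workers=4) as pool:
    cmds = [build_command(q, s) for q, s in itertools.product(QPS, SPEEDS)]
    results = list(pool.map(run, cmds))

print(len(results))  # 32 QPs x 11 speeds = 352 queued encodes
```

Capping `max_workers` keeps memory usage in check while still saturating the CPU when each individual encode is effectively single-threaded.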

Video Quality vs QP at Various Speeds

Let’s see how the QP value influences quality for this codec.

While a QP of 0 generates very large files, the result is not entirely lossless, depending on the speed; in general, though, PSNR remains fairly close regardless of speed. Below a QP of 96, PSNR increases steeply, roughly exponentially, towards lower QPs. Above a QP of 96, PSNR decreases linearly until around 240, where it somewhat plateaus. A PSNR of 45dB is achieved near QP=40, and a PSNR of 40dB at around QP=120.
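As a refresher on the metric itself (the measurements here were presumably made with a tool such as ffmpeg’s psnr filter, which I’m assuming rather than stating), PSNR is derived from the mean squared error against the source. A minimal sketch for 8-bit samples:

```python
# Minimal PSNR computation for 8-bit samples; shown only to illustrate
# the metric, not the measurement tooling used in this round-up.
import math

def psnr(original, degraded, peak=255):
    mse = sum((a - b) ** 2 for a, b in zip(original, degraded)) / len(original)
    if mse == 0:
        return float("inf")  # identical content is lossless
    return 10 * math.log10(peak ** 2 / mse)

# A uniform error of 4 codes on 8-bit video works out to about 36dB:
print(round(psnr([100, 120, 140, 160], [104, 116, 144, 156]), 1))  # 36.1
```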

With regards to SSIM, there remains a gap from 1 even at the lowest QPs. All speeds are somewhat similar, with a few of the faster speeds noticeably peeling away at higher QPs. An SSIM of 0.99 is achieved at QP=136, 0.995 at QP=104 and 0.999 at QP=24.

The VMAF scores show some similarity with SSIM scores, showing relatively little difference between encoding speeds especially at lower QPs. A VMAF of 99 is achieved at QP=72 and a VMAF of 99.5 at QP=48.

Best Video Quality vs QP

Looking at only encoding speed “0” gives the following results.

There is good scaling of PSNR between best and worst frames for QP=32 and above. Below that, the encoder appears to expend much of its effort improving the worst frames to lift overall quality.

In SSIM, the trend is very similar to most codecs, although it does seem that worst frame quality takes a bit of a dive around QP=200. Best frame SSIM does peel away from an SSIM of 1 as well, but not by much.

The VMAF result is perhaps more illustrative of codec behaviour, with QP=96 seemingly being a break-point for worst frame results. Below QP=96, worst frames tend to be rather stable above 90 and improving only slowly. Above QP=96, the worst frame quality dives linearly to QP=176, where a slightly slower trend takes over at a VMAF of 40.

Based on these results, I would say that the “sweet” QP range is around 32 to 96.

Bitrate vs QP at Various Speeds

Bitrate-wise, as a QP=0 encode was performed, the bitrate chart is a bit out of proportion, showing the nearly exponential rise in bitrate this requires. Nevertheless, there are subtle differences between encode speeds, with faster speeds taking more bitrate (i.e. being less bitrate-efficient).

A zoomed-in version of the above graph provides more detail, showing a bump at QP=96; the codec seems to be doing something different above this point, as the VMAF results also suggested.

Video Quality vs Bitrate at Various Speeds + Cross-Codec

The fun part is to compare across codecs.

On the PSNR metric, librav1e lags far behind its contemporaries libaom-av1 and libsvtav1. At its best compression efficiency, it is much closer to libx265, edging it out slightly. At its fastest, it is closer to the worse H.265 hardware encoders and only slightly ahead of libx264, which is a little disappointing considering it is an AV1 encoder.

Looking at SSIM, its best seems similarly matched with libx265, with the fastest speeds matching av1_amf, which, in itself, is not a particularly good AV1 hardware encoder. The other speeds lie somewhere in between, trading blows with various hardware AV1 and H.265 encoders.

Perhaps VMAF is our best judge, and it places librav1e behind libx265, which is a little surprising given how close they were in the two graphs above. There is a cross-over with av1_nvenc, which performs better at higher bitrates than librav1e. The fastest setting is a close match to hevc_amf in this metric as well.

Image Comparison

The proof is in the images below.

  • Frame 844
    • QP=32 starts to make a change to the gradient background, smoothing out much of the grain. Above QP=80, sharp edge artifacts seem to appear. Hair detail beginning to be smoothed at QP=48~64.
  • Frame 1347
    • Background and fine foreground hair detail smudged at QP=48~64. Foreground hair takes a slightly splotchy appearance at QP=40~48.
  • Frame 3733
    • Significant softening of moving hair detail at QP=32. Not a very sharp result all-round.
  • Frame 4415
    • Fine hair details noticeably softened at QP=32, altered by QP=40~48. Crown hair takes a detail-less appearance at QP=32~40.

From this, I would say that QP=24~40 might be a good constraint for decent encodes.

Conclusion

Based on these results, librav1e certainly doesn’t deliver headline-grabbing AV1 compression efficiencies. This is not surprising, as it seems to prioritise speed and safety (being coded in Rust), reaching more of an H.265 (HEVC) level of compression efficiency with an AV1 bitstream. Whether this is actually useful in real terms is debatable; sticking with an older codec that provides the same level of quality may be the better idea, purely due to lower computational demands for decoding and broader compatibility, even if the encoding might take a little longer.

Nevertheless, I would say that a sensible range of QPs is 32 to 96, as the bitrate rises exponentially below 32 and the worst-frame results fall off linearly past 96. But based on a visual analysis, good quality seems to be maintained in the QP=24 (25.4Mbit/s) ~ 40 (14.4Mbit/s) range.

While all the encoding and testing has now been done, I hope to wrap this up with a conclusion posting as well, hopefully featuring some nicer, less-cluttered graphs with more range. Stay tuned!

Encode Stats (for this Part)
Total Number of Encodings: 352
Total Size: 142,923,819,986 bytes

Encode Stats (for all Parts)
Total Number of Encodings: 7,630
Total Size: 2,431,756,061,769 bytes

This post is part of a series. See the main index at the bottom of Part 0.


Posted in Computing

Project: Generate High-Quality Industrial Fire/Alarm Sounder Audio with Python

I can’t believe how fast 2024 is going by … Australia Day long weekend has already come and gone. Unfortunately for me, I spent it being rather sick with a flu-like illness which I’m still recovering from … but that didn’t stop me from undertaking this rather odd project befitting my rather odd interests.

Motivation

I was browsing through the catalog of element14 on the lookout for specials and things that might be helpful or interesting in my electronics hobby. Often this means browsing through their Connect Magazine, where I have encountered the “Roshni LP Sounder” several times before.

In fact, it’s on some discount right now (note the “01” suffix in the order code), although I will note it was cheaper in the past. This shape is rather recognisable – I’ve seen it in some of my workplace buildings before!

By now you’ve probably realised – that’s a fire sounder and you’d be absolutely correct. This is the thing that makes those noises that warn you to prepare to evacuate and (should all else fail) to actually evacuate the building. In Australia, it’s customary to describe this in WH&S documentation as “beep-beep” and “slow-whoop” which I find somewhat inadequate, although I have heard rather muffled recordings in some interactive multimedia induction courses as well.

But wait, there’s more! While we usually don’t get to hear those two sounds short of being involved in a fire drill or an actual fire, this unit actually has a repertoire of 32 tones. Now, I could just buy this unit for $45.40 today, but that’s not inexpensive for a hobby interest. I wouldn’t really have a genuine use for it beyond occasionally scaring the heck out of my neighbours and deafening myself over time with those sweet tones.

Sounds are powerful and sometimes, even beautiful. Functional sounds like these are rarely celebrated, nor are they really noticed. They become part of our landscape, for our “occasional” hearing. But these sounds were likely designed with purpose – to cut through the noise, to be clear, salient, and to provoke a sense of urgency and unease. They were designed to save lives, perhaps draw attention to something, or in some cases, to repel vandals.

I wanted to hear the sounds that these sounders (and others) can make, in as good a fidelity as I could, preferably without needing to buy them. Having lived in Australia most of my life, I can really only say I’ve heard fewer than a handful of them. The units also have a long heritage, with rather simple-to-generate tone sequences that have a distinct, harsher, hard-keyed character that seems lost in some of the newer “voice” sounders with MP3 playback capability. There are also legacy tones, no longer part of current standards, which may be worth hearing again. I decided to do some research and call upon good ol’ math and Python to make it a reality.

Research

Sounders are used in a variety of applications including fire safety, burglar alarms and industrial process audio alerting roles. The tones used in fire safety are often governed by national and international standards which are numerous and rarely freely accessible.

Rather than pursue that route, I decided to look at what the sounder manufacturers had to offer. The market appears to be dominated by Eaton, of which Fulleon is a subsidiary, with sub-brands such as Asserta, Roshni, Squashni and X10; and by Klaxon, which has the sub-brands Sonos (not the Wi-Fi speaker company) and Nexus. I did discover a few other companies making sounders, but most of them are “voice” sounders (i.e. pure audio) and not tone-based.

Thankfully, it turns out the datasheets and installer manuals for a range of these sounders provide some specifications about the tones they can emit – here are the most useful tables I found:

Looking closer, it seems the Asserta’s 42-tone library is fully covered by the X10’s 102-tone library. The 32-tone libraries of the Roshni and Sonos are mostly, but not entirely, covered by the X10 and, similarly, the 60-tone Nexus also has some differences.

Some of these differences appear to be “genuine” while others could be down to poor notation. For example, does x/y Hz at 1Hz mean 500ms of each or 1000ms of each? Is the assumption of a 50/50 duty cycle valid? What about phase: which tone comes first? The Asserta datasheet seems to go against the others, especially regarding which tone comes first. I suspect this is down to “synchronisation” – the tone which comes first needs to be standardised to ensure all sounders go off in the same pattern at the same time to prevent confusion – but the way it is noted in the datasheet makes it somewhat ambiguous as to which duration is for which tone, so I decided to avoid using it entirely.

It is also noted that there sometimes appear to be duplicated tones. This is because of the way some of the older sounders are configured – namely, they are set for their primary tone and a “2nd stage” tone is tied to it. This way, you would configure the Australian pre-evacuate tone (for example) as the primary tone and, when the 2nd-stage input is triggered, the matching Australian evacuate tone would play instead. But because the 2nd-stage tones were not independently configurable, choosing the first tone chose the pair, hence the need for duplication. Later units offered some independent selection, and some modern voice sounders even offer up to five independent stages based on MP3 files!

Before I go forward, I will acknowledge that I did discover Eaton has their own tone library page with video samples of their sounders’ outputs, but I wasn’t too pleased with it. The samples were short, sometimes they weren’t very clear, occasionally they had glitches, and the audio compression really didn’t make me feel like I was “in the room”. I guess I wanted my “lo-fi” tones done “hi-fi”.

Coding Methodology

I decided to make this a weekend project – no nice code, no comments, just pure hackery.

To make it quick and simple, I decided to use Python 3 without anything fancy – no numpy because I probably didn’t need it. I decided to use the inbuilt wave library to write .wav files, math because I’ll be doing a lot of sin and 2π the old-fashioned way, struct to mash the data into the right endianness/length and time just because I wanted to know how long the program took.
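As a sketch of how that stack fits together – the amplitude, frequency and duration choices here are mine for illustration, not lifted from the actual script:

```python
# Minimal single-tone generator using only the modules mentioned above:
# wave, math and struct. Amplitude and frequency choices are illustrative.
import io
import math
import struct
import wave

SAMPLE_RATE = 12000   # Hz, matching the final output files
AMPLITUDE = 0.8       # fraction of 16-bit full scale (an assumption)

def tone_samples(freq, duration):
    n = int(SAMPLE_RATE * duration)
    frames = bytearray()
    for i in range(n):
        value = AMPLITUDE * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
        frames += struct.pack("<h", int(value * 32767))  # 16-bit LE mono
    return bytes(frames)

def write_wav(fileobj, data):
    with wave.open(fileobj, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit
        w.setframerate(SAMPLE_RATE)
        w.writeframes(data)

buf = io.BytesIO()  # an on-disk file works the same way
write_wav(buf, tone_samples(970.0, 0.5))
print(len(buf.getvalue()))  # 44-byte header + 6000 frames x 2 bytes = 12044
```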

I started with simple functions for single tones, then another for interrupted tones. As more complex functions started to grow, I realised some functions were subsets of others, so I converted the simpler functions to call their superset functions with the necessary parameters to generate equivalent output. It wasn’t until sweeps that I had to do some real head-scratching, working out how to make a constant-phase chirp. Then came interrupted sweeps, bidirectional sweeps and T3-tone timing.
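For the constant-phase chirp in particular, one common approach – which may or may not match what the script actually does – is to accumulate phase sample-by-sample rather than evaluating sin(2πf(t)·t) directly:

```python
# Phase-accumulator chirp: frequency ramps linearly while the phase
# stays continuous, avoiding discontinuities mid-sweep. A sketch only;
# not necessarily how the actual script does it.
import math

SAMPLE_RATE = 12000

def chirp(f_start, f_end, duration):
    n = int(SAMPLE_RATE * duration)
    samples = []
    phase = 0.0
    for i in range(n):
        f = f_start + (f_end - f_start) * i / n  # linear frequency ramp
        samples.append(math.sin(phase))
        phase += 2 * math.pi * f / SAMPLE_RATE   # integrate frequency
        phase %= 2 * math.pi                     # keep the value bounded
    return samples

up = chirp(500.0, 1200.0, 0.25)
print(len(up))  # 3000 samples for a 250ms sweep at 12kHz
```

A bidirectional sweep falls out naturally by following an up-chirp with `chirp(f_end, f_start, duration)`, and an interrupted sweep by gating segments to silence.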

Once I had the generation side working, I had to code-in all the tone tables and then make some compromises. These included:

  • If it wasn’t a tone specified with a set of frequencies and times, I wasn’t going to generate it. Sorry, no “simulated bell” or “hooter”.
  • If it had x/y Hz alternating at z Hz, then it is assumed a 50/50 duty cycle is taken with x as the leading tone and where a cycle involves the length of time for both tones.
  • If it had x-y Hz chirp at z ms, then it is assumed the time represents the time for the full chirp. In the case of a bidirectional chirp, then this is the time for just the up-sweep or just the down-sweep with a full cycle taking 2*z ms.
  • Phase continuity at change points is not guaranteed. This can cause subtle “clicks” in the output audio – but it does add the harsh character that I do like.
  • There is no envelope to tones coming on and off – they’re either there or they aren’t. This can make transitions particularly harsh – again, something I do like as it makes it clean and snappy.
  • There is no distortion that is introduced by the sounder mechanism and transducer, which is going to be slightly less buzzy or distorted than in real settings. There are also no environmental effects. Perhaps some effect filters can add this in afterwards, but I like my results “mathematically correct” for now.
  • Generation is based on sin rather than cos, which means the 4kHz tone will render as pure silence at a sample rate of 8kHz, as every sample is taken at a zero-crossing. This is not a bug: the Nyquist rate is the absolute minimum sample rate assuming ideal conditions, and in this case the phase alignment is completely out of its favour.
  • The code is written for 16-bit Mono WAV only at this time, although sample rate can be changed.
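The sin-versus-cos point in the list above is easy to demonstrate: at exactly the Nyquist frequency, a sin-based generator samples every zero-crossing.

```python
# Demonstrating the sin-vs-cos point: a sin-based 4kHz tone sampled at
# 8kHz lands every sample on a zero-crossing, rendering pure silence.
import math

SAMPLE_RATE = 8000
FREQ = 4000  # exactly the Nyquist frequency

samples = [math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE) for i in range(16)]
# The argument reduces to pi * i, and sin(pi * i) is zero for integer i.
print(all(abs(s) < 1e-9 for s in samples))  # True: all (near) zero
```

A cos-based generator would instead sample the peaks, alternating between +1 and -1 at full amplitude.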

One thing I did notice was that my initial code was very slow. It turns out all those examples using wave.writeframes on single samples are a bad idea, as a lot of time is spent doing conversions, header updates and more. I eventually followed this tip to build up the data in memory first as a byte-string and then pass it through in one fell swoop, which increased execution speed by 18x. But the code provided didn’t actually work, because Python is now pickier about types: the empty string that .join is called on needs to be a byte-string (i.e. b"" rather than "").
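The fix can be sketched as follows; the tone parameters are illustrative, and only the structure (one `b"".join()` plus a single `writeframes()` call) is the point:

```python
# Sketch of the speed fix described above: pack every sample into one
# byte-string with b"".join() and hand it to writeframes() in a single
# call, instead of calling writeframes() once per sample.
import io
import math
import struct
import wave

SAMPLE_RATE = 12000

def render(freq, duration):
    n = int(SAMPLE_RATE * duration)
    packed = [struct.pack("<h",
                          int(30000 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)))
              for i in range(n)]
    return b"".join(packed)  # note b"": joining bytes onto a str raises TypeError

buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SAMPLE_RATE)
    w.writeframes(render(520.0, 1.0))  # one call, one header update

print(len(buf.getvalue()))  # 44 + 12000 x 2 = 24044 bytes
```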

The code is listed in the appendix at the end of the blog and can be downloaded for your own tinkering and generation of tones. As promised, it is ugly but … it did something! Please use responsibly – these tones may cause unnecessary alarm if reproduced in the wrong scenario, especially in public!

Result

In the end, the code runs and managed to generate all the WAV files for the listed tone tables, with 45s duration each, at 12000Hz Mono 16-bit resolution in a total of 88s on my laptop. That’s quite a bit quicker than I originally expected.

The output tones are nice and clean, with sharp transitions between on and off, just how I like it. Why can’t everything be nice and clean like this?

Unique Audio Tone Table

Provided below are 45-second, 12kHz, 16-bit WAV files of each of the unique tones out of all the programmed tone tables. Where tones were common to multiple tables, only one copy is listed below.

Disclaimer: These files are generated based on my interpretation of the datasheets of these sounder products and are not guaranteed to be correct or compliant to any standards which are named. These files are not endorsed by any manufacturer nor have they been cross-checked with an actual sounder product. The names of any manufacturers and products are only indicated for cross-reference with their tone tables. These files should never be used in lieu of a sounder and are presented only for research interests.

Do not use these files irresponsibly – for example, to cause confusion or panic by playing in public venues. These files are strictly for private enjoyment only!

Conclusion

It may have been a few hours here and there from my flu-riddled weekend, but now I have a script which cooks me up a nice set of WAV files containing tones from the market’s leading sounders. In exchange, I get to hear in crisp detail what the sound of each tone setting would be, assuming I have interpreted the data in the datasheets correctly (and assuming it was notated properly in the first place). Now I have travelled the world (in terms of sounder tones) without having left my desk (nor paid to buy any units).

It was a good reminder of the basics of DSP and tone generation, but also of how to work with WAV files in Python. It was nice to see the speed-up from working in memory and building up a byte-string instead of calling a write for each sample. Not having documented the code, I did a lot of mental gymnastics with the variables and per-sample counting, so the code is a mess, but I still managed to get there in the end. Perhaps doing this made my flu headache worse …

Anyway, I hope you enjoyed the tones … perhaps it’s something you never thought you needed.

Bonus – Things I’ve Been Quietly Working On …

If you’ve gotten this far through the post, thanks for sticking around! For those waiting for the next installment of the Video Codec Round-Up, don’t worry – the wait will soon be over, just as soon as my librav1e samples finish encoding (any day now).

In the meantime, I’ve been rushing to deliver my entries into the Experimenting with Flyback Transformers Design Challenge over at element14. This time, it’s not going to be an award-winning performance owing to the time constraints, but if you’re so inclined, you can read the blogs here:

I’ve also posted about some prizes that I won from the previous Experimenting with Supercapacitors Design Challenge, including the laptop that’s been helping out on my Video Codec Round-Up and a surprisingly accurate Multicomp Pro MP710259 DC Electronic Load. The latter is actually a rebadged Korad KEL103 which is known for being very economically priced and through the misfortune of one being damaged in shipping, I’ve been able to tear it apart for inspection and convert it into the fortune of having two working units.

Once this is all done … I may finally have a chance to deliver on the long-promised Rohde & Schwarz MXO4 Oscilloscope review which has been very much delayed. Hopefully this happens before I leave for my long overseas break!


Posted in Audio, Computing, Electronics

Video Codec Round-Up 2023 – Part 16: libsvtav1 (Scalable Video Technology for AV1)

While I’m now out of hardware AV1 encoders I can test, I still have some software encoders worthy of my round-up. In fact, from the outset, I had high hopes for libsvtav1.

The SVT-AV1 codec began life with Intel in partnership with Netflix, and was adopted by the Alliance for Open Media Software Implementation Working Group in August 2020 to carry on the group’s mission. The name, “scalable video technology”, suggests the capability of the codec to adapt across many different use-case scenarios, from live broadcast to high-quality transcode. Accordingly, it is reportedly faster than libaom-av1, although this was not tested in this study. In the ffmpeg implementation, it adopts CRF encoding by default.

The encoder was passed the following settings:

-crf [crf] -preset [preset]

I decided to stick with providing only a CRF value between 12 and 63 (which I believe to be a sane range) and a preset (or speed) value between 0 and 13, spanning the complete range of presets. Encodes were performed on a mixture of platforms: Windows on an Intel Core i7-8650U and i7-1370P (for presets 0 and 1), and Linux on an AMD EPYC 7551, courtesy of Oracle Cloud’s Free Tier offering of a pair of x86-64 VMs which I provisioned with swap memory, for the bulk of the faster encodes (preset 2 and greater). The resulting encoded files were tagged as [email protected] profile.
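The post doesn’t spell out how the quality metrics were collected. A plausible sketch using ffmpeg’s psnr, ssim and libvmaf filters – where the file names and filter options are my assumptions, not the actual measurement pipeline – would look like this:

```python
# Hedged sketch of gathering per-encode metrics with ffmpeg's psnr,
# ssim and libvmaf filters. File names and filter options here are
# assumptions, not the actual measurement pipeline of this round-up.
REFERENCE = "source.y4m"  # hypothetical reference file

def metric_command(encoded, metric):
    filters = {
        "psnr": "psnr=stats_file=psnr.log",
        "ssim": "ssim=stats_file=ssim.log",
        "vmaf": "libvmaf=log_path=vmaf.json:log_fmt=json",
    }
    # First input is the distorted file, second is the reference.
    return ["ffmpeg", "-i", encoded, "-i", REFERENCE,
            "-lavfi", filters[metric], "-f", "null", "-"]

cmd = metric_command("svtav1_crf25_p0.mkv", "vmaf")
print(" ".join(cmd))
```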

Video Quality vs CRF at Various Presets

Let’s see how encoding at the fourteen different presets affects video quality.

Based on PSNR, a big spread in quality metrics at a given CRF is observed depending on the preset. Some presets cluster together at one end or the other, but Presets 0-4 stay fairly close together on the high-quality side and are most advisable for transcoding. Significant PSNR penalties can be seen for the faster presets.

A PSNR of 45dB is achieved at CRF=22.5, and a PSNR of 40dB is achieved at CRF=45 under the best quality preset of 0.

A similar story is seen with SSIM. Peak SSIM is somewhat limited, perhaps because encoding only went down to CRF=12. An SSIM of 0.99 is achieved at CRF=49, 0.995 at CRF=38.5 and 0.999 at CRF=12.5.

The VMAF results are consistent with the above. A VMAF of 99 is achieved at CRF=33, 99.5 at CRF=24.5. This suggests perhaps CRF=12 to CRF=25 is a good range to use for most applications.

Best Video Quality vs CRF

Now, looking only at the results for the best preset.

It appears that PSNR scales with CRF quite nicely, both best and worst frames showing a nearly linear degree of change with CRF.

The SSIM metric also shows a normal degradation with increasing CRF, with the max SSIM also degrading slightly at higher CRFs.

The worst frame VMAF scores hit a saturation point below CRF=30 where only small improvements are had. Above CRF=30, worst frame results degrade rather linearly. Best frame results are pegged at 100 while the average tails off in a consistent manner with other codecs. This suggests that keeping encodes under CRF=30 is preferable.

Bitrate vs CRF at Various Presets

The bitrate can be seen to vary as a function of the preset chosen, sometimes quite markedly, although it is not always the case that a faster (higher-numbered) preset produces a larger file. Combined with the previous results, it seems that choosing a different preset will not only change the quality at a given CRF but also the bitrate at that CRF, for a “double-whammy”. This is best illustrated in the following graphs, which use bitrate as the comparison metric.

Video Quality vs Bitrate at Various Presets + Cross-Codec

The important part is to see how the compression efficiency compares with other codecs on a bitrate basis.

True to its word, the “scalable” part of libsvtav1 is on display. Presets of 6 or less offer clear benefits over libx265, with preset 0 coming very close to libaom-av1 and only the slimmest of margins separating the two on PSNR. Faster presets tend to perform somewhat better at low bitrates but poorly at higher bitrates, with the fastest seemingly reaching libxvid levels of compression efficiency at high bitrates. While I did not measure encoding speed, this tunability suggests the codec spans a wide range of speed/quality trade-offs, to the point where it may make sense to use AV1 for “everything” rather than to pick a codec based on a particular use case.

On the SSIM benchmark, a similar result is shown, with clear advantages for Preset 6 or slower and Preset 0 reaching very-close-to-libaom-av1 results, even slightly edging it out at the higher end. Faster presets show worse performance; however, on SSIM it would seem that Preset 13 performs more at the h264_amf level at high bitrates rather than the libvpx or libxvid level, but excels at low bitrates, where it edges out hevc_amf.

Using VMAF as the metric – while also considering Netflix’s involvement in both SVT-AV1 and VMAF – a very similar result is reached. It still seems that libaom-av1 is slightly better at the best encoding preset, by a hairline margin, and that Preset 6 or less is really needed for a convincing benefit over libx265. The fastest preset acts a bit like libxvid at higher bitrates, while achieving hevc_amf-to-hevc_qsv ballpark performance at low bitrates.

While its best performance is a hair away from libaom-av1, the scalable nature and improved speed of this codec make it much more usable in practical scenarios. Part of the lost performance may be due to the downsides of tile/row-based multithreading, although I am not completely sure this is the case.

Image Comparison

Time to look at some images to see the results visually.

  • Frame 844
    • At CRF=12, most of the background gradient has lost its pattern detail, likely due to denoising, however, some blockiness from the source near the top right is preserved. Hair detail is being lost CRF=20~28. Overall, well behaved with no excessive ringing around sharp edges. Very watchable through to CRF=20.
  • Frame 1347
    • Hair in background noticeably softened at CRF=14, lost by CRF=28. Fine foreground hair detail is a bit softened at CRF=12, but main details only start to get a little smoothed and patchy by CRF=20.
  • Frame 3733
    • Hair detail noticeably smoothed and less sharp by CRF=14~16, but because of high motion, is less noticeable in playback.
  • Frame 4415
    • Fine hair detail seems less sharp at CRF=18, to the point of being changed entirely by CRF=28~36. Hair crown highlight detail is somewhat altered even at CRF=12, likely due to de-noising, starting to look artificial at around CRF=18~20.

It is hard to draw a firm conclusion, as the results do show some losses, but any artifacts are not visually distracting. A direct comparison with libaom-av1 seems neck-and-neck to me, with some back and forth, likely because encoder mode decisions for a given frame may mean we are comparing a B-frame of one with an I-frame of another, which wouldn’t necessarily be a fair comparison. That being said, the fact it is this close is a good result overall, and it is very similar to libaom-av1 in suggesting CRF=14~16 for more critical uses and up to CRF=20 for more ordinary uses.

Conclusion

The libsvtav1 encoder seems to be the preferred AV1 software encoder going forward. Its scalability is definitely on show, with 14 different presets on offer. Presets 0-6 offer compression-efficiency advantages over libx265, with preset 0 offering performance very similar to libaom-av1, albeit a hair behind, likely trading this off for faster encoding. Because of the breadth of its speed/performance trade-offs, it has the potential to make AV1 suitable for all use cases, which is quite exciting. The faster presets offer less compression efficiency, especially at higher bitrates, but lower bitrates remain relatively less affected, which is advantageous given that video compression is usually applied in bitrate-constrained scenarios.

Nevertheless, it seems from the visual that a similar CRF selection to libaom-av1 is suitable, with CRF=14~16 (13.5-15.7Mbit/s) for more critical uses (although denoising is still intrusive regarding fine grain) while up to CRF=20 (10.5Mbit/s) could be used for more ordinary applications. The numerical metrics suggest even CRF=25 (8.6Mbit/s) could be acceptable, but I guess that depends on how discerning the viewer is. I did not test values of CRF <12, but these may be more applicable where very high quality encodes are required.

There’s still one more codec to test and this one is unlikely to be anything too special – but let’s see in the next part what librav1e has to offer. I’ve heard it’s not pretty … but because of its rather slow (by default) encoding which is not multi-threaded, it’ll have to wait at least another week before the results are available. Stay tuned!

Encode Stats (for this Part)
Total Number of Encodings: 1,442
Total Size: 255,518,853,670 bytes

Encode Stats (for all Parts up to now)
Total Number of Encodings: 7,278
Total Size: 2,288,832,241,783 bytes

This post is part of a series. See the main index at the bottom of Part 0.


Posted in Computing