PSA: SSDs with YMTC Flash Prone to Failure? Check Your SSDs!

This is a bit of an out-of-cycle update: as the bargain-hunting season starts to ramp up and flash memory prices seem to have bottomed out, people may be on the lookout for SSDs. I am certainly no stranger to a good deal, but it seems that my optimism about SSDs is all but lost due to some recent experiences.

TL;DR – If you have an SSD built using YMTC flash memory, you should probably check its data integrity immediately and keep an eye on it. Many low-end models appear to be released with a mixture of different parts – the only way to know whether you have YMTC flash is either to examine the chips, or to know the controller and retrieve the chip IDs using the appropriate diagnostic tool.
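If you want to do an integrity check yourself, here is a minimal sketch of the idea in Python – my own illustration, not any standard tool, and the chunk size and 50MB/s threshold are arbitrary choices. It reads the entire device or file and flags regions that are slow or unreadable, since lengthy error-correction retries show up as slowdowns long before the data is actually lost:

```python
import os
import time

def surface_scan(path, chunk_mib=16, slow_mbps=50):
    """Read a device or file end to end, timing each chunk.

    Returns (slow_regions, errors): offsets that read below the
    speed threshold, and offsets that errored out entirely. Slow
    regions are an early warning -- the controller may still return
    correct data, but only after heavy error-correction retries.
    """
    chunk = chunk_mib * 1024 * 1024
    slow, errors = [], []
    fd = os.open(path, os.O_RDONLY)
    try:
        offset = 0
        while True:
            start = time.monotonic()
            try:
                data = os.read(fd, chunk)
            except OSError as e:
                errors.append((offset, e.errno))
                break
            if not data:
                break  # end of device/file
            mbps = (len(data) / (1024 * 1024)) / max(time.monotonic() - start, 1e-9)
            if mbps < slow_mbps:
                slow.append((offset, round(mbps, 1)))
            offset += len(data)
    finally:
        os.close(fd)
    return slow, errors
```

Point it at a block device (with sufficient privileges) or a large file; anything that comes back in the slow list deserves a closer look alongside the SMART data.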

Background

I’ve been an SSD user since the early 2010s, starting off with an OCZ Vertex 3 – a Sandforce-based SATA SSD reputed for its unreliability and firmware bugs. Yet that SSD is still with me today and still boots my Phenom II X6 1090T test rig, clocking over 28,000 hours. A second drive I purchased for a friend is still alive too.

Since then, almost 100 SSDs have passed through my hands, and sporadic monitoring suggests that I had accrued a total of around six bad blocks across the fleet up until last year, with zero data loss. In the same span of time, I’ve had to decommission about ten hard disks from a smaller pool due to signs of failure, but with very minimal data loss overall. I definitely enjoyed the reliability of SSDs, regardless of manufacturer.

The tables turned last year, when Samsung put egg on my face with their 870 EVO, which I had to apologise for recommending as four drives failed within the same week and data loss was incurred. Others suffered similar issues, rumoured to be either firmware-related or due to poor-quality flash. Even a leader in the market was not above failures – perhaps due to the pressure of COVID-19-related chip shortages, or the rapidly falling price of NAND putting pressure on yields. The unimpeachable SSD was beginning to falter.

The race to the bottom with regard to pricing has resulted in some options reaching close to AU$50/TB, a price that one could only dream of in years past. My first SSD, by comparison, was around AU$2300/TB. But it seems something had to give …

Patriot Burst Elite with Amnesia …

I reviewed the Patriot Burst Elite 1.92TB SATA SSD back in March this year. After the review, I left the H2testW test patterns on the drive and promptly forgot about it, leaving it in the case of the test computer until recently, when I needed a “2TB-class” SSD for a project.

I expected to be able to shove the drive into a USB enclosure, wipe the drive and get going … but it had other plans.

Instead, it was busy all the time, providing only short bursts of data on access. The data was written only a little over six months ago!

The Illness

Attempting a read doesn’t get far – 7.1MB/s initially and soon erroring out. Was this just an issue with a few sectors? Maybe an incompatibility with the enclosure?

According to the SMART data, it had no reported uncorrectable errors and it was not reallocating sectors either. It seemed healthy and happy.

I promptly restored the drive into the test computer, which had a dead CMOS battery and somehow “lost” the ability to run SATAIII on the chipset, reverting to SATAII rates on all ports. Nevertheless …

… no dice. The drive had issues. It would start off returning data until it didn’t, dropping off the bus entirely.

A little hot-plugging and it was back online … so let’s just try to overwrite the data on the drive. Usually a drive that can’t read might accept writes happily and that would “restore” it to usability … but no. It was slow initially, just 4.4MB/s until it fell off the bus again.

Now, it had a tendency of not appearing properly at all, being “not ready”.

After a few attempts, even its ID stopped appearing correctly. The drive is very sick and yet, its SMART data seemed to claim otherwise, matching the above but with now two uncorrectable read errors rather than zero.

Salvaging the Pieces

Unfortunately, the drive was disassembled in the process of the review, voiding the warranty, as I had naively assumed that having passed the barrage of commissioning tests, it would behave like all my other SSDs which have been extremely reliable. Instead, this will become a failure that I will have to “eat”.

The next logical step for me was to try to recover it with a secure erase. After booting into partedmagic, getting the drive to detect was not easy and required a few hot-plug attempts.

Once detected, I tried issuing the secure erase command …

… but with all the warnings of bad things that could happen …

… it somehow failed. Such issues can sometimes end up bricking a drive completely. Trying again was no good, so I reluctantly power cycled the drive and tried again. This time, I succeeded!
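For reference, the same erase can be done by hand with hdparm on Linux – I believe partedmagic’s tool wraps the same ATA Security commands, though that’s an assumption. The device name here is just an example, and this destroys all data, so triple-check the target; a “frozen” security state (common after boot) may first need a suspend/resume or hot-plug, which is likely why my hot-plugging helped:

```shell
# Check the drive's security state -- "not frozen" is required
hdparm -I /dev/sdX | grep -A8 Security

# Set a temporary user password (a prerequisite for the erase)
hdparm --user-master u --security-set-pass p /dev/sdX

# Issue the erase -- destroys ALL data, may take minutes to hours
hdparm --user-master u --security-erase p /dev/sdX
```

If the erase fails or is interrupted, the drive can be left password-locked, which is one way these attempts end in a brick.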

Making it an Experiment

Now that the drive is secure erased, will it write and read as normal?

The answer, surprisingly, is yes. The SATAII rates are down to the motherboard issue, but the drive seems workable again.

The SMART statistics didn’t change much either except for 0xF3/0xF5 – noting that there’s been quite a few unexpected power losses due to my desperate hot plugging/unplugging. The spare block count remains the same, so in spite of the whole ordeal, the flash was still deemed “fine” by the controller.

H2testW shows data integrity is just fine as well when using the USB 3.0 enclosure that I initially was using.
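H2testW’s write-then-verify approach is easy to approximate: fill the space with a deterministic pseudorandom pattern, come back later, regenerate the same stream and compare. A rough file-based sketch of my own (not H2testW’s actual algorithm – the SHA-256 counter-mode generator and 64KiB block size are just illustrative choices):

```python
import hashlib

BLOCK = 64 * 1024  # comparison granularity

def _block(seed, offset, length):
    """Deterministic pseudorandom bytes for one block (SHA-256 counter mode)."""
    out = bytearray()
    i = 0
    while len(out) < length:
        out += hashlib.sha256(f"{seed}:{offset}:{i}".encode()).digest()
        i += 1
    return bytes(out[:length])

def write_pattern(path, seed, size):
    """Fill a file with a reproducible test pattern."""
    with open(path, "wb") as f:
        for off in range(0, size, BLOCK):
            f.write(_block(seed, off, min(BLOCK, size - off)))

def verify_pattern(path, seed):
    """Return offsets of blocks whose contents no longer match the pattern."""
    bad = []
    with open(path, "rb") as f:
        off = 0
        while True:
            data = f.read(BLOCK)
            if not data:
                break
            if data != _block(seed, off, len(data)):
                bad.append(off)
            off += len(data)
    return bad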

Also via the enclosure, the read speeds were pretty much as expected, with the unsightly dip, as the drive was pretty much packed to the brim.

Perhaps I got lucky that the data loss had not been severe enough to corrupt the drive’s internal metadata and firmware, otherwise I wouldn’t even have been able to get this far. That being said, this is not a device to be trusted … so instead, it’s going to be a bit of an experiment in seeing how quickly data is lost.

Giving it a Week

From what I know about flash memory, JEDEC standards stipulate an end-of-life data retention time of one year at 30°C for client drives. Many of my SSDs in older machines go without a boot-up for a year here and there, and I have yet to lose any data. So surely, one week is not catastrophic, right?

I came back in a week and ran the throughput test on the drive –

Initial signs were not good as access was slow and seemed to occur in bursts with periods of long “waits”.

The head of the drive seems to fare poorly, but the block between 768GB and 1728GB seems to fare much better. None of the drive reaches the speeds that it did when freshly written.

This suggests to me that either the flash itself is leaking charge and corrupting the data badly, with error correction consuming time and retries to recover it (the data has not been entirely lost, yet), or there’s an issue with the controller and its firmware settings for this flash memory that causes poor performance as the data “ages”. Either way, to see such a visible effect after one week of unpowered storage is not a good sign.

While Patriot has never been my “go-to” brand for much, their products usually aren’t terrible. But this one probably takes the cake.

Fanxiang S501 Falls Off The Bus Forever!

So, what might be the cause? If it’s bad flash, well, I have another YMTC product right here in the form of the Fanxiang S501. That one is only four and a half months old and is in service in my desktop, which gets powered up at least twice weekly. Surely that’s working just fine … so I decided to check.

Uhh. What the …

No. The Fanxiang is also sick. It was reporting errors all over the place and running slowly. Before running the test, the SMART data looked just fine.

As the test was running, I saw the Available Spare attribute count down from 100 … to 89. This is a sign that blocks are being reallocated.

After a little more, it reached 70. The threshold is 10, so this was not supposed to be a “critical” situation just yet … but one second later …

… we’ve lost the patient. It fell off the bus, never to be seen again. I couldn’t get it detected regardless of using a USB enclosure or various PCIe slots – it just won’t come up. The only obvious jumper pads put the unit into a download mode where the red LED blinks but no action on the PCIe bus.

This was frustrating, because this drive was one of my scratch drives where work-in-progress is stored. The majority of it, save for a couple of gigabytes, was restored from backup to a Kingston NV2 1TB for the time being. But the other data is well and truly lost for now, making this a remarkably bad investment when considering time and effort.

This also brings to light another issue regarding data recovery from SSDs – the act of reading can be destructive in a sense … so when issues are spotted, perhaps it’s too late to salvage your data. But perhaps more often than not, the failure is “sudden death”, so this point may be moot.

Thankfully, after a week of repeatedly contacting the seller, they agreed to provide me a full refund which I’ve put towards another Chinese-made SSD (from Lexar) that hopefully will have a bit more longevity. If not, then I’ll chalk this up to yet another experiment …

Conclusion

It was my sincere hope that, with YMTC’s entry into the flash market and their qualification in SSD devices in the Chinese domestic market and beyond, they would offer an even lower-cost alternative that would serve to drive competition in the flash space. Apple were considering them as a supplier at one stage, a deal nixed only due to political issues.

As my track record with YMTC flash now stands at two failures from two drives, from different vendors and utilising different controllers, I am inclined to distrust YMTC-based devices. I know full well that two samples is not a large number in the grand scheme of things, and I could just be doubly unlucky, hitting the jackpot twice – but the probability of that is arguably slim. Or perhaps the manufacturers were taking liberties with the grade of flash memory they chose to equip their SSDs with.

As a result, erring on the side of caution, I no longer recommend YMTC-based SSDs regardless of the low price, and would encourage anyone with YMTC-based devices to keep an eye on data integrity, especially if they are left unpowered for some time. Depending on the controller, having it powered regularly is no guarantee either: the firmware may not automatically “scrub” the data and re-write blocks that are faulty or “weak”.
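Absent controller-side scrubbing, the only user-level mitigation I can think of is a periodic manual refresh – read the data back and rewrite it so every cell is reprogrammed, at the cost of program/erase cycles. A file-level sketch of the idea (my own, assuming the filesystem allocates fresh blocks for the copy, which is typical but not guaranteed):

```python
import os
import shutil

def refresh_file(path):
    """Rewrite a file's contents onto freshly-programmed flash cells.

    Copies to a temporary file, syncs it, then atomically replaces the
    original, so a power loss mid-refresh leaves the old copy intact.
    """
    tmp = path + ".refresh"
    with open(path, "rb") as src, open(tmp, "wb") as dst:
        shutil.copyfileobj(src, dst)
        dst.flush()
        os.fsync(dst.fileno())   # ensure data is on the medium
    shutil.copystat(path, tmp)   # preserve timestamps/permissions
    os.replace(tmp, path)        # atomic swap on POSIX
```

Run over a whole tree on some schedule, this would approximate what a scrubbing firmware does internally – though it obviously cannot refresh the drive’s own metadata, which is where these failures may actually lie.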

While the data loss from having 3TB of SSD storage going offline only amounts to a couple of gigabytes, the time and effort this cost me was not worth the savings. However, as someone interested in technology, I still feel that it was a bit of an interesting journey. Nevertheless, it seems the Patriot Burst Elite will now become a bit of a retention time experiment – I’ll check in on it on a weekly basis to see just how much it struggles …

But for everyone else – check your YMTC drives.
You might be in for a nasty surprise.

Epilogue

That being said, this wake-up call caused me to audit all the SSDs within arm’s reach for data integrity. So far, I have found zero unreadable sectors, although minor slowdowns on some models, amongst:

  • OCZ Vertex 120GB
  • 2 x Fujitsu 120GB (Memoright rebadge)
  • Intel 730-series 240GB
  • 2 x Kingston SSDNow V300 120GB
  • 2 x Kingston SSDNow V300 240GB
  • Kingston A400 960GB
  • Mushkin Source 2 1TB
  • Kioxia Exceria 960GB
  • 2 x Kingston NV2 1TB
  • Samsung 840 PRO 256GB
  • Samsung 850 EVO 1TB
  • Samsung 970 EVO 1TB
  • SKHynix PC401 256GB
  • Transcend MTS400 256GB
  • Kingmax SME35 240GB
  • Transcend SSD340 256GB
  • Crucial M500 240GB
  • Netac N530S 120GB
  • WD SN730 1TB
  • PNY CS3030 2TB
  • 2 x VAVA Portable SSD 1TB
  • 2 x Samsung T7 Portable SSD 2TB
  • Lexar Professional Workflow 512GB

That’s a total of 29 drives and zero problems, which is what I was expecting. There may still be a few hiding around that I haven’t audited. I have a feeling that the older flash memory is just more reliable by being more conservative (higher endurance, larger lithography) and the race to the bottom in terms of pricing may be letting sub-par products onto the market which are “good enough” on paper, but not in practice. It reminds me of the days of optical discs where, as CD/DVD burning was popularised, low-cost media flooded the market with terrible longevity.

About lui_gough

I'm a bit of a nut for electronics, computing, photography, radio, satellite and other technical hobbies. Click for more about me!

41 Responses to PSA: SSDs with YMTC Flash Prone to Failure? Check Your SSDs!

  1. Leo says:

    Interesting.
    I wonder what one can do to mitigate potential data loss in occasionally used flash drives or SSDs?

    Would simply powering them up yearly be enough? Read every LBA of the drive once a year? Or do we have to re-write every cell once in a while?

    • lui_gough says:

      I suspect this is highly dependent on the controller and firmware, but perhaps a rewrite is safest even if it does cost program-erase cycles.

      The Fanxiang was certainly powered up regularly but that didn’t help. Ironically, as it was struggling, the act of reading it was the nail in the coffin for that drive.

      I believe some older Samsung 840 EVO drives were doing periodic rewrites to guard against loss and slowdown, as it was widely reported that stale data got slow …

      – Gough

      • rasz_pl says:

        Just powering up is enough if the firmware isn’t totally braindead. It will scan the drive and rewrite it on its own. The drive detects deterioration and tries to recover. This is what you saw with those very low read speeds and pauses – the drive was busy trying to rewrite itself while struggling to divide resources.

        I seem to remember commenting about this drive, and checking the original post from March, that’s affirmative. Doesn’t make me happy to be right 🙁
        SSD technology is unsuitable for long-term storage. There is still nothing on the market that would beat magnetic (tape and drives). Personally, I keep stuff on old Hitachi 7K3000/7K4000 drives decommissioned from storage pods. Slow, unsexy, cheap and reliable.

        • lui_gough says:

          I would hope so, but many drives do prioritise low power consumption for portable applications and rewrites cost cycles, so I don’t think it is universal to assume drives will scrub the data and rewrite it in the background.

          That being said, let’s see what the Patriot does next week after this readout. If it was rewriting, then my expectation would be that a full read every week would essentially stop it from losing data. If not, then I’d expect speeds to continue degrading.

          Flash is not ideal for long term, but 2-10 years should still be achieved and be sufficient for many applications. I am still backed by magnetic for bulk archive data mostly, as optical is too slow, unreliable and fiddly.

          – Gough

        • Cherry Crumb Cake says:

          I’ve had Intel X25-E’s that have retained data for years, but at 50nm SLC, it is the most robust flash that was ever built and will never be seen again.
          The only cheap drives I have right now are the Lexar NS100; I hope they aren’t afflicted.
          Recently the Fanxiang 660 was on sale for nothing; turns out I was right to be hesitant.

          • lui_gough says:

            Interesting – yes, I do have Intel X25-M 160GB still in service with my brother’s computer that’s still working fine. My oldest OCZ Vertex 3 is built using Intel Flash … also doing well.

            – Gough

  2. Kerry Lourash says:

    Especially with SSDs so cheap now, it doesn’t make sense to buy any except for brand names like Samsung or Intel. And if a drive gives any trouble at all, replace it ASAP.

  3. TheDragonFire961 says:

    It does seem that the 512 GiB NAND era has been quite troublesome overall, with WD also having some horrific failure rates ostensibly correctable with a firmware update alone. Doesn’t fill me with confidence with any use of SSDs for actual data storage (as an OS drive with backups is fine, of course); whereas the 2 TB/platter CMR era of hard drives seems to be a much better pick for me overall.

  4. Valbai says:

    Quote: “Thankfully, after a week of repeatedly contacting the seller, they agreed to provide me a full refund which I’ve put towards another Chinese-made SSD (from Lexar) that hopefully will have a bit more longevity.”

    Well, considering that Lexar (Longsys) has also started to pack those junk YMTC NANDs to their SSDs recently – bold move! 😉

    • lui_gough says:

      Oh no … well, it’s already shipped. Let me see how I fare in the NAND lottery. Perhaps it will not be YMTC by some miracle …

      – Gough

      • Cherry Crumb Cake says:

        What utility reveals this information?

        • lui_gough says:

          Depends on the controller – you should use jm_id for Maxio.

          – Gough

          • Cherry Crumb Cake says:

            Ah ok, I think I have lucked out.

            Drive: 1(ATA)
            Model: Lexar SSD NS100 2TB
            Fw : SN11035
            Size : 1953514 MB [2048.4 GB]
            Firmware id string[2D0]: MKSSD_200012000110350122,Mar 16 2022,17:02:52,MA1102,EC##C#5C
            Project id string[280] : r:/1102_B47R_DW2.1_2LUN_LP_ALL
            Controller : MAS1102
            NAND string : MT29F4T08EULEE
            NAND MaxPE cycles : 1500
            Ch0CE0: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch1CE0: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch4CE0: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch5CE0: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch0CE1: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch1CE1: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch4CE1: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch5CE1: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch0CE2: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch1CE2: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch4CE2: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch5CE2: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch0CE3: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch1CE3: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch4CE3: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch5CE3: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch0CE4: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch1CE4: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch4CE4: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch5CE4: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch0CE5: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch1CE5: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch4CE5: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch5CE5: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch0CE6: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch1CE6: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch4CE6: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch5CE6: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch0CE7: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch1CE7: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch4CE7: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die
            Ch5CE7: 0x2c,0xd3,0x89,0x32,0xea,0x30,0x0 – Micron 176L(B47R) TLC 1024Gb/CE 512Gb/die

            Drives 2 and 3 report identical results – same model, firmware SN11035, MAS1102 controller, and Micron 176L (B47R) TLC on every channel.

          • lui_gough says:

            Indeed, it seems you have :). That’s good news to hear.

            – Gough

          • Cherry Crumb Cake says:

            Bought a 4th, also Micron, so there’s a good chance of no variable BOM.

  5. Pavel A. says:

    Wow! I had two of these SSDs, bought in December 2022 and deployed in January or February. By May, I was running into issues – they would sometimes just drop off my PC (an always-on machine). After yet another reboot, one of them went unresponsive, just like yours did. After dealing with support, just as they provided me the RMA number, I managed to bring it back to life.
    I carried on using them, cautiously now, until I started noticing intermittent performance “glitches” from both drives. It’s hard to explain, but basically they would enter these “states” where one was super slow at writing while the other would randomly go to 100% activity while slowly playing a movie. It was nuts.
    I presented support with my findings and sent them in for RMA. They didn’t say a word after I sent them, and today in the mail I got 2 brand new drives (same model, one tiny difference on the front). After some research I found your follow-up article and now I’m scared to even open or use the new drives.
    I did check the old ones and they were the same as yours – Maxio controller and YMTC NAND. And you know this is extra crummy, because in reviews of this drive from 2021, it used to use Micron NAND and a Silicon Motion controller…

    • lui_gough says:

      Oh dear. It seems it is not an isolated case … thanks for sharing your experiences!

      – Gough

      • There is a slight difference, by the way, with the new drives. I still haven’t unpacked even one (though I’m super curious to see if anything is different), but on the front of them, next to the RoHS mark, there is now also a ‘UKCA’ mark, so it looks like they’re planning to sic these drives on the unsuspecting UK public haha.
        Should I unpack one? The likelihood of them changing something internal already seems slim…I’ve messaged support to inquire if the new drives are any safer to deploy than the ones they just took back, and whether they’ll still be covered by warranty, so I was gonna wait to hear back. But I’m so curious especially after seeing the new certification…

        • lui_gough says:

          Hmm. Well, I’d say that’s up to you to decide. Would you be able to get a refund? Or perhaps will you end up selling them off in some way (despite the moral knowledge that it may endanger others)?

          But if you do open them, then I’d suggest checking out the full SMART response and running various SSD tools on it to see if you can identify the flash memory inside, and quarantining the drives for a while to ascertain their short term retention behaviour before considering it for longer-term use. Or if you have someone with an X-ray machine, that could probably give some hints …

          – Gough

          • This was Patriot’s response. Not detailed at all and giving some baseless and expected assurance that the replacements will be better:

            “There is some NAND fail of your defect one, we believe the new one should not have this issue.Warranty period will keep as original one, count on purcahsed date Dec.30.2022.”

            I’m thinking I’ll open one up, see what’s inside from the flash Id tool, and then either keep that one or both as guinea pigs for a while – filled up and periodically observed.

          • lui_gough says:

            That’s a rather generic response, but I hope they are true to their word. Best of luck!

            – Gough

          • Alright, so I finally popped the new Patriot Burst Elite 1.92’s out of their packaging and unfortunately it’s still the “new” design – same YMTC NAND, same Maxio controller, same controller firmware. However, there ARE SOME differences…
            1. The body is different, I didn’t take detailed photos of my drives before I returned them and I’m not 100% sure, but this feels different. It’s certainly different from the drive you had – it’s a metal body with only one visible screw. I’m actually not certain how it’s meant to be opened…
            2. Unlike your drive and both of mine, these drives have different firmware. As reported by jm_id.exe at the top, both you and I had “Fw :SN12429”, but both of these new drives show “Fw : H220916a”.
            3. Unlike your drive and my old ones, there are now 10 lines of populated NAND information instead of 9. Here is the full jm_id.exe log so you can see what I mean: https://pastebin.com/eKGMmN8E
            4. They have removed the line “From smart : [HIKVISION] []”, presumably to cover their tracks?

          • lui_gough says:

            That is interesting to hear, but also not entirely encouraging. Hopefully there are differences in batches of NAND?

            – Gough

    • Cherry Crumb Cake says:

      Just mirror the drives, either through Windows Storage Spaces or FreeFileSync/StableBit/SnapRAID, and use them for unimportant data to test them out.

      HTWingNut just did a data retention test on his SSDs.
      https://www.youtube.com/watch?v=igJK5YDb73w

  6. Lucien says:

    Earlier this year there was a recall of YMTC’s 128L NAND due to reliability / data retention issues (the newer 232L isn’t affected).

    Source (in Chinese): https://www.chiphell.com/thread-2508631-1-1.html

    Both your cases have the 128L generation, so seems likely this is the cause.

    • lui_gough says:

      Thanks for this excellent piece of information. I was not aware of this at all … considering how it managed to make its way to market, that’s pretty disappointing.

      – Gough

    • The discussion you linked talks about their 128L TLC, not the QLC that’s in the discussed drive(s). Still, there could be a connection.

      • Lucien says:

        The Fanxiang is TLC, plus I would imagine the manufacturing process for QLC is the same, just storing more bits per cell.

      • Random Guy says:

        The phenomenon on the Fanxiang drive is exactly the same as described in the leaked issue report though: read disturb on “cold” data, and if it is bad enough, it kills the entire drive.

  7. Clemens Eisserer says:

    Your observations are what I dislike about SSDs. While HDD health can often be deduced from SMART values, SSDs suffer bit-rot – the data might be unreadable for months while the SMART values say all is good.

    I have seen quite a few SSDs die, but to be honest most if not all of them were OCZ SSDs:
    – OCZ ??? (SF12xx drive with 25nm IMFT flash, like the Agility 2 but a bit cheaper)
    – OCZ Vertex 2 (received as a replacement for the drive above)
    – OCZ Agility 3 (at work)
    – OCZ Petrol (Marvell controller with lowest-quality Hynix flash – basically forgot data after a few days despite being new)
    – OCZ Octane – would hang on read; a re-write made it work again

    The only OCZ drive I own(ed) which never caused a single issue was a 128GB Vertex 4.

    • lui_gough says:

      Indeed – SMART doesn’t tell the whole truth, and the very act of reading the data out could end up killing the drive too. Failures also tend to be sudden, based on other people’s experiences. Unfortunately, some may only find out the hard way after leaving a low-quality USB stick or SSD in a drawer for a couple of years …

      That being said, yes, OCZ’s reputation was somewhat deserved based on what I’ve observed from others. I definitely do not dispute your experiences – one friend of mine also had a Vertex 2 60GB failure that was sudden and complete. But they were on the leading edge at the time, pushing the envelope on price with an enthusiast target market, and some of their products were fixed with firmware updates (the Vertex 3 I had is an example, though I was also somewhat lucky).

      I hope that the trend of newer SSDs being less reliable is merely my bad luck or an anomaly. But this calls to mind the question of how best to test an SSD for its longer-term reliability. I normally run commissioning tests which I turn into reviews as part of my “acceptance testing”, but that is clearly not adequate with regard to longer-term stability. I wonder if a month-long retention test comparing read speeds would be a valid approach to inferring whether a given unit may be vulnerable to such data loss, at least initially.

      – Gough
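      A retention test of that sort could be sketched like this (my own illustration, not an established methodology – the function name and parameters are hypothetical): record the sequential read speed of large files on a freshly filled drive, then re-measure the same untouched data weeks later. A marked drop on “cold” files is the read-disturb/retention symptom reported in these comments. Note that on a real drive you would need to defeat the OS page cache first (e.g. O_DIRECT, or `echo 3 > /proc/sys/vm/drop_caches` on Linux), otherwise you measure RAM rather than the SSD.

```python
import time

def read_speed_mb_s(path: str, chunk_size: int = 1 << 20) -> float:
    """Sequentially read `path`, returning average throughput in MB/s.
    Comparing this figure for the same file over weeks/months hints at
    whether 'cold' data is becoming slow (or impossible) to read."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed if elapsed > 0 else 0.0
```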

  8. Alright, so it wasn’t even 10 days after I put my new Patriot Burst Elite 1.92TB drives in (the ones Patriot provided in the RMA) and filled them up. They’re back to their old tricks – exactly the same RANDOM unusable write speeds. One day I test them and both drives are fine; the next day I test them and they’re both glitched; the next day one is fine and one is a bit slow, but after 10 minutes it’s also fine… insane.

    I contacted support again and showed them video recordings of this behavior, and they of course latched onto the fact that the drives are ~90% full, which is NONSENSE. I then provided them with screenshots from the following day (today) showing how, even this full, they can still sometimes behave totally normally (as they should), so it’s not consistent behavior. I honestly don’t know what the deal is with these drives anymore… The new firmware on them doesn’t seem to have changed much.

  9. BramVW says:

    Here’s another victim of Patriot Burst Elite 1.92TB reporting in…

    I have 2 of these and I wanted to add some observations:
    For me, this “behavior” always happens immediately after (or during) writing 50 to 100 GB of data to the drive, and it really looks like there’s a threshold of some number of bytes or write operations it has to reach before the “behavior” starts to occur.

    I can definitely tell when it’s happening by the SOUND the SSD is making and the heat it’s producing: it sounds like the high-pitched, intermittent/repetitive seeking sound of an old mechanical hard drive (e.g. when duplicating data), and it gets much hotter than when it’s working “normally”.

    For me, the issue has never been triggered when only reading data off the SSDs.
    But when the “behavior” is already happening, reading/accessing the drive is indeed sometimes interrupted.
    Moreover, the time it takes to “settle down” also depends on the amount of data I had written to the drive –
    e.g. the “behavior” lasted for 5 hours after it completed writing 1TB to the SSD.

    Lastly, if you power down the drive while the “behavior” is happening, it simply resumes immediately when you connect it again…

    All this makes me strongly believe the issue is caused by some form of wear-levelling process that the controller starts periodically – one that is probably not properly throttled, is started even while there’s I/O activity, is not paused during heavy I/O usage, and probably has no thermal throttling either.

    According to my tests, (manually forced) TRIM does not appear to either trigger or delay this “behavior”.

    I expect this could be fully resolved with a firmware update. What do you think?

    • lui_gough says:

      This is ordinary behaviour for most low-end DRAM-less SSDs – some of them will slow down to 10-30MB/s when full or nearly full because of this. They have a pSLC cache for high-speed writing, but once it is exhausted, its contents have to be migrated to the slower TLC/QLC region, causing increased work and slower speeds. This work continues until the cache is cleaned, restoring performance until the cache fills again. Of course, if the drive is overheating, it will slow down a bit as well (thermal throttling). A full drive, however, may not have any space to spare and will remain slow until a secure erase or TRIM discards enough blocks for the controller to have room to shuffle data around.
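      The pSLC-cache exhaustion described above is easy to observe empirically: write a large amount of data and log the throughput of each slice – once the cache is exhausted, the per-slice speed collapses. A minimal sketch of my own (the function, path, sizes and interval are placeholders, not a real benchmarking tool):

```python
import os
import time

def log_write_throughput(path: str, total_mb: int,
                         interval_mb: int = 256) -> list[float]:
    """Write `total_mb` MiB to `path`, returning MB/s per `interval_mb` slice.
    On a DRAM-less drive, later slices typically collapse once the pSLC
    cache is exhausted and folding into TLC/QLC begins."""
    chunk = b"\xA5" * (1 << 20)  # 1 MiB of non-zero data (avoid zero-fill tricks)
    speeds = []
    with open(path, "wb") as f:
        written = 0
        start = time.perf_counter()
        while written < total_mb:
            f.write(chunk)
            written += 1
            if written % interval_mb == 0:
                f.flush()
                os.fsync(f.fileno())  # force the data to the device
                now = time.perf_counter()
                speeds.append(interval_mb / (now - start))
                start = now
    return speeds
```

      Plotting the returned list against time makes the cache-fold point obvious; a drive with DRAM and TLC flash shows a much smaller, later drop.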

      The sound you hear is likely singing capacitors or coil whine from switching converters due to the change in power consumption patterns (i.e. the current waveform envelope). This is not unusual.

      This is not something that is necessarily resolvable with a firmware update – all DRAM-less SSDs do this to a greater or lesser extent, and it depends on the flash memory it is paired with, firmware optimisations, overprovisioning and amount of disk space actually used. If you want proper performance for heavy writes, you should buy an SSD with DRAM and preferably TLC flash, while keeping them cool – of course, they will cost more.

      That is unconnected with the failure that I observed of complete data loss and drive disconnections on attempts to read/recover the data.

      – Gough

  10. caramb says:

    Hi there,

    Greetings from France.
    Quick contribution to tell you that the Silicon Power (SPCC) A55 2TB suffers from the same illness.
    When new or empty, the drive operates as expected.
    After some days/weeks of operation, and once half-full, the drive stops responding to ATA commands.
    It shows as 100% busy while not being able to serve any I/O.
    The Linux kernel of the box it is inserted in reports exactly the same messages: trying to reset the SATA bus, etc.

    Performing a whole-disk discard (TRIM) using blkdiscard usually manages to revive the disk. However, it enters the same deadly cycle again and again, making it completely unreliable storage (loss of data).
    (Luckily, I was using it for backup storage only, so it did not hurt too much.)
    (As of now, I have dropped it and put an entry-level BX500 in instead, and am trying to get feedback from Silicon Power support.)

    Mine has the same firmware version: SN12429.

    This is probably a good way to identify disks prone to this catastrophic failure.
    Unfortunately, one cannot check the firmware before buying the disk; worse, it is a known fact that SSD manufacturers silently rework the disk internals (changing the controller, memory chips, etc.).

    It is very unfortunate, because I’m a long-time user of Silicon Power SSDs (seven A55 1TBs and a few NVMe drives) and none of them has failed; I always considered this brand to offer good value for the money…

    • lui_gough says:

      Thanks for the information – indeed, it is a shame to hear that you have also suffered from this, through a different brand. The fact the firmware version is the same is a very interesting finding – perhaps the similarities also extend to the PCB/OEM, and Silicon Power is merely rebadging these. That being said, the SSD market is very much a mix of different BOMs for the same model, making it hard to predict what you are getting – especially for models which have been on the market for many years. There is little stability in what makes up a particular model or SKU, which is unfortunate for buyers and perhaps a deliberate choice by manufacturers to profiteer off previously good reviews.

      – Gough

      • caramb says:

        Hi Gough,

        Some additional info, as I took my A55 out of my Linux box.
        I put it in a Windows box so I could use the jm_id utility to gather details.

        Here are the results:
        v0.23a
        Drive: 1(USB)
        Model: SPCC Solid State Disk
        Fw : SN12429
        Size : 1953514 MB [2048.4 GB]
        From smart : [HIKVISION] []
        IOCtl: Unlk failed 0x0!
        Firmware id string[2D0]: MKSSD_100006000124290121,Aug 31 2022,23:09:39,MA1102,ES8SC#4H
        Project id string[280] : r:/01_FW/02_Proj/YMTC/YMTC-X2-6070-Branch
        Controller : MAS1102
        NAND string : YMN0WQC1B1AC6C
        NAND MaxPE cycles : 1000
        Ch0CE0: 0x9b,0xd5,0x58,0x8d,0x20,0x0,0x0 – YMTC 3dv3-128L QLC 16k 1.33Tb/CE 1.33Tb/die
        Ch1CE0: 0x9b,0xd5,0x58,0x8d,0x20,0x0,0x0 – YMTC 3dv3-128L QLC 16k 1.33Tb/CE 1.33Tb/die
        Ch4CE0: 0x9b,0xd5,0x58,0x8d,0x20,0x0,0x0 – YMTC 3dv3-128L QLC 16k 1.33Tb/CE 1.33Tb/die
        Ch1CE1: 0x0,0x0,0x0,0x0,0x0,0x0,0x0 –
        Ch0CE1: 0x9b,0xd5,0x58,0x8d,0x20,0x0,0x0 – YMTC 3dv3-128L QLC 16k 1.33Tb/CE 1.33Tb/die
        Ch5CE1: 0x0,0x0,0x0,0x0,0x0,0x0,0x0 –
        Ch0CE2: 0x9b,0xd5,0x58,0x8d,0x20,0x0,0x0 – YMTC 3dv3-128L QLC 16k 1.33Tb/CE 1.33Tb/die
        Ch1CE2: 0x9b,0xd5,0x58,0x8d,0x20,0x0,0x0 – YMTC 3dv3-128L QLC 16k 1.33Tb/CE 1.33Tb/die
        Ch4CE2: 0x9b,0xd5,0x58,0x8d,0x20,0x0,0x0 – YMTC 3dv3-128L QLC 16k 1.33Tb/CE 1.33Tb/die
        Ch1CE3: 0x9b,0xd5,0x58,0x8d,0x20,0x0,0x0 – YMTC 3dv3-128L QLC 16k 1.33Tb/CE 1.33Tb/die
        Ch0CE3: 0x0,0x0,0x0,0x0,0x0,0x0,0x0 –
        Ch5CE3: 0x9b,0xd5,0x58,0x8d,0x20,0x0,0x0 – YMTC 3dv3-128L QLC 16k 1.33Tb/CE 1.33Tb/die

        So we learn:
        – The Silicon Power A55 uses a Maxio MAS1102 controller.
        – The PCB and/or reference design is from Hikvision.
        – The NAND chips are 3D 128-layer QLC parts from YMTC.

        It sounds completely reasonable to assume this drive is affected by the YMTC chip defect that was pointed out in an earlier comment (thank you Lucien).

        Even though the advisory is about NAND model X2-9060 and jm_id reports X2-6070, I think the difference is either purely cosmetic and the chips are the same, or the defect affects various models of chip (model numbers aside, both the 9060 and 6070 have the same overall NAND specs).
        The advisory states the defect affects the X2 family of chips but not the X3 family.
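        For what it’s worth, the first byte of each Chip ID line appears to be the JEDEC manufacturer ID – 0x9B on the YMTC parts in this log, and 0x89 on the Intel parts in another reader’s log below. A tiny sketch of that mapping (covering only the IDs seen in this thread, not an exhaustive table):

```python
# JEDEC manufacturer IDs observed in the jm_id logs in this thread;
# deliberately incomplete - extend as needed.
NAND_MAKERS = {
    0x9B: "YMTC",
    0x89: "Intel",
}

def maker_from_chip_id(chip_id: list[int]) -> str:
    """The first byte of a NAND Chip ID identifies the manufacturer."""
    return NAND_MAKERS.get(chip_id[0], f"unknown (0x{chip_id[0]:02X})")
```

        This is the quickest way to spot YMTC flash in a jm_id dump without opening the drive, even when the descriptive text after the ID bytes is missing.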

      • caramb says:

        Hummm…

        Just found your original review of the Patriot, and you have probably figured out that the jm_id output is exactly the same.
        The Patriot Burst Elite 1.92TB and Silicon Power A55 2TB are thus rebadged Hikvision SSDs.
        And of course they behave the same…

  11. Ratsherr from Germany says:

    Dear Gough,

    Hearty greetings from ol’ Germany.
    Your details helped me out, especially your hint about the software.

    I downloaded the utility, and I’m very happy to have this version of the Patriot Burst Elite 480GB – I see it is rated for 1,500 P/E cycles instead of 1,000.

    Model: Patriot Burst Elite 480GB
    Fw : H221215a
    Size : 457862 MB [480.1 GB]
    Firmware id string[2D0]: MKSSD_Dec 1 2022,12:17:51,MA1102,ECNRC#2H
    Project id string[280] : r:/01_FW/02_Proj/N38B/1102_N38B_Branch
    Controller : MAS1102
    NAND string : PF29F01T2ALCQK1
    NAND MaxPE cycles : 1500
    Ch0CE0: 0x89,0xd3,0xac,0x32,0xc2,0x4,0x0 – Intel 144L(N38B) QLC 1024Gb/CE 1024Gb/die
    Ch1CE0: 0x89,0xd3,0xac,0x32,0xc2,0x4,0x0 – Intel 144L(N38B) QLC 1024Gb/CE 1024Gb/die
    Ch4CE0: 0x89,0xd3,0xac,0x32,0xc2,0x4,0x0 – Intel 144L(N38B) QLC 1024Gb/CE 1024Gb/die
    Ch5CE0: 0x89,0xd3,0xac,0x32,0xc2,0x4,0x0 – Intel 144L(N38B) QLC 1024Gb/CE 1024Gb/die
