I’m looking into using SD cards for my backups. They’re easy to transport, easy to store elsewhere, and backups aren’t a challenging write-erase workload. Does anyone have experience with this, or recent measurements? I do see many of the problems mentioned in the article below.
While not quite as tiny, an NVMe enclosure with a real drive is likely going to be more reliable, assuming the USB port it’s connected to can actually supply the power it needs.
The enclosures I’m currently fond of are from Dockcase; they have a capacitor for power loss protection (PLP), so if power is cut, the drive is sent an “oh shit, make things safe now” signal and has a few seconds to do so.
That said, expect all drives to fail. With some you get a warning, with others you don’t, so have multiple backups. I don’t recommend the cheap vendor-branded Samsung SSDs on eBay; I’ve had too many of those die with the firmware version reporting as “ERRMOD” (error mode).
Another good thing to know is how to power cycle SSDs. Outside of Samsung drives, this has worked on a few drives over the years that suddenly decided to stop working: https://dfarq.homeip.net/fix-dead-ssd/
In my experience, Raspberry Pi DIYers are more than happy to talk about their experiences, but they've only got small sample sizes, they're buying things at retail, some will be relatively price-insensitive, and people who have problems might be particularly eager to complain.
These people hate SD cards with a passion.
The kind of people who are in a position to report on failure rates across a fleet of hundreds of SD cards (a) don't always have their employer's permission to talk about it, (b) don't want to publicly bitch about vendors they want to maintain a good relationship with, (c) can be very price sensitive depending on the product and (d) are often buying through special channels where vendors will do things like guarantee a specific controller firmware version.
These people keep on releasing products that use SD cards.
Probably not a good idea. Modern, high-density SD cards likely use not-very-reliable MLC NAND internally, which pretty much requires active maintenance by the software on the built-in controller.
I once had an embedded board booting from MLC NAND that lost its entire u-boot environment after being without power for a year, just sitting on a shelf in my office.
Similarly, at home, I have an SSD with Windows on it for occasionally playing games. It required extensive recovery after sitting in a desk drawer for a little over a year.
I generally wouldn't trust flash memory for offline backups.
I've heard of this, but there seems to be a lack of practical information. If this is true, then:
* Shouldn't SD cards have an expiration date? What if you buy a card that's been sitting on the shelf for a year?
* What is the proper maintenance procedure? Should I let it sit in the reader for a minute, 15 minutes, an hour for the controller to do its magic?
* Are the translation tables better protected? Does the card return to full working order when reformatted?
* Are there testing tools for this? I remember CDs had tools that could tell you how much error correction the drive had to do.
> Shouldn't SD cards have an expiration date? What if you buy a card that's been sitting on the shelf for a year?
If the flash was blank at manufacturing, it would remain blank after a year as there would be no charge to leak out. The problem is when the controller also stores its firmware on the main NAND, but AFAIK that isn't the case for SD cards (which have very little in the way of firmware compared to SSDs.)
Note that that’s only when the drives have hit their rated program/erase cycles, i.e. completely worn out but still functioning. It does not apply to drives with lots of life left in them, which is the vast majority of consumer drives.
From the linked PDF:
> In MLC and SLC, this can be as low as 3 months and best case can be more than 10 years. The retention is highly dependent on temperature and workload.
Interestingly, while heat can increase the rate of information loss (when the drive is worn out), it can also decrease wear on the drive by making writes easier and less destructive to the flash cells. The controller is still going to hate heat, though.
Thanks for the extra references! I'll make sure to bookmark those and take a closer look later.
Yeah, MLC already losing data after 3 months doesn't really surprise me that much. If you play with a raw MLC chip and turn off the ECC engine, the number of bit flips you get just from reading some chips isn't even funny anymore. With one chip I tried, even with ECC, I managed to induce persistent bit flips fairly quickly just by reading the same physical page over and over. For extra fun: those bit flips can happen in another page that you didn't touch, because the bits are paired :-)
What actually still does surprise me sometimes is how reliably those things can work after all, if you just pile enough software abstraction and management on top.
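If you want to put a number on those bit flips yourself, the simplest check is to dump the same page twice and diff the dumps. A minimal sketch in Python; it assumes you already have two raw dumps of the same physical page (e.g. from a NAND programmer with ECC disabled), and the file names are just placeholders:

```python
# Count bit flips between two raw dumps of the same NAND page.
# Assumes the dumps were taken with ECC disabled so the flips are visible.
from pathlib import Path


def count_bit_flips(dump_a: Path, dump_b: Path) -> int:
    a = dump_a.read_bytes()
    b = dump_b.read_bytes()
    if len(a) != len(b):
        raise ValueError("dumps must cover the same page (equal length)")
    # XOR byte pairs and popcount: each set bit is one cell that read back differently.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))


if __name__ == "__main__":
    # Placeholder file names: e.g. the 1st and the 5000th read of the same page.
    flips = count_bit_flips(Path("page_read_0001.bin"), Path("page_read_5000.bin"))
    print(f"{flips} bit flips between the two reads")
```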
Are there tools available to a normal person to take a look at this stuff? E.g., is there some way I can evaluate a not-yet-failing SD card? CDs had tools to check for C1 and C2 errors; is there a way to do the same with an SD card or SSD?
For SSDs, I'd imagine you might get SMART data or something similar?
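For the SSD side, something along these lines would pull the wear-related fields out of smartctl. A sketch only: it assumes smartmontools 7+ (for the -j JSON output) and an NVMe drive at /dev/nvme0; SATA drives report their attributes under different keys, so adjust accordingly.

```python
# Read a few SMART health indicators from smartctl's JSON output (NVMe drive assumed).
import json
import subprocess


def nvme_health(device: str = "/dev/nvme0") -> dict:
    # check=False: smartctl uses non-zero exit codes as a warning bitmask, not just errors.
    out = subprocess.run(
        ["smartctl", "-j", "-a", device],
        capture_output=True, text=True, check=False,
    )
    data = json.loads(out.stdout)
    log = data.get("nvme_smart_health_information_log", {})
    return {
        "percentage_used": log.get("percentage_used"),    # vendor wear estimate, 0-100(+)
        "media_errors": log.get("media_errors"),
        "unsafe_shutdowns": log.get("unsafe_shutdowns"),
        "power_on_hours": log.get("power_on_hours"),
    }


if __name__ == "__main__":
    print(nvme_health())
```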
I know that on Linux there's mmc-utils, but I've only used it with eMMC so far.
I just tried to read out some general status information on the uSD cards I have around: it didn't work for two Kingston and Acon cards (command timeout). For a Swissbit card (industrial), though, it took a solid 10 seconds before reporting back that it's doing OK and is ready for data. Looks like it depends heavily on vendor support, and possibly on undocumented/non-standard command sequences?
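For eMMC specifically (5.0 and later), the wear estimates live in the extended CSD, so a thin wrapper around mmc-utils is enough to pull them out. A sketch, assuming an eMMC device at /dev/mmcblk0 and root privileges; plain SD cards usually won't answer at all (hence the timeouts above), and the output wording varies between mmc-utils versions, so the matching below is deliberately loose:

```python
# Pull the eMMC life-time / pre-EOL estimates out of `mmc extcsd read` (mmc-utils).
import subprocess


def emmc_wear(device: str = "/dev/mmcblk0") -> list[str]:
    out = subprocess.run(
        ["mmc", "extcsd", "read", device],
        capture_output=True, text=True, check=True,
    )
    # The fields of interest are DEVICE_LIFE_TIME_EST_TYP_A/B and PRE_EOL_INFO;
    # match loosely since the human-readable labels differ between versions.
    keywords = ("life time", "eol")
    return [line.strip() for line in out.stdout.splitlines()
            if any(k in line.lower() for k in keywords)]


if __name__ == "__main__":
    for line in emmc_wear():
        print(line)
```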
You might be referring to "MLC" in the generic sense of multiple bits per cell, but SD cards are more likely to be TLC or QLC these days (and there may even be some PLC appearing now), i.e. 3 or 4 bits per cell rather than the original 2-bit meaning of MLC.
My experience with old (as in ~2008-2009) 2-bit MLC NAND, with not much wear, has been that they've been OK for over a decade. Ditto for some early SLC USB drives (the biggest being 512MB).
USB flash failed in terms of writes? I'm kind of curious about the story for long-term offline storage vs. occasional use, since I think the controller needs both the logic to actively refresh over months/years and to occasionally be powered on to do so?
I think both work, from a purely information point of view.
The SingleFile download preserves more of the original format. For a long while I was using MarkDownload to capture content, but a lot gets lost that way.
I also use Zotero for downloading journal articles (etc.), which can also take snapshots, but I found the content ended up locked inside Zotero. My current setup is a Jekyll repo on Vercel, which means the content is accessible almost immediately after the GitHub push and deploy, something that happens automatically after I click the SingleFile download button (configured in the extension).
I need do no more than grab the web link and paste it into Obsidian, whereas linking to Zotero from Obsidian is a royal pain (not impossible, though).
I’d love to know what the smallest OS is that can boot on an old MacBook, get on the network (wired or wireless), and include a basic browser (TUI or GUI), even if it doesn’t drive all the onboard hardware. That feels like it would be an excellent learning platform.
DOS just isn't much of an operating system; it's more like a bunch of helper functions preloaded into RAM, and you're free to do whatever you want with or without them.
I’m very aligned. I don’t love Apple, but I buy their hardware because of their vertical integration and closed systems. I get the argument that many eyes make bugs shallow, but if you want a tightly managed supply chain as a consumer, I don’t think you can do better than Apple in today’s market.
As I’m saying to Stavros above, I think many of these decisions don’t get put to a shareholder vote. The voting mechanism is generally extremely coarse afaik.