SSD/hybrid drive lifespan estimator

 

Overview

This estimator finds an MTTF (Mean Time To Failure) by taking an estimate of how much data can possibly pass through an SSD or hybrid drive before bad blocks begin to appear, and calculating how long the drive should last based on current OS data-usage statistics I have gathered. The number it gives you is the Estimated Time To Data Failure, which I will call ETTDF.

The estimate is theoretical; in the real world, we all know things happen that we don't want. I based this information on averaged real-world data, drive specs, and OS overhead statistics. Note that on my machine, virtual memory is turned OFF, which prevents a lot of superfluous disk writes and reads. For hybrid drives, figure probably 7 years maximum average use, based on historical drive failure data.

The information you enter here runs only in your browser on your machine; no data is transmitted to a server, unless you choose to make a bookmarkable URL with your data in it.

This calculator can calculate the estimated wear for the SSD flash cache on hybrid drives, and for regular SSDs. You can enter parameters for MLC, SLC, and TLC flash. If you see a new flash technology not listed here that I have not caught, notify me so I can update this page.

I am assuming the hybrid drives use flash as a write cache as well as a read cache. I could be wrong; not much data is published about these drives.

The data I have here does not take into account ANY virtual memory or swap space. If you are low on memory, the numbers generated by this calculator surely won't work for you - you should upgrade your RAM to fix that.

The final estimated MTTF should be taken as the minimum of the following (a small sketch of this step follows the list):

  • the real-world time a drive lasts based on customer usage (real-world MTTF)
  • the manufacturer's MTTF given by the product specs, for both the flash and hard drive subassembly portions separately, from web sites
  • estimated MTTF (ETTDF)
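
In code form, the final step is just a minimum; a minimal python sketch (the function and parameter names are mine, for illustration), assuming all three numbers are in hours:

# final MTTF = the smallest of the three estimates; all inputs in hours.
HOURS_PER_YEAR = 365.24224 * 24  # 8765.81376

def final_mttf_hours(real_world_hours, mfr_spec_hours, ettdf_hours):
    return min(real_world_hours, mfr_spec_hours, ettdf_hours)

# example: a 5-year real-world figure, a 1M-hour spec, and a 20-year ETTDF
print(final_mttf_hours(5 * HOURS_PER_YEAR, 1_000_000,
                       20 * HOURS_PER_YEAR) / HOURS_PER_YEAR)  # -> 5.0 years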

Getting the calculations right has been a grueling task, and I have gotten it wrong many times. I still wonder about these very high values.

Caveat: MTTF numbers over 200 years @ 24/7/365

If you see large numbers over 200 years @ 24/7/365 (that's the hours they assume), I would not expect the drive to work much beyond 10-20 years of use, because of parts like electrolytic capacitors, which degrade. In actuality, 2794 years is the MTTF I get for 1M hours at 8 hours/day including Saturdays, but I don't think in the real world things are going to last quite that long. If you ever decide to wipe the SSD, that costs one full P/E cycle, reducing its lifespan by roughly 1/3,000 to 1/1,600 in the case of MLC approximately, or 1/800 in the case of TLC (Toggle flash).

A friend has told me that he has heard of SSD failures; this turns out to be a specific model that is apparently well known.

The number of years metal migration (electromigration) takes is a calculation based on a number of factors that are hard for the average joe to obtain. I tried to get some estimates for chip electromigration MTTF and saw numbers like 170-308 years (depending on circuit design). One company said their SSDs have a 1,000,000hr (114.079yr) MTTF for MLC and a 1,500,000hr (171.119yr) MTTF for SLC; however, a 600,000hr MTTF was quoted in the same place for hard disks, which I have found only last about 5 years @ 56hrs/wk (without GRC's SpinRite) on average before crashing or developing a growing bad-blocks problem. The actual chips may last 600K hours; it's just that the data on the hard disk platter accumulates bad sectors, or there may be other physical defects that somehow passed through QA. Caveat emptor. If you are looking at an SSD, take its MTTF or MTBF in hours and divide by 365.24224*24 = 8765.81376; this gives you the MTTF (Mean Time To Failure) in years.
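
Here is that conversion as a small python sketch, including the hours-per-week scaling the calculator applies (the function name is mine):

HOURS_PER_YEAR = 365.24224 * 24  # 8765.81376

def mttf_years(mttf_hours, hours_per_week=168.0):
    # 24/7 use is 168 hours/week; lighter use stretches the calendar life
    return (mttf_hours / HOURS_PER_YEAR) * (168.0 / hours_per_week)

print(mttf_years(1_000_000))      # -> 114.079 years at 24/7 (the MLC figure above)
print(mttf_years(1_500_000))      # -> 171.119 years (the SLC figure)
print(mttf_years(600_000, 56.0))  # the hard disk spec at 56hrs/wk -> ~205 years on paper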

It looks, however, like in most cases if something works for 3 years, it's going to keep working. But you shouldn't put your trust in the MTTF, but rather in the Lord, because he's in charge of the real world.

 

SSD Lifespan Estimator

This estimator is based on the following assumptions:

  • the entire \windows\ directory doesn't matter, because that's a read, and reads are not a problem with flash; writes are the problem.
  • only total writes/day metrics are needed in this estimator.
  • max_num_write_bytes = block_size * num_P_E_cycles_per_block * (ssd_size_in_bytes / block_size), which simplifies to num_P_E_cycles_per_block * ssd_size_in_bytes. (A sketch of the whole calculation follows this list.)
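
Here is the whole SSD estimate as a minimal python sketch under those assumptions (parameter names are mine, not the calculator's actual field names):

def ssd_ettdf_years(ssd_size_in_bytes, num_P_E_cycles_per_block,
                    write_bytes_per_day, bytes_written_so_far=0,
                    hours_per_week=168.0):
    # total bytes the flash can absorb before bad blocks begin to appear
    max_num_write_bytes = ssd_size_in_bytes * num_P_E_cycles_per_block
    bytes_left = max_num_write_bytes - bytes_written_so_far
    days_of_use = bytes_left / write_bytes_per_day
    days_of_use *= 168.0 / hours_per_week  # stretch use-days into calendar days
    return days_of_use / 365.24224

# example: 256GiB TLC drive (800 P/E cycles), ~2.42GB written/day, used 24/7
print(ssd_ettdf_years(256 * 2**30, 800, 2_422_996_992))  # -> ~248.5 years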

(Example: 1,000LBA or 1,000blocks or 1,000B (bytes))
(Example: 4381hours or 182days)




Enable days per week (turning this off defaults to 24/7)
Days per workweek on computer: (integer, or 1day or 2days or 5.1days)
Hours per weekday on computer: (integer, or 8hrs or 8hours)
Hours per Saturday on computer: (integer, or 8hrs or 8hours)
Hours per Sunday on computer: (integer, or 8hrs or 8hours)

Filesystem overhead: see the table below for proper values.

Results:

0 days 00:00:00.000 or 0 days 00:00:00.000
worth of life before data errors start to occur and the device needs to be replaced.
To fix these problems for a while, until the drive completely fails, run SpinRite level 1 every year.

0 days 00:00:00.000 or 0 days 00:00:00.000
worth of drive life according to drive MTTF spec including hours per week ratio you specified.

Final Estimated MTTF (minimum of above 2):
0 days 00:00:00.000 or 0 days 00:00:00.000

estimated maximum write bytes per drive:
write bytes left per drive:
ratio of days per week:
total disk read bytes/day:
total disk write bytes/day:


Hybrid and Dual (SSD+HD) Drive Lifespan Estimator

This estimator is based on the following assumptions:

  • probably up to 7 years max average use for a hybrid, due to standard historical hard drive failure data.
  • the whole \windows\ directory needs to be loaded at boot. (This is most likely actually a percentage, but I have no way to find out how much is really loaded: perfcounters don't give you that info, I don't think, and I don't have SATA or IDE test equipment, so I have to assume 100% of the files in the windows dir load at boot. The numbers I got were shockingly low, and I wasn't sure I believed them - for instance, Windows 7 takes up 20GiB, only a percentage of which is loaded, and the perfcounters gave me 630K read bytes for the whole day! I find that hard to believe. I still think it's less than 100% in the real world; for instance, not everyone enables faxing, so that set of exes and dlls may not be loaded, etc.)
  • all the read data goes through the flash. (20GiB/boot)
  • all the write data goes through the flash. (3MB or so/day)
  • for Seagate Adaptive Memory Technology, we are going to assume that loading the OS counts as a long stream and gets redirected to the hard disk, not the SSD, so boots are essentially removed from the equation. This is not the case with the other cache types.
  • total writes/day, total reads/day, and \windows\ directory size need to be accounted for, depending on cache type (see the sketch after this list).
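
A minimal python sketch of the flash-wear part of the hybrid estimate under those assumptions (the names, and the one-boot-per-day example numbers, are mine):

def hybrid_flash_years(flash_bytes, num_P_E_cycles_per_block,
                       write_bytes_per_day, read_bytes_per_day,
                       windows_dir_bytes, boots_per_day=1,
                       adaptive_memory=False):
    # read data has to be written INTO the cache before it can be read back,
    # so under the assumptions above, reads cost P/E cycles too
    daily_flash_bytes = write_bytes_per_day + read_bytes_per_day
    if not adaptive_memory:
        # non-Seagate caches: the whole \windows\ dir streams through flash at boot
        daily_flash_bytes += windows_dir_bytes * boots_per_day
    max_write_bytes = flash_bytes * num_P_E_cycles_per_block
    return max_write_bytes / daily_flash_bytes / 365.24224

# example: 8GiB MLC cache (5000 cycles), ~3MB writes/day, 20GiB \windows, 1 boot/day
flash_years = hybrid_flash_years(8 * 2**30, 5000, 3e6, 0, 20 * 2**30)
print(flash_years, min(flash_years, 7.0))  # cap with the ~7-year real-world HD figure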

Warranty:

(Example: 1,000LBA or 1,000blocks or 1,000B (bytes))

(Example: 1,000LBA or 1,000blocks or 1,000B (bytes))
(Example: 4381hours or 182days)




Enable days per week (turning this off defaults to 24/7)
Days per workweek on computer: (integer, or 1day or 2days or 5.1days)
Hours per weekday on computer: (integer, or 8hrs or 8hours)
Hours per Saturday on computer: (integer, or 8hrs or 8hours)
Hours per Sunday on computer: (integer, or 8hrs or 8hours)


Results:

0 days 00:00:00.000 or 0 days 00:00:00.000
worth of life before data errors start to occur and the device needs to be replaced.
To fix these problems for a while, until the drive completely fails, run SpinRite level 1 every year.

0 days 00:00:00.000 or 0 days 00:00:00.000
worth of Flash component/subassembly life according to drive MTTF spec including hours per week ratio you specified.

0 days 00:00:00.000 or 0 days 00:00:00.000
worth of Hard Drive component/subassembly life according to drive MTTF spec including hours per week ratio you specified.

0 days 00:00:00.000 or 0 days 00:00:00.000
worth of Hard Drive component/subassembly life according to real life drive failure (about 5-7 years).

Final Estimated MTTF (minimum of above 3):
0 days 00:00:00.000 or 0 days 00:00:00.000

ratio of days per week:
estimated maximum write bytes per drive:
estimated transfer bytes left per drive:
total disk read bytes/day:
total disk write bytes/day:

Status

I think I got my calculations right; I made some corrections 9/1/2013 and 9/10/2013. The drive specs will say how much data you can write, and that seems to match what I compute here, so I got that much right (simple multiplication).

I need data! (For linux stats, see the /proc/diskstats sketch further down; on OSX, maybe iotop?)

list being refined...

  • vista:total read/write bytes/day logged for 24 hours for hard disk as main drive using perfmon
  • vista:total read/write bytes/day logged for 24 hours for ssd as main drive
  • 8.1:total read/write bytes/day logged for 24 hours for hard disk as main drive using perfmon
  • 8.1:total read/write bytes/day logged for 24 hours for ssd as main drive
  • 8.1:size of installation after fresh install or very minorly used install
  • ubuntu:total read/write bytes/day logged for 24 hours for hard disk as main drive
  • ubuntu:total read/write bytes/day logged for 24 hours for ssd as main drive
  • mint:total read/write bytes/day logged for 24 hours for hard disk as main drive
  • mint:total read/write bytes/day logged for 24 hours for ssd as main drive
  • fedora:total read/write bytes/day logged for 24 hours for hard disk as main drive
  • fedora:total read/write bytes/day logged for 24 hours for ssd as main drive
  • opensuse:total read/write bytes/day logged for 24 hours for hard disk as main drive
  • opensuse:total read/write bytes/day logged for 24 hours for ssd as main drive
  • osx:total read/write bytes/day logged for 24 hours for hard disk as main drive
  • osx:total read/write bytes/day logged for 24 hours for ssd as main drive

Windows 98: don't bother - although speed-wise it outperforms them all, it absolutely hammers the same blocks and ruins the SSD.

Apparently, Win2k with no background tasks is the OS winner for speed.

Read/write ratios for Windows Vista, Windows 7, SSDs, and various flavors of linux are shown at the bottom.

The calculations are finished and working, and I have SOME sets of data.

os disk space (sizes)

you should only think about hybrid drives for 2000, XP or Vista.

  • for windows, this is the total size of
    • "%windir%"
    • "%programfiles%"
    • "%programfiles(x86)%"
    • "%appdata%" (don't do this on windows 8, it's infinitely recursive)
    • "%userprofile%"
    Open My Computer, then C:, make sure the Folders button is depressed, right-click on the WINDOWS folder, pick Properties, and see how much space it takes up (use the larger number). Multiply this number by its IEC equivalent. Why these dirs? Because any files added by the user are outside the scope of windows. (Or use the script sketched after this list.)
  • for linux flavors (ubuntu, opensuse, fedora, mint), after a fresh install and updates, you can look at the total size of the OS by using gparted to see the size of the partition, OR run du -s -h / (or du -s --bytes / for an exact byte count).
  • for mac OSX, I have no idea what you check, but you need UNIX tools I should think, and it's probably the same as for linux.
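
If you'd rather not click through folder properties, a small python script can total the directories on any of these OSes (the directory list here is just an example):

import os

def dir_bytes(path):
    # walk the tree and total the file sizes, skipping files we can't stat
    total = 0
    for root, dirs, files in os.walk(path, onerror=lambda e: None):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass
    return total

# windows example; on linux or OSX, point it at / instead
for d in [r"C:\Windows", r"C:\Program Files", r"C:\Users"]:
    print(d, dir_bytes(d))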
2000:
XP SP3 with updates thru 10/26/2012:
	55644552831=(9136223214) \windows
	+(33568347408) \program files
	+(5423434060) \documents and settings\all users
	+(5777032242) E:\Documents and Settings\someuser\Application Data
	+(1739124002) E:\Documents and Settings\someuser\local settings
	+(46847) E:\Documents and Settings\someuser\start menu
	+(345058) \documents and settings\all users\start menu
	-----TOTAL:
	(55644552831)
http://www.xinotes.org/notes/note/206/
Vista Ultimate with updates through 2012: 42GiB=45097156608 = windows + programfiles + users
7 Ultimate with no updates through 2012: 20GB=20e9 = windows + programfiles + users
(897156) c:\intel
(0) c:\MoTemp
(132272) c:\MSOCache
(262144) c:\PerfLogs
(3580556384) c:\Program Files
(8846994877) c:\program files (x86)
(1899268823) c:\programdata
(0) c:\config.msi
(0) c:\recovery
(0) c:\system volume information
(232353255481) c:\users
(24398657958) c:\windows
-----TOTAL:
(271080025095)
8:18681490576=windows 8827203154+users 8571839045+progfiles 1282448377
8.1:
http://www.1nova.com/blog/2008/10/how-to-check-os-x-disk-usage/
Mac OSX:51e9
Ubuntu: 15GB=15e9
Linux Mint: 5058052096
http://www.linuxbsdos.com/2012/06/19/fedora-17-kde-review/
Fedora:3.5e9
http://doc.opensuse.org/documentation/html/openSUSE/opensuse-startup/art.osuse.installquick.html#sec.osuse.installquick.sysreqs
OpenSUSE:3e9

os writes/reads ratio

you should only think about hybrid drives for 2000, XP or Vista.

  • for windows, this is the reported average disk write bytes/minute divided by disk read bytes/minute for the windows system partition (usually C:), averaged over about 15-20 minutes while you are using the machine normally (I was using it for web browsing).
  • for linux flavors (ubuntu, opensuse, fedora, mint), after a fresh install and updates, I don't know of an official stats tool, but sampling /proc/diskstats should work (a sketch follows this list). You would want to do this on /.
  • for mac OSX, you can look at ??? I don't know how to get performance data/stats. You would want to do this on /.
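
For the linux flavors, here is the /proc/diskstats idea as a python sketch (the sector counts in that file are always in 512-byte units; "sda" is just an example device):

import time

def disk_rw_bytes(device="sda"):
    # /proc/diskstats fields after major, minor, and device name:
    # parts[5] = sectors read, parts[9] = sectors written
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            if parts[2] == device:
                return int(parts[5]) * 512, int(parts[9]) * 512

read0, write0 = disk_rw_bytes()
time.sleep(15 * 60)  # sample over ~15 minutes, like the windows procedure above
read1, write1 = disk_rw_bytes()
print("writes/reads ratio:", (write1 - write0) / (read1 - read0))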
2000:
XP SP3 with updates thru 10/25/2012: 1.365337054285238298581 = 57040.833 / 41777.840
Vista Ultimate with updates through 10/2012:
7 with updates through 10/2012:
8: 0.908898713419487833605 = 623318.135 / 685794.936
Mac OSX:
Ubuntu:
Linux Mint:
Fedora:
OpenSUSE:

os disk write bytes/minute

you should only think about hybrid drives for 2000, XP or Vista.

  • for windows, this is the reported average disk write bytes/minute for the windows system partition (usually C:), averaged over about 15-20 minutes while you are using the machine normally (these are the numerators from the ratio section above).
  • for linux flavors (ubuntu, opensuse, fedora, mint) and mac OSX, same approach as in the ratio section above, measured on /.
2000:
XP SP3 with updates thru 10/25/2012: 57040.833
Vista Ultimate with updates through 10/2012:
7 with updates through 10/2012:
8: 623318.135
Mac OSX:
Ubuntu:
Linux Mint:
Fedora:
OpenSUSE:

NAND FLASH SIZES: powers of 2 in floating point format

2:GiB, 4:GiB, 8:GiB, 16:GiB, 32:GiB, 64:GiB, 128:GiB, 256:GiB, 512:GiB, 1:TiB, [2:TiB, 4:TiB, 8:TiB, 16:TiB]

60:GiB, 120:GiB, 240:GiB, 480:GiB, 640:GiB, 960:GiB, [1,920:GiB, 3,840:GiB, 7,680:GiB, 15,360:GiB]

list updated 6/10/2013

hard disk sizes in floating point format

1GB=1e9  640GB=640e9  500GB=500e9
1TB=1e12  2TB=2e12  4TB=4e12
1PB=1e15  2PB=2e15  4PB=4e15
(we have not gotten here yet as of 6/21/2011)

technology types and max number of erase/write cycles per block

SLC FLASH=100000=100e3
Micron Enterprise eMLC FLASH=30000=30e3
http://www.micron.com/products/nand-flash/choosing-the-right-nand
typical MLC=5000=5e3
http://www.theregister.co.uk/2012/06/21/tlc_threshold/
TLC FLASH=800  (comes under heading of MLC)

filesystem overhead

Interesting initial filesystem overhead blog article. This could apply, but I am not sure how; it might be more applicable to list overhead for clusters or inodes of data, I should think.

FAT12: http://www.angelfire.com/scifi/hardware/ref/fdd.htm
FAT16 and FAT32: http://hjohn.home.xs4all.nl/SFS/spaceeff.htm
*maybe, last ditch* http://rwmj.wordpress.com/2009/11/08/filesystem-metadata-overhead/
these can be expressed as a fraction of the volume (multiply by 100 for a percentage):
-----windows
FAT12=0.011591148577449947=16896/1457664
FAT16=0.22900390625 = 469*2^20/(2*2^30)
FAT32=0.2294921875 = 470*2^20/(2*2^30)
www.ghacks.net/2009/01/29/windows-xp-exfat-file-system-driver/
exFAT(FAT64)= 0.352557450395575316820 = (96*1024)/(4*2^30)+(86.8-56.2)/86.8
vFAT=
http://www.oocities.org/svenfoucault.geo/hpfs.html (assuming a linear projection)
http://66.14.166.45/whitepapers/datarecovery/filesystems/hpfs/Inside%20HPFS%20Part%201.pdf
NTFS(HPFS)=0.125
-----*nix
http://glandium.org/blog/?p=1051
btrfs=0.25
http://unix.derkeiler.com/Newsgroups/comp.unix.aix/2006-03/msg00432.html
jfs=0.04
ext4=0.015562640842293545962 = (23.28*2^30-24607694848)/(23.28*2^30)
http://subdude-site.com/WebPages_Local/RefInfo/Computer/Linux/DiskFormatting/linux_disk_formatting.htm
ext3=0.05
http://www.linuxmisc.com/1-linux-setup/d73d7095ad09afb2.htm
ext2=0.05
-----mac osx
http://www.mentby.com/Group/zfs-discuss/zfs-and-vmware.html
zfs=0.25
hfs+=0.08333333333333333333333
hpfs=0.00098304 = 1-0.9898 or (2.5+5)*2^20/10000*2.5e6 / 2e12
http://help.lockergnome.com/windows2/Calculating-NTFS-overhead--ftopict256157.html

I have no earthly idea how the article author got 0.7
On my system, a TLC Samsung 840 pro 256GB SSD:
S.M.A.R.T. info:
idle: 2,422,996,992 write bytes/day, or 591,552 blocks/day
(starting with a total of 2036301994 LBAs written,
ending with a total of 2036326642 LBAs written,
measured over 1 hour and multiplied out to 24 hours).
Hard drives do not report write totals; SSDs do, since this matters for flash, and hybrids may too.
This is (2,422,996,992Bytes/day) / ((256*2^30Bytes)*(800writes/block)) * 100% = 0.001101851463%/day, or 0.402442%/year.
At this rate, it would totally fail in ((256*2^30)*(800)) / (2,422,996,992) / 365.24224 = 248.48 years.
But most people want to install an OS on the drive for extra speed, so subtract the OS installation size from the total write budget (disk size * max P/E cycles/block).
So after a 20GiB OS install, it's
(2,422,996,992Bytes/day) / (256*2^30Bytes*800writes/block - 20*2^30) * 100% = 0.0011019590765%/day, or 0.402482%/year.
At this rate, it would totally fail in (256*2^30*800 - 20*2^30) / (2,422,996,992) / 365.24224 = 248.458 years.
calculations done using ttcalc.

Using S.M.A.R.T. monitoring tools (smartctl.exe, which I think is a port from *nix) and ttcalc:
Over the 4831-hour lifespan of my SSD, I have 2036298901 blocks written. Estimating how long the drive can last:

(256GiB * 800TLC_PEwrites/block - 2036298901TOT_LBA_WRITES * 4096bytes/block) / (2036298901TOT_LBA_WRITES * 4096bytes/block / 4831hours) / 24hours/day / 365.24224days/year = years

I get (256*2^30*800 - (2036298901*4096)) / (2036298901*4096/4831) / 24 / 365.24224 = 13.979 years.

(2036298901TOT_LBA_WRITES * 4096bytes/block) / 4831hours * 24hours/day = (2036298901*4096)/4831*24 = 41,435,795,314.407783 write bytes/day
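
The same arithmetic as a small python script, so you can plug in your own numbers (the two inputs come from smartctl's Total_LBAs_Written and Power_On_Hours attributes; I use 4096 bytes per LBA as above):

def years_left(ssd_bytes, pe_cycles_per_block, total_lbas_written,
               power_on_hours, bytes_per_lba=4096):
    bytes_written = total_lbas_written * bytes_per_lba
    max_write_bytes = ssd_bytes * pe_cycles_per_block
    write_bytes_per_hour = bytes_written / power_on_hours
    return (max_write_bytes - bytes_written) / write_bytes_per_hour / 24 / 365.24224

# my drive's numbers from above:
print(years_left(256 * 2**30, 800, 2_036_298_901, 4831))  # -> ~13.979 years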

articles
One of them contains the equation (roughly) I use for determining lifespan in years.