For sure, if it was for personal use I'd consider it. Maybe if the work NAS goes well I'll take what I learn and do a custom home media server.
I am using a FreeNAS box now. There was a learning curve, and it is still not 100% bug free. While there are many positive qualities of my setup, for commercial application, I'd look elsewhere for a solution with support.
Well, when talking about 'support', the nice thing with ZFS is that you can just shove it into a box. You're not dependent on a vendor to access your data; if a QNAP or Synology box breaks, you're reliant on them to get to your data.
With ZFS, something breaks, put the drives in something else. Boot up a USB stick, hell, put them into Windows and boot up a ZFS-supporting VM.
But then there's the whole building it and stuff... so yeah, Synology.
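For anyone curious what that actually looks like, here is a minimal sketch of importing a pool on a rescue box, assuming the OpenZFS tools are installed and the old drives are attached; the pool name 'tank' is only a placeholder for whatever the scan reports.

```
#!/usr/bin/env python3
"""Rough sketch: bring a ZFS pool back up after moving its drives to
another box. Assumes the rescue system already has the OpenZFS tools
(zpool/zfs) installed and the old drives attached; the pool name 'tank'
is only a placeholder -- use whatever the scan below reports."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, text=True, capture_output=True).stdout

# 1. Scan attached disks for importable pools (read-only, changes nothing).
print(run(["zpool", "import"]))

# 2. Import the pool; -f is needed because it was last used on a different
#    host and was never cleanly exported there.
run(["zpool", "import", "-f", "tank"])

# 3. Check pool health and list the datasets that came back with it.
print(run(["zpool", "status", "tank"]))
print(run(["zfs", "list", "-r", "tank"]))
```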
That's just not true. There is no vendor lock-in; I already addressed this earlier: https://hardforum.com/threads/nas-for-small-business.1979773/#post-1044177514
If anything, recovering from one of those appliances is arguably easier than ZFS due to the larger availability of tools for Linux, on account of it being far more widely used. Synology even provides a guide: https://www.synology.com/en-us/know...I_recover_data_from_my_DiskStation_using_a_PC
And really this is only a question of budget: any kind of downtime costs money, and depending on how fast you need to be back online, having a vendor to point at, yell at, and throw money at can actually be the cheapest and easiest solution in the long run.
Synology doesn't have any proprietary OS. It's just Linux, and LVM/mdadm are the same as anywhere else. You can be back up and running a few minutes from a live CD/USB image.
But to those of us that don't breathe linux/unix, it might as well be proprietary.
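For reference, the Synology guide linked above boils down to a few commands from any Linux live environment. A rough sketch, assuming mdadm and lvm2 are available; the vg1000/lv names are the usual defaults on these units, not a given.

```
#!/usr/bin/env python3
"""Rough sketch of reading a Synology volume from a plain Linux PC,
following the general shape of the guide linked above: assemble the md
arrays, activate LVM, mount read-only. Assumes mdadm and lvm2 are
installed and you're running as root with the NAS drives attached.
The vg1000/lv names are the usual defaults but may differ -- check the
lvs output before mounting."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Assemble every md RAID array found on the attached disks.
run(["mdadm", "-Asf"])

# 2. Activate the LVM volume group(s) sitting on top of those arrays.
run(["vgchange", "-ay"])

# 3. See what logical volumes actually exist before guessing at names.
run(["lvs"])

# 4. Mount the data volume read-only so nothing on it gets touched.
run(["mkdir", "-p", "/mnt/syno"])
run(["mount", "-o", "ro", "/dev/vg1000/lv", "/mnt/syno"])
```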
If your data is important, I would only buy drives with a 5yr warranty--period. I've found that the shorter-warranty drives are lesser drives.
Also, I wouldn't run raid5 unless it was necessary, since two simultaneous drive failures (in an array of any size) will lose all your data. I really like the idea of running 8-10TB drives in raid1--just be sure to get 5yr-warranty enterprise-class drives, or again you're in the same boat: a simultaneous two-drive failure means 100% data loss. Since the external versions of these are cheap too, I'd get 2+ of those and use them as off-site backups in rotation.
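To make the two-failure point concrete, here is a quick illustrative count (not from the thread, just basic combinatorics): in a single raid5 set, any two simultaneous failures are fatal, while with the same drives split into independent raid1 pairs, only losing both halves of one mirror is.

```
#!/usr/bin/env python3
"""Illustrative counting only: how many 2-drive failure combinations are
fatal for one big raid5 set vs. the same drives arranged as independent
raid1 pairs. Ignores rebuild windows, URE rates, and everything else."""
from itertools import combinations

def fatal_raid5(n):
    # raid5 tolerates exactly one failure, so ANY pair of failures is fatal.
    return sum(1 for _ in combinations(range(n), 2))

def fatal_mirror_pairs(n):
    # Drives 0+1 form mirror 0, drives 2+3 form mirror 1, and so on.
    # A pair of failures is fatal only if both drives sit in the same mirror.
    return sum(1 for a, b in combinations(range(n), 2) if a // 2 == b // 2)

for n in (4, 6, 8):
    total = n * (n - 1) // 2
    print(f"{n} drives: {total} possible double failures -> "
          f"raid5 loses data in {fatal_raid5(n)}, "
          f"raid1 pairs lose data in {fatal_mirror_pairs(n)}")
```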
I like to mix and match manufacturers too. I'll get an HGST, a WD, and a Seagate product. That way, you don't have to worry about batches of bad drives or design issues.
LACP and unmanaged switches don't mix.
MPIO works in place of LACP for unmanaged switches if the OS supports it (not in this use case, though).
It's interesting that you've seen a lot of failures of enterprise drives. It all depends on a lot of factors--temperature, use, vibration, etc. all play a part. There's some really great data out there from Backblaze et al. on which drives are the most solid and which are not in their data centers, and that can really help you steer clear of drives with a higher failure rate. Although if your luck is bad, you could be that one unlucky person who gets the 0.01% failure drive.
I don't think it's a bad thing to keep the same drives in an array--in fact I think that's the best thing to do when striping. (Back in the days of raid2, that was a must, as all the spindles were actually lock-synced.) But for mirroring (raid1), since less complicated mechanisms are at work at the raid level (no parity calculation, no stripe calculation, etc.), drives from different manufacturers don't pose as much risk of raid problems.
Well. I wouldn't call it a lot in absolute terms, but as a percentage it's a thing. So, I have probably 32 drives performing storage in total. If even one drive fails, that's an unacceptably high failure rate.
In the last 7 years, I've lost 3 drives. One was DOA, so that's a thing. One got hot because I made a mistake; I'll take the blame on that one. The other died earnestly. Still, that's 3 of 32, one through my own fault.
Edit: the one that got hot died two months later.
Just how much storage is that?
Not enough. Never enough.
It's currently approximately 104 TB.
Wowzers... Consider renting some of that storage space??? Siacoin, for instance...
I hadn't actually considered it. Strangely enough.
Hmmm... based on that metric, I've also seen some high failures on a percentage basis--one WDC RE out of a batch of 6. It was replaced under warranty, and then those drives were cycled out to some HGST, which are about to be cycled out to another set of HGST and maybe some Seagates (haven't decided on this one yet, since the HGST seem to be quite nice).
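For a rough sense of the percentages being compared here, a back-of-envelope calculation only: these are tiny samples, the batch-of-6 time span isn't stated, and one of the three losses above was self-inflicted.

```
#!/usr/bin/env python3
"""Back-of-envelope percentages from the numbers quoted above. Tiny
samples -- treat these as anecdotes, not failure-rate statistics."""

samples = [
    ("32-drive fleet, 3 losses over 7 years", 3, 32, 7),
    ("batch of 6 WDC RE, 1 warranty replacement (time span not stated)", 1, 6, None),
]

for label, failed, total, years in samples:
    cumulative = failed / total
    line = f"{label}: {cumulative:.1%} of drives lost"
    if years:
        line += f", roughly {cumulative / years:.1%} per year if spread evenly"
    print(line)
```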
Started setting everything up & may have an issue with my ebay-sourced S2500. When I plug it in, the fans spin up and all port LEDs flash, but I get nothing from the power, status, or stack LEDs, or the LCD screen. The fans just spin as long as it's plugged in. Does that mean it's DOA? Or is there anything I can try to see if it's responsive?