A person with way too many hobbies who still continues to learn new things.

  • 1 Post
  • 41 Comments
Joined 2 years ago
Cake day: June 7th, 2023


  • Shdwdrgn@mander.xyz to Linux@lemmy.ml · How long has your PC been on for? · 5 days ago

    22:57:20 up 70 days, 16:04, 21 users, load average: 1.10, 1.14, 1.02

    Honestly, if you were expecting a drive failure in three years, you probably have some other problem. The SSD in my desktop is clocking 7.3 years, and I never shut down my machines except to reboot. On my servers I have run used HDDs from eBay for up to ten years (only retired for upgrades). My NAS is currently running a mixture of used drives from eBay and some refurbs from Amazon, and I don’t anticipate any issues for at least a few more years.
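
    If you want to check how long a drive has actually been running, the power-on-hours counter in its SMART data is the usual source. A minimal sketch using smartctl, assuming the drive is /dev/sda (adjust the device for your system):

    # show SMART attributes; Power_On_Hours (attribute 9) is total runtime
    sudo smartctl -A /dev/sda | grep -i power_on
    # divide the hours by 8760 for a rough age in years of continuous use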


  • More drives also means higher power consumption, so you would need a larger battery backup.

    It also means more components prone to failure, which increases your chance of losing data. More drives mean more moving parts and electrical connections, including data and power cables and backplanes, plus more generated heat that you need to cool.

    I’d be more curious about how many failures you’re seeing that make you think smaller drives would be the better option. I have historically used old drives from eBay or manufacturer refurbs, and even the worst of those have been reliable enough that I only replace a drive every year or two. With RAID6 or raidz2 you should have enough redundancy to get through a rebuild without data loss (see the sketch below). I wouldn’t consider using a lot of little drives unless it was the only option I had, or someone gave them away for free.
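
    For reference, a raidz2 vdev stays intact through two simultaneous drive failures, which is what makes the rebuild window reasonably safe. A minimal sketch of creating such a pool, assuming six disks; the pool name “tank” and the by-id device names are placeholders:

    # create a six-disk raidz2 pool; any two drives can fail without data loss
    zpool create tank raidz2 \
        /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
        /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
        /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6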


  • I associate it with the fight I’ve had every single time I tried to use it. It’s never been a smooth process on any server I’ve attempted it on. Usually I either run into problems with a system not wanting to properly boot the memory stick even with a full UEFI image flashed to it, or, if I do get that to work, I go through the whole installation process only to find the system unbootable for whatever reason. Eventually I just give up and do a standard installation, because why should I have to work this hard to put an OS on a machine?
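
    In case it helps anyone hitting the same wall, a plain byte-for-byte copy of the image sidesteps most of the “smart” USB-writing tools. A hedged sketch, assuming the image is installer.iso and the stick is /dev/sdX (both placeholders; triple-check the device name, since this overwrites it):

    # write the installer image straight to the stick (destroys its contents)
    sudo dd if=installer.iso of=/dev/sdX bs=4M status=progress conv=fsync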


  • Shdwdrgn@mander.xyz to Selfhosted@lemmy.world · Help with ZFS Array · edited 3 months ago

    OP – if your array is in good condition (and it looks like it is), you have the option to replace the drives one at a time, but this will take some time (probably a period of days). The idea is to remove a disk from the pool by its old name, re-add it under the corrected name, wait for the pool to rebuild, then repeat the process with the next drive. Double-check, but I think this is the proper procedure…

    zpool offline poolname /dev/nvme1n1p1

    zpool replace poolname /dev/nvme1n1p1 /dev/disk/by-id/drivename

    Check zpool status to confirm when the drive is done rebuilding under the new name, then move on to the next drive. This is the process I use when replacing a failed drive in a pool, and since that one drive is technically in a failed state right now, the same process should work for transferring over to the safe names. Keep in mind that this will probably put a lot of strain on your drives since the contents have to be rebuilt (although there is a small chance ZFS will recognize the drive contents and just start working immediately), so be prepared in case a drive actually does fail during the process.
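
    To keep an eye on the rebuild, zpool status reports resilver progress; a small sketch using the same placeholder pool name as the commands above:

    # re-check until the resilver finishes and the new device shows ONLINE
    zpool status poolname
    # or refresh it automatically every 30 seconds
    watch -n 30 zpool status poolname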