Rhinotech is offering a group buy of Seagate Exos X14 14TB SAS3 HDDs.
Pricing is $125 each. International shipping is available.
Drives are seller-refurbished. RTG guarantees 100% health with no SMART errors and backs each drive with a 30-day warranty.
Before ordering, verify that your PayPal account has the correct shipping address.
After you fill out this form, you will receive a PayPal invoice from RhinoTechnologyGroup.
If you have questions, please contact RTG via email at email@example.com.
DO NOT CONTACT RTG THROUGH EBAY.
Submit an order here: https://forms.gle/bcjbuEScEuFuEfEY8
Hey not too shabby.
Worth reiterating that these are SAS, not SATA.
Any ideas as to when the close date will be? I am totally interested but won’t have funds til the end of the month
Got mine in about 3 days. Packed immaculately.
While supplies last is the usual case. I wouldn’t worry too much about this disappearing any time soon.
Have you had a chance to look at the SMART info for your drives?
I’m currently in the middle of rebuilding one drive onto one of these new Exos drives, but the accumulated power-on time for this Rhino drive is 27131 mins.
Edit: My mistake, it was 27131 hours.
I bought three of these. I suspected the previous comment might be a bit mistaken: the on time is shown as hours:minutes. My three drives show 27070:00, which is 1,128 days, or a little over three years. Everything seems fine so far. I’m starting my badblocks run over all three now.
smartctl -a /dev/sde
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.0-53-generic] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Product: ST14000NM0288 E
User Capacity: 13,902,809,137,152 bytes [13.9 TB]
Logical block size: 4096 bytes
LU is fully provisioned
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Logical Unit id: 0x5000c500a74282db
Serial number: minehere
Device type: disk
Transport protocol: SAS (SPL-3)
Local Time is: Fri Nov 25 17:04:25 2022 EST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Temperature Warning: Enabled
=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK
Grown defects during certification <not available>
Total blocks reassigned during format <not available>
Total new blocks reassigned <not available>
Power on minutes since format <not available>
Current Drive Temperature: 26 C
Drive Trip Temperature: 65 C
Accumulated power on time, hours:minutes 27070:00
Elements in grown defect list: 0
Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/     errors   algorithm      processed    uncorrected
           fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
read:          0        0         0         0          0     588165.330           0
write:         0  18446744073709551615  18446744073709551615         0          0     463379.080           0
verify:        0        0         0         0          0      26587.494           0
Non-medium error count: 1
SMART Self-test log
Num  Test              Status                     segment  LifeTime  LBA_first_err [SK ASC ASQ]
     Description                                  number   (hours)
# 1  Background short  Aborted (by user command)        -         6              - [-   -   -]
Long (extended) Self-test duration: 65535 seconds [1092.2 minutes]
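For anyone wanting to replicate the checks above, here is a minimal sketch: converting the reported power-on time into days/years, plus the destructive badblocks pass mentioned earlier. The device name /dev/sde is an example; badblocks -w wipes the drive, so only run it on disks with no data.

```shell
# Convert the accumulated power-on time (27070 hours) to days and years.
awk 'BEGIN { h = 27070; printf "%.0f days (~%.1f years)\n", h/24, h/24/365 }'

# Destructive read-write surface scan; -b 4096 matches the 4K logical
# block size reported by smartctl above. Uncomment only on an empty drive.
# badblocks -b 4096 -wsv /dev/sde
```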
As someone who’s new to this, your program is showing 3 years of constant use? Did you have any bad blocks?
All three disks do show three years of use, but have zero bad blocks. So, they should be good to use. I just wanted to make sure that others knew that these aren’t quite near-“new” disks.
Awesome, thanks. I placed a big order (big for me, anyway): 20 drives. I’ll be using 15 of them for my ZFS pool.
Any idea how long these disks should last? I read they have a 2,500,000-hour MTBF, but as I understand it, that’s not their maximum working life.
That’s really hard to say. I usually plan (hope) that I can get at least 5 years out of a disk. The disks I swapped these in for were 7 year old 4TB disks that were all still working great. When I do eventually retire them, it’s normally because I’m moving to larger disks, not because they have bad blocks. That being said, you will want to monitor your disks for SMART errors so that you can proactively replace disks if they start having errors.
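To put that MTBF spec in perspective: MTBF describes a fleet-wide failure rate during the drive’s design life, not how long any one drive lasts. A rough back-of-the-envelope conversion to an annualized failure rate, assuming the quoted 2,500,000-hour figure:

```shell
# AFR = 1 - exp(-hours_per_year / MTBF); 8766 h is roughly one year.
awk 'BEGIN { mtbf = 2500000; afr = (1 - exp(-8766/mtbf)) * 100;
             printf "AFR ~= %.2f%% per year\n", afr }'
```

In other words, on the order of 1 drive in ~285 failing per year across a large population, which is why monitoring plus redundancy matters more than the MTBF number itself.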
You carried out this test for 45.5 days?!
No, it sped up; I had just started it at that point. Running a full badblocks pass on disks this big can take a few days, though.
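For a rough sense of why it takes days: badblocks -w makes four write passes and four read passes over the whole disk. Assuming a ~250 MB/s average throughput on a 14 TB drive (both numbers are ballpark assumptions, not measured figures):

```shell
# 8 full passes (4 write + 4 read) over 14 TB at ~250 MB/s average.
awk 'BEGIN { bytes = 14e12; rate = 250e6; passes = 8;
             printf "~%.1f days\n", passes * bytes / rate / 86400 }'
```

Early ETA estimates can look much worse than this because badblocks extrapolates from the slow start of the run.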
I’m having a heck of a time getting these to be recognized in my R510! They don’t even show up in the SAS controller menu at boot. I guess it could be the 3.3V pin, but I’ve put in loads of shucked drives without ever having to make any adjustments.
Yeah, it’s probably the 3.3V pin.
You are spot on! I had to put Kapton tape over the 3.3V pin to get these to show up on my HBAs.