HBA vs. RAID Controller card - 833

A long chat about HBA vs. RAID controllers. Trying to explain how this works... there is always someone way smarter than me in the comments, so be sure to check that out, or be that someone :-)

[Affiliate Links]
________________________________________________________________
Even just $1 a month comes out to the same as binge-watching like 400+ of my videos every month.

My PlayHouse is a channel where I show what I am working on. I have this house, it is 168 square meters / 1,808.3 ft², and it is full of half-finished projects.

I love working with heating, insulation, servers, computers, datacenters, green power, alternative energy, solar, wind, and more. It all costs, but I'm trying to get the most out of my money and my time.
Comments

I rewatched this because I wanted to review this topic. Wow, you are such an incredible teacher and educator. Glad I found your channel years ago. You've helped so many people become more knowledgeable about technology.

PoeLemic

Morten, you are just such a great resource for knowledge - much appreciated that you spend so much time sharing it. Keep up the good work!

WoTpro

Well, I think you covered almost everything useful to any potential newcomers. Very nice job!

thejeffchen

Nice explanations! I do think you missed one thing, though. With HBAs and ZFS, you won't get corrupt files. You might lose files, but you won't get corrupt files, thanks to the copy-on-write mechanism. That is something that's quite important to me :)

BartKuipersdotcom

Thank you for making this video! It really helped me out a lot; I was definitely confused about the difference before this video. Good thing I bought the right card for what I need, RAID 5 (it's an LSI RAID card with a battery, got it for $8, and I also updated its firmware). I just got my 4x 2 TB SAS drives at $15 each in the mail today. I can't wait to see what kind of CrystalDiskMark benchmark I get. I need a better hard drive mounting system; I'm trying to jam all this into a T3500 workstation.

dogsblue

This was a good informational video - doesn't really matter that you spelled "performance" wrong!
Keep up the good work!

tchambers

I'm researching buying a refurbished server and converting it into a NAS (using FreeNAS/TrueNAS). This was immensely helpful, since TrueNAS needs an HBA and I didn't understand the differences.
Thanks a bunch

DaLoler

Yay. It's the weekend and there is a new video from you!

strausstechnik

18:05 So, a new hard drive lights your OS on fire with an HBA. Gotcha!

joeldoxtator

HBA cards are essentially just extra ports, and the drives are seen by the OS as individual drives. RAID cards are software and hardware that (as you said) present the drives to the OS in certain PROPRIETARY configurations.


For modern operating systems and for normal uses, an HBA is less headache and pain for all involved. Very simple and straightforward. RAID cards have a huge problem: if your RAID card with a certain firmware, hardware revision, and driver version FAILS, you have to get the exact same RAID card with the exact same firmware, hardware revision, and driver version.


Now if an HBA fails, you can pretty much use any HBA, as long as all the drives the OS is expecting to be there are there. An example is setting up a software RAID array with the drives connected to the HBA. Even though the HBA has nothing to do with that RAID array, the OS is only looking for those drives, so as long as any HBA can get those drives to show up for the OS, you are golden.


You touched on the fact that HBA setups put a performance hit on system resources because of the overhead required. GREAT VIDEO

InconsistentManner
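
As a rough illustration of the interchangeability point in the comment above, here is a minimal software-RAID sketch, assuming a Linux host with mdadm; the /dev/disk/by-id names are hypothetical placeholders:

    # Create a RAID 5 array from four drives presented individually by the HBA.
    # Referencing the drives by ID keeps the array independent of which HBA
    # (or which port) they happen to sit behind.
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/disk/by-id/ata-DRIVE1 /dev/disk/by-id/ata-DRIVE2 \
        /dev/disk/by-id/ata-DRIVE3 /dev/disk/by-id/ata-DRIVE4

    # After swapping in any other HBA, the OS only needs to see the same drives;
    # the array metadata lives on the drives themselves and reassembles with:
    mdadm --assemble --scan

Because the RAID metadata is stored on the disks rather than on the controller, any HBA that exposes the drives will do.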

Hello Sir, I have watched and enjoyed your videos for a while, and I have a question for you. I have an x3650 M2 Type 7947, my first rackmount server to play with. I believe I have to upgrade the default RAID card in order to support 2.5" 2 TB+ drives inside. I would also like to add the additional backplane to make the server run 12 drives. Can you please suggest a few internal RAID or HBA options?

sheldoncyrus

For FreeNAS, are there any suggestions for HBA cards? Used ones for under $100, maybe lower? I saw Dell cards for about 50 bucks, but I'm afraid they could be incompatible.

MitgliedT

Spinning rust is dead... long live SSDs :) I hate RAID controller cards and prefer HBAs. The video was very fun to watch this morning with my coffee :) Your new title is Dr. Whiteboard :)

UnkyjoesPlayhouse

I picked up a couple of HP-branded SAS drives for one of my servers, but it turns out they are from an HP 3PAR storage box, which uses a custom format of 520 bytes per block, making them useless inside a server attached to a SAS HBA or RAID card. The only option to get them working will be to attempt a low-level format to change them to 512 bytes per block. Apparently EMC does similar things...

andrewnoonan
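
For anyone attempting the same rescue, a hedged sketch of the usual low-level reformat on Linux, assuming the lsscsi and sg3_utils packages; /dev/sg3 is only an example device, and the format wipes the drive and can run for many hours:

    # Identify the SCSI generic device for the drive.
    lsscsi -g

    # Confirm the current logical block size (shows 520 bytes on these drives).
    sg_readcap --long /dev/sg3

    # Low-level format back to standard 512-byte blocks (destroys all data).
    sg_format --format --size=512 /dev/sg3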

From your video I understand that the BBU on the RAID card is there to protect the RAID card's cache RAM contents when the RAID array crashes and becomes non-responsive, which implies that RAID disk arrays crash regularly - how regularly?

So you are taking away one point of failure by being able to add disk redundancy via the RAID card, but adding a new point of failure in the form of a RAID hang/lockup - do you mean that can happen independently of a random computer hang?

Which makes me realise an uninterruptible power supply alone is not enough if you have a RAID card RAM cache.

Approximately how much time does a hardware RAID card add to a computer's cold-boot time - seconds? Or longer?

rianders

Hi Morten,
thanks for your video, it's been really helpful!!
I'm building a small NAS for home use only with Unraid, in an HP MicroServer Gen8 I got for free!
I need all drives presented individually, as with an HBA. Do you think buying a card such as an HP H220 or HP H240 would be better than getting an HP P420 and forcing it into HBA mode?
With Unraid it's a problem if it cannot get the correct temperature of all drives, so I think an HBA card is better for this use than a RAID card, but I might be wrong as I'm not an expert!
Thanks!

endystrike

I think some of your examples might be incorrect in some cases. HW RAID cards can have more problems with data consistency because HW RAID is not application/data aware and in a way "tricks" the application: it acknowledges to the application that data is safe when it *might* not be... in your example, the cache has a time limit and is not safe forever. Additionally, in clustered application environments, where your workload might be distributed, if a node with a failing HW RAID card tells the application it has completed a data transaction but the data is stuck in the RAID cache and hasn't been flushed to disk yet, the application will continue to think the data was committed, and the workload may migrate to another node in the cluster assuming, falsely, that the data was committed when it was not. From an application standpoint, part of what ZFS does so well is that it lets you handle the low-level storage in a "data aware" way. A very simple example of this is a data consistency check. HW RAID does its checks across every single stripe/logical block, even empty blocks, because it doesn't know which ones are actually in use. With ZFS, if you do a data consistency check ("scrub" in ZFS terms) on an "empty" data pool, it will finish in less than a second because it knows there's no "real data." I know that is somewhat of an impractical example (empty data sets are not very useful), but it helps as a mental exercise to understand how powerful "data aware" storage technology can be.

On the matter of performance, there are a lot of different variables to consider: the number of I/O transactions, the size of the I/O transactions, latency, the number of times data is copied while moving from source to final destination, etc. Performance can be a complicated subject, so I'm not saying my tiny comment here is comprehensive, BUT think about this one aspect: a typical HW RAID controller has a dual-core, maybe quad-core (?) processor running at 1-2 GHz (although it does have the advantage of being specifically designed for storage I/O), vs. typical server CPU resources of 10+ cores (a Westmere-EP dual socket was 12 cores) to >50 cores (a modern dual-socket system), often running at 2-3 GHz. So it is possible, at least with modern multi-core servers, that your CPU resources are much more abundant than what you will have in your HW RAID controller. That means software technologies like ZFS (with an HBA card) that use the CPU might be able to do a lot more I/O (and ZFS does do a lot more: checksums on all data, compression, etc.). But you are right that software storage like ZFS does take away some CPU resources from the applications. HW RAID does have some advantages: because it responds from the controller cache, transaction latency can be much lower than from HDDs, which can be very important for some applications. But if latency is important, you can solve it with SSDs plus an HBA these days.

Another consideration is features. When you buy a HW RAID controller, it comes with a fixed set of features, and the manufacturers can really only put so many features into it because all of that has to be programmed into the firmware, which is relatively tiny. Software technologies like ZFS have an evolving set of features. If a new feature is added to ZFS, I can keep the same hardware and upgrade ZFS to get it. And the feature possibilities are "endless" since software has access to all CPU cores and all the RAM in the system. For example, ZFS by default uses up to 50% of your RAM for the ARC (cache)... compare the 1 GB or 2 GB of cache available on HW RAID to typical server system RAM (32 GB to 1-2 TB).

Also, HBAs have been around for a long time. Even way back in the days when I was a SunOS or Solaris system admin, I worked on servers with SCSI drives that used software RAID technologies like DiskSuite, which had to use SCSI HBAs. But that was a time before multi-core servers; some systems had multiple sockets, but CPU resources were far less plentiful. So back then, having HW RAID really did seem superior to software RAID technologies that used the CPU, and this was also *before* ZFS came out. But that was a long time ago, and since then, CPU/RAM resources in servers have multiplied many times while HW RAID has not evolved as fast.

As always, love your channel... thanks for making a video about this topic!

ArtofServer
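
To make the "data aware" points above a bit more concrete, a small ZFS sketch, assuming an OpenZFS-on-Linux system; the pool name "tank" and the disk IDs are made up:

    # Build a redundant pool directly on drives behind an HBA; ZFS owns the redundancy.
    zpool create tank raidz2 /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
        /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4

    # Features live in software, so they can be enabled on existing hardware.
    zfs set compression=lz4 tank

    # A scrub only checksums blocks that are actually in use, which is why it
    # finishes almost instantly on a nearly empty pool.
    zpool scrub tank
    zpool status tank

    # The ARC (RAM cache) can be capped if the default of roughly half the RAM
    # is too much, e.g. 8 GiB via /etc/modprobe.d/zfs.conf:
    #   options zfs zfs_arc_max=8589934592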

Happy New Year, really enjoyed your video. I'm currently working on a Supermicro X8DTU-F, a 4-bay 1U server, and need an HBA or RAID card (not sure which) to hold the OS. I searched your eBay store but I'm not sure what I need. Any advice would be appreciated, thanks.

jorona

So if I had a mission-critical project (absolutely cannot lose any data or have any corruption), speed was necessary, redundancy is preferred, and the drives are planned to be kept in offline storage after being filled with archival data, would I be better off with an HBA or a RAID controller? I would like to use a computer consisting of an AMD 2400G CPU, 16 GB of DDR4-3000 RAM, and up to 8 - 10 TB HDDs, with a redundant PSU and an M.2 SSD for the OS, in a short-depth 4U server chassis with a UPS in the rack and a 10-gig NIC. I have not purchased anything yet, so suggestions are most definitely welcome, and if you do suggest something different, please give reasons. Thank you all so much!

deathkiller

About the security comparison: please read up on journaling file systems again :) In ZFS specifically there's the ZIL. Also: how many faulty batteries have you replaced on "hardware" controllers? :)

salat
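
As a footnote to the ZIL remark, a hedged sketch of how ZFS covers the same ground as a RAID card's battery-backed cache, assuming OpenZFS; the pool and device names are hypothetical:

    # Synchronous writes are recorded in the ZIL before they are acknowledged,
    # so after a power loss they are replayed rather than lost or corrupted.
    zfs set sync=standard tank

    # Optionally place the ZIL on a fast, power-loss-protected SSD (a SLOG),
    # which plays roughly the role of a RAID controller's battery-backed cache.
    zpool add tank log /dev/disk/by-id/nvme-FAST_SSD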