It's complicated and depends on the bitrate of your source content as well as your display hardware. To simplify matters I would just make sure that my hardware is capable of hardware decoding everything. You can achieve this with an Intel iGPU using QuickSync, and you can check generational format support here. I'm personally holding out for AV1 hardware decode for additional futureproofing, which means I am waiting for the Tiger Lake NUCs. If all of your rips are 10-bit HEVC, then you can get away with hardware decoding on anything newer than Apollo Lake.
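If you want to confirm what your rips actually are before picking hardware, a quick scan with ffprobe does the job. This is just a rough sketch: it assumes ffprobe (ships with FFmpeg) is on your PATH, and the library path and extensions are placeholders you'd swap for your own setup.

```python
#!/usr/bin/env python3
# Sketch: report codec / profile / bit depth for every video file in a library,
# so you know exactly which formats your hardware needs to decode.
# Assumes ffprobe (part of FFmpeg) is installed and on PATH.
import json
import subprocess
from pathlib import Path

LIBRARY = Path("/mnt/media/movies")   # placeholder -- point this at your rips
EXTENSIONS = {".mkv", ".mp4", ".m2ts"}

def probe(path: Path) -> dict:
    """Return the first video stream's properties as reported by ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_streams", "-select_streams", "v:0", str(path)],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["streams"][0]

for f in sorted(LIBRARY.rglob("*")):
    if f.suffix.lower() not in EXTENSIONS:
        continue
    s = probe(f)
    # pix_fmt like yuv420p10le means 10-bit; profile should read "Main 10" for HDR HEVC rips
    print(f"{f.name}: codec={s.get('codec_name')} "
          f"profile={s.get('profile')} pix_fmt={s.get('pix_fmt')}")
```

If everything comes back hevc / Main 10, the QuickSync generation table tells you exactly how old an iGPU you can get away with.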
If you are using an older CPU you can always pair it with a GPU that can hardware-decode the additional formats. For Nvidia this is called PureVideo (HEVC 10-bit requires Feature Set F or later) and AMD's equivalent is UVD (HEVC 10-bit requires UVD 6.3 or later).
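Whichever GPU you land on, the simplest sanity check is to let FFmpeg try a hardware decode of one of your actual files. A minimal sketch, assuming ffmpeg is installed; "sample.mkv" and the hwaccel name are placeholders (run `ffmpeg -hwaccels` to see what your platform exposes):

```python
#!/usr/bin/env python3
# Sketch: smoke-test hardware decoding of a sample rip with FFmpeg.
# Assumes ffmpeg is installed; "sample.mkv" is a placeholder file name.
import subprocess

SAMPLE = "sample.mkv"   # placeholder -- use one of your actual rips
HWACCEL = "cuda"        # e.g. "cuda" (Nvidia), "vaapi" (Intel/AMD on Linux),
                        # "d3d11va" (Windows), "qsv" (QuickSync)

# Decode the whole stream and discard the frames (-f null -).
# Note: FFmpeg can silently fall back to software decoding if the hwaccel
# setup fails, so check stderr (and GPU utilization) rather than just the
# exit code.
result = subprocess.run(
    ["ffmpeg", "-v", "warning", "-hwaccel", HWACCEL,
     "-i", SAMPLE, "-f", "null", "-"],
    capture_output=True, text=True,
)

if result.returncode == 0 and not result.stderr.strip():
    print(f"{SAMPLE}: decode via {HWACCEL} completed with no warnings")
else:
    print(f"{SAMPLE}: issues decoding via {HWACCEL}:\n{result.stderr}")
```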
The other concern is your display. Is it a capable HDR display (local dimming, high contrast, etc.)? If not, the display will perform its own tonemapping, which is typically quite crappy. Luckily you can perform tonemapping in madVR for much better results, although it requires some serious GPU horsepower (at least the equivalent of a GTX 1650, depending on the source content) and some fiddling.
This all assumes that your content is 10-bit HEVC (Main 10 profile) rips. If it is not, there are many other considerations around software decoding and upsampling.