An observation: other media players, e.g. WDTV Live and similar small media streamer boxes, TVs, Blu-ray players, etc., don't seem to have a problem with the compatibility of the DLNA servers they connect to (the biggest compatibility problems are with renderers...), nor do they have trouble with speed when interrogating the DLNA server.
Yeah. I think there might be some fundamental misunderstanding here about the two "types" of DLNA servers we're discussing. First of all, as Andrew explained:
Control points like MC must always download the full library before they can use it.
This is just fundamental to the way MC works. It does not use the tree structure provided by the DLNA server, but builds its own on the fly. That's how you can browse the files by their metadata rather than the fixed structure exposed by the server (just as MC works with its own Library). To do this, it must have access to the full database of all files on the server all at once.
So, to get all of the files from servers that support CDS:Search, you just query the entire database as your very first search when you connect to the remote server. The server responds with every file it knows about, you load the results into a native Library, and then the user is ready to go.
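For the curious, the "query everything" idea boils down to one paged ContentDirectory Search action with a wildcard criteria. This is just a sketch: the helper name and page size are mine, not from any real client, and actually posting the envelope (with the right SOAPAction header) to the service's control URL is left out.

```python
# Illustrative sketch: build the SOAP body for a ContentDirectory:1
# Search action whose criteria ("*") matches every object the server
# knows about. StartingIndex/RequestedCount page the results so a big
# library can be pulled in chunks. Helper name and page size are
# hypothetical; network transport is omitted.
SEARCH_TEMPLATE = """<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:Search xmlns:u="urn:schemas-upnp-org:service:ContentDirectory:1">
      <ContainerID>0</ContainerID>
      <SearchCriteria>*</SearchCriteria>
      <Filter>*</Filter>
      <StartingIndex>{start}</StartingIndex>
      <RequestedCount>{count}</RequestedCount>
      <SortCriteria></SortCriteria>
    </u:Search>
  </s:Body>
</s:Envelope>"""

def build_search_request(start: int, count: int = 500) -> str:
    """Return the SOAP envelope for one page of an everything-search."""
    return SEARCH_TEMPLATE.format(start=start, count=count)
```

The client loops, bumping StartingIndex by the page size, until the server reports no more matches.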
Servers that don't support searching and only support CDS:Browse can't give you a full file listing. They only give you their own fixed "category tree". When you first connect, you don't see a list of files; the server only tells you about the "top level" categories. It works essentially like an FTP server that doesn't support search. The server can't tell you (and your client doesn't yet know) anything "underneath" this top-level structure. You basically get a list of "folders". It isn't until you open one of these folders that you get the next "tier" in the tree (the next subfolder down), and so on and so forth. To see the files, you have to open the tree of folders until you reach a set of files.
If the device receiving this cannot create its own metadata structures for display (like MC can with its Views), then this is fine. It makes things "feel" faster with a dumb, underpowered server, because the server never has to spit out very much data at any one time, and the client can have very little RAM (it only has to display the contents of the current "folder", and doesn't have to care about anything it saw before). But there's no state either, so going "back up" a tier still requires another client-request/server-response round trip. Your client only ever gets information on, and the server only ever has to keep in memory, the contents of one "folder" at a time.
Browsing into each new nested category is slow (dramatically so compared to browsing between categories in MC), since, like opening a folder on an FTP server, the client has to ask the server for the new folder and then wait for the server to respond with the new contents listing. But the server never has to send much at any one time, so the actual transfers are much smaller than a full file set.
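To make the per-folder cost concrete, here's what one of those round trips looks like on the wire. Again a sketch: the helper name is made up, and posting to the control URL is omitted. The action name and arguments come from the ContentDirectory:1 service spec.

```python
# Illustrative sketch: the SOAP body for a single CDS:Browse round trip.
# BrowseFlag=BrowseDirectChildren returns only the immediate children of
# one container, so every folder you open costs one request/response
# exchange like this. RequestedCount=0 conventionally means "all
# children" in ContentDirectory. Transport is omitted.
BROWSE_TEMPLATE = """<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:Browse xmlns:u="urn:schemas-upnp-org:service:ContentDirectory:1">
      <ObjectID>{object_id}</ObjectID>
      <BrowseFlag>BrowseDirectChildren</BrowseFlag>
      <Filter>*</Filter>
      <StartingIndex>0</StartingIndex>
      <RequestedCount>0</RequestedCount>
      <SortCriteria></SortCriteria>
    </u:Browse>
  </s:Body>
</s:Envelope>"""

def build_browse_request(object_id: str) -> str:
    """Return the SOAP envelope for one folder-open round trip."""
    return BROWSE_TEMPLATE.format(object_id=object_id)
```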
The only way to work around these kinds of "dumb" servers, for a client like MC that needs the full file listing, is to basically "brute force" it. When the server responds with a list of categories, you have to start at the top and open each one, one by one, wait for the responses, then drill down to the next level, and the next, and the next, until you get to the files themselves. Then add the URIs for those files to your "cache list", go back up one node in the tree, and follow the next possible folder, and so on and so forth until you've checked every possible folder exposed by the server. This is more difficult and slow (more so than building a caching FTP client) because the server isn't showing you folders on disk, where each file is in only one place, but might be showing you the equivalent of different Views in MC, all containing the same files with different sortings or sets of categories or whatnot. So you waste a bunch of time fetching files you already got, but you can't know that until you open each item in the tree, one at a time, and wait for the server to spit out its list.
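The walk itself can be sketched in a few lines. Everything here is a toy model I made up for illustration: fake_browse() stands in for a real Browse round trip, and the tiny tree deliberately exposes the same file under two "views" to show why the walker has to deduplicate by URI.

```python
from collections import deque

# Hypothetical in-memory model of a browse-only server: container IDs
# map to lists of (kind, value) children. The same track appears under
# two different category views, as real servers often do.
FAKE_TREE = {
    "0":       [("container", "artists"), ("container", "genres")],
    "artists": [("container", "artist1")],
    "genres":  [("container", "rock")],
    "artist1": [("item", "http://server/track1.mp3")],
    "rock":    [("item", "http://server/track1.mp3")],  # same file again
}

def fake_browse(object_id):
    """Stand-in for one Browse(BrowseDirectChildren) round trip."""
    return FAKE_TREE.get(object_id, [])

def walk_library(root="0"):
    """Visit every container once; collect each file URI exactly once."""
    uris = set()                 # set membership deduplicates repeats
    seen = {root}                # containers already queued
    queue = deque([root])
    while queue:
        for kind, value in fake_browse(queue.popleft()):
            if kind == "item":
                uris.add(value)  # duplicate URIs collapse here
            elif value not in seen:
                seen.add(value)
                queue.append(value)
    return uris
```

Against a real server, every fake_browse() call is a network round trip, which is exactly why this takes so long even though the algorithm is trivial.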
There's no nice way to say it. That's clunky. Still, though, WMP does it (of course, they have a huge team of engineers and literally billions of dollars). And walking tree nodes like this is a well-understood computer science problem.
Writing the tree-walking algorithm probably isn't too bad, but it might be very hard to do well, and that's a lot of work to support some dumb servers. I don't really know how popular they are out there in the real world. It is disappointing that Plex only supports CDS:Browse, but they're probably only targeting "dumb" endpoints.
Still, though, if they actually do have a decent database underneath, architecturally it would be a heck of a lot easier (and way less clunky) for them to just expose the searchable database. Plex almost certainly does have a database underneath (I'd guess, knowing nothing about their architecture, MySQL or some variant), so they might be able to do it if users ask for it. They might not care to support those requests, though (or might think it enables competition, and avoid it on purpose).
The last time I did measurements, which was several years ago, it took WMP about 3 minutes per thousand tracks to load a library from Whitebear using CDS:Search, and about 6-8 minutes per thousand tracks using CDS:Browse. It would probably be faster on newer machines running Gigabit Ethernet.
Hrmmm...
That's way worse than I'd hoped. And I imagine you did... well, a fairly good job with Whitebear in the optimization department. I imagine other CDS:Browse servers (especially those running on underpowered hardware like consumer NAS appliances and whatnot) are probably worse, perhaps by an order of magnitude. Some of them probably even crash, or are rate limited, when you try to walk the tree. Even if they aren't intentionally request-rate gated, with the cruddy CPUs in them they're probably limited more by CPU than by a 100Mbps network. I wonder how much even Gigabit or faster would actually help.
So, cruddy. Giving them the benefit of the doubt on modern improvements, saying speeds have doubled your best case and calling it 3 minutes per 1000 files, it would take around 7 hours to load my 142k-file library that way. That's obviously way out of the question.
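The arithmetic behind that estimate, as a quick sanity check (the 3-minutes-per-1000 figure is the optimistic, sped-up number from this thread):

```python
# Sanity check on the estimate above: 3 minutes per 1000 tracks
# against a 142,000-file library.
tracks = 142_000
minutes_per_1000 = 3
total_minutes = tracks / 1000 * minutes_per_1000   # 426 minutes
print(f"{total_minutes / 60:.1f} hours")           # → 7.1 hours
```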