I've been trying to reproduce this and check for memory leaks, but I could not find anything substantial, or even any evidence of misbehavior.
I did find a few tiny memory leaks, which I fixed, but they were all in the range of kilobytes, not megabytes, let alone gigabytes.
Syncing with the server does increase memory usage, but only as a one-time cost.
When you sync the library back to the server, MC has to determine a "delta" of what actually changed. We don't store a change date for every single field (that would be insane), so instead we compare our local library to the remote library. To do this, MC has to load the entire library into memory. Outside of syncing, that normally never happens unless you have manually viewed every field of every file in your library. On a very large database, this can be a fair amount of memory.
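To illustrate the idea (this is a hypothetical sketch, not MC's actual code), computing such a delta means walking every field of every file and comparing the local value against the remote one, which only works if both copies are fully resident in memory:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// One library item: field name -> field value, keyed by a file identifier.
using Fields  = std::unordered_map<std::string, std::string>;
using Library = std::unordered_map<std::string, Fields>;

struct FieldChange {
    std::string fileKey;
    std::string fieldName;
    std::string newValue;
};

// Walk every field of every local file and record the ones that differ
// from the remote copy. Both libraries must be fully loaded for this.
std::vector<FieldChange> ComputeDelta(const Library& local, const Library& remote)
{
    std::vector<FieldChange> delta;
    for (const auto& [fileKey, localFields] : local) {
        const auto remoteIt = remote.find(fileKey);
        for (const auto& [fieldName, localValue] : localFields) {
            if (remoteIt == remote.end()) {
                // File does not exist remotely: every field counts as changed.
                delta.push_back({fileKey, fieldName, localValue});
                continue;
            }
            const auto fieldIt = remoteIt->second.find(fieldName);
            if (fieldIt == remoteIt->second.end() || fieldIt->second != localValue)
                delta.push_back({fileKey, fieldName, localValue});
        }
    }
    return delta;
}
```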
Once this data has been loaded, memory usage should not grow beyond that. A second sync won't load it again, since it's already loaded.
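In other words, the load happens lazily and the result is kept around. A minimal sketch of that "load once, reuse afterwards" pattern (again hypothetical, with a stubbed-out loader) looks like this:

```cpp
#include <mutex>
#include <string>
#include <unordered_map>

using Library = std::unordered_map<std::string,
                                   std::unordered_map<std::string, std::string>>;

class LibraryCache {
public:
    // Returns the fully loaded library, performing the expensive load only once;
    // later callers (e.g. a second sync) reuse the copy already in memory.
    const Library& Get()
    {
        std::call_once(m_loaded, [this] { m_library = LoadFullLibrary(); });
        return m_library;
    }

private:
    // Stub: in a real application this would read every field of every file.
    Library LoadFullLibrary() { return Library{}; }

    std::once_flag m_loaded;
    Library m_library;
};
```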
All that said, I also found one case where memory was being allocated for data that is only used conditionally, so I've cleaned that up. A fully loaded library, like the one the sync process creates, should now use significantly less memory.
If this is still going on with the next version, and one of you can reproduce it reliably on an x86_64 system, we might ask you to collect some information for us.