I came up with a way to optimize project allocation by changing only the client, without requiring any server-side changes. If the client downloads a project that does not fully utilize the GPU, it can download a second one to run in parallel with MPS. Because the client learns the project ID before it officially accepts the assignment, it could reject projects that are extremely large. To prevent infinite loops from stressing the server when only big projects are available, it could give up after 5 tries and accept whatever project it is given.
Basically, it is a dynamic project blacklist that contains only the very largest projects (atom count is probably a good enough proxy for utilization) and is enabled only when the GPU is already folding one medium-sized project. Because there are not many massive projects or super-wide GPUs in use, it should not cause any imbalance in project completion rates for researchers.
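To make the idea concrete, here is a rough sketch of the selection loop in Python. All the function names (get_assignment, project_atom_count, gpu_busy_with_medium) and the atom-count cutoff are placeholders I made up for illustration; the real FAH client internals look nothing like this.

```python
MAX_RETRIES = 5
ATOM_THRESHOLD = 2_000_000  # assumed cutoff for "extremely large" projects

def choose_project(get_assignment, project_atom_count, gpu_busy_with_medium):
    """Reject oversized projects, but give up after MAX_RETRIES rejections."""
    for _ in range(MAX_RETRIES):
        project_id = get_assignment()
        # The blacklist only applies while a medium-sized project is already folding.
        if not gpu_busy_with_medium():
            return project_id
        if project_atom_count(project_id) < ATOM_THRESHOLD:
            return project_id
        # Otherwise decline this assignment and ask the server again.
    # After MAX_RETRIES rejections, accept whatever is offered
    # rather than keep hammering the assignment server.
    return get_assignment()
```

The key design point is the bounded retry count: the blacklist degrades gracefully into the current behavior when only large projects are available, so the server never sees an unbounded stream of rejections from one client.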
I don't have a (non-laptop) 4090 or 5090 to test it out, but if someone who has one and is running Linux would like to test out a script that benchmarks various project sizes to determine how well MPS scales with FAH (which should take under an hour), send me a message!
Multiple projects on one GPU with MPS