Last time I looked, it basically ran "cc -E" per file on the source machine to get a self-contained compilation unit (optionally checking for a ccache hit at that point), then piped the result to "cc" running on the target machine and copied the resulting object file back.
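For a single file, the pipeline is roughly equivalent to these manual steps (a sketch from memory; "buildhost" is a hypothetical worker, and real distcc adds scheduling, compression, and fallback to local compilation on error):

    # Preprocess locally, so the remote host needs none of our headers
    cc -E -o foo.i foo.c

    # Compile the preprocessed unit remotely; -x cpp-output tells cc
    # the stdin stream is already-preprocessed C
    ssh buildhost 'cc -x cpp-output -c -o /tmp/foo.o -' < foo.i

    # Fetch the resulting object file back
    scp buildhost:/tmp/foo.o foo.o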
> Does performance scale linearly with the number of worker nodes
Yes, for small N.
Overall scaling was limited by how much "make -j" parallelism the source machine could cope with, since it still had to preprocess every file locally.
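The usual way to drive it is to interpose distcc as the compiler and list the workers in DISTCC_HOSTS, something like this (host names hypothetical):

    DISTCC_HOSTS="localhost worker1 worker2 worker3" \
        make -j12 CC="distcc cc"

Raising -j beyond what the source machine can preprocess and dispatch just leaves the workers idle.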