
Last time I looked, it basically ran a per-file "cc -E" on the source machine to get a self-contained compilation unit (optionally checking for a ccache hit at that point), then piped the result to "cc" running on the target machine and copied the resulting object file back.
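
Roughly the equivalent of doing this by hand per file (distcc uses its own daemon and wire protocol rather than ssh/scp, and the "worker" host name here is made up):

    cc -E -o foo.i foo.c        # preprocess locally; foo.i needs no headers
    scp foo.i worker:/tmp/foo.i
    ssh worker 'cc -x cpp-output -c -o /tmp/foo.o /tmp/foo.i'
    scp worker:/tmp/foo.o foo.o # bring the object file back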

> Does performance scale linearly with the number of worker nodes?

Yes, for small N.

Overall scaling was limited by how much "make -j" the source machine could cope with, since it still had to preprocess every file and drive the build itself.
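
For reference, the usual invocation looks something like this (worker names hypothetical, -j value illustrative); the source machine runs every "cc -E" and all of make's bookkeeping, which is where the ceiling comes from:

    export DISTCC_HOSTS='localhost worker1 worker2'
    make -j12 CC=distcc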



This was the original approach, although later work added an optional "pump mode" in which headers are distributed to the workers so that preprocessing can happen remotely: https://manpages.ubuntu.com/manpages/bionic/man1/distcc-pump...
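
Per that man page, pump mode is enabled by marking hosts as able to do remote preprocessing and wrapping the build in the pump script, roughly:

    export DISTCC_HOSTS='worker1,cpp,lzo worker2,cpp,lzo'
    pump make -j20 CC=distcc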



