This repository has been archived by the owner on Dec 3, 2018. It is now read-only.

Speed up builds #64

Open
wilhelmberg opened this issue Oct 12, 2015 · 4 comments

@wilhelmberg
Contributor

Try a ccache Windows equivalent:

https://github.com/frerich/clcache
https://github.com/inorton/cclash

An alternative build system (with caching and distributed compilation):
http://www.fastbuild.org/

New product from JetBrains, ReSharper Build:
http://blog.jetbrains.com/dotnet/2015/10/15/introducing-resharper-build/
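For background, these compiler caches work roughly by hashing the compiler inputs (source, includes, flags) and reusing the previously produced object file when the hash matches. A minimal in-memory sketch of that mechanism in Python — illustrative only; the real tools are drop-in wrappers around `cl.exe` and cache object files on disk:

```python
import hashlib


def cache_key(preprocessed_source: str, compiler_flags: list) -> str:
    """Derive a cache key from the preprocessed source and the flags.

    A change in either invalidates the key, so stale object files
    are never reused.
    """
    h = hashlib.sha256()
    h.update(preprocessed_source.encode("utf-8"))
    h.update("\0".join(sorted(compiler_flags)).encode("utf-8"))
    return h.hexdigest()


class ObjectCache:
    """In-memory stand-in for an on-disk object-file cache."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def compile(self, source: str, flags: list, compile_fn):
        """Return a cached object file, or compile and cache on a miss."""
        key = cache_key(source, flags)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        obj = compile_fn(source, flags)  # the expensive real compile
        self._store[key] = obj
        return obj
```

This also makes the "exactly one source file per command line" restriction plausible: the cache needs an unambiguous mapping from one compiler invocation to one object file.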

@wilhelmberg
Contributor Author

hrm, https://github.com/frerich/clcache#how-clcache-works

I think this pretty much rules out our use case:

> There must be exactly one source file present on the command line.

/cc @springmeyer

@springmeyer

> There must be exactly one source file present on the command line.

I think that is fine. The mapnik-gyp build only tries to build one .cpp per compile line.

@wilhelmberg
Contributor Author

> The mapnik-gyp build only tries to build one .cpp per compile line.

Then maybe we should try the exact opposite and use Unity Builds.
Some people report almost unbelievable speed-ups that way, e.g.:

http://buffered.io/posts/the-magic-of-unity-builds/

> ... the build time dropped from 55 minutes to just over 6 minutes

although they come with drawbacks of their own, e.g. higher RAM usage.


Unity Builds:

You create one meta .cpp that includes all the other .cpp files and thus also all the headers.
That way everything has to be read and parsed only once, instead of again and again for every file.

This might even help with the optimization step, which takes the most time at the moment.
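The meta file described above can be produced by a small generator script. A hedged sketch in Python (the directory layout, file names, and output path are illustrative, not part of mapnik-gyp):

```python
import pathlib


def write_unity_file(source_dir: str, output_path: str) -> list:
    """Generate a single 'meta' translation unit that #includes every
    .cpp under source_dir, so shared headers are parsed only once.

    Returns the list of source files that were included.
    """
    sources = sorted(pathlib.Path(source_dir).rglob("*.cpp"))
    lines = ["// auto-generated unity build file -- do not edit\n"]
    for src in sources:
        lines.append('#include "{}"\n'.format(src.as_posix()))
    pathlib.Path(output_path).write_text("".join(lines))
    return sources
```

One caveat beyond RAM usage: symbols with internal linkage (`static` functions, anonymous namespaces) that were private to one .cpp can now collide with same-named symbols from another, so sources may need renaming before a unity build links cleanly.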

@bmharper

bmharper commented Feb 4, 2016

I thought I'd throw this out there:
I've been using tundra (https://github.com/deplinenoise/tundra) for many years now, for a large project of ours (16919 C/C++ files, all compiled from source, spanning 60 3rd party libraries and 50 of our own libraries/programs).
I have yet to find a better build system.

Pros:

  • Fast and accurate header file scanning, which ensures that dependencies are tracked correctly no matter the complexity of the include graph
  • The fastest single-machine build system I've ever seen. A "null build" (i.e. nothing changed) on the above-mentioned "large project" takes 0.57 seconds on my quad-core machine. This build consists of 16919 C/C++ source and header files. All phases of the build are parallelized.
  • Can generate IDE projects which act like "NMake"/"Makefile" projects in popular IDEs.
  • Cross platform
  • Supports precompiled headers on MSVC, Clang, GCC.
  • Simple and extendable

Cons:

  • Takes time to create custom build steps that aren't a simple compile or a link.
  • Takes time to port byzantine builds of third party libraries. On the other hand, I have found that most C/C++ library build systems are much more complex than they need to be. If one simply starts with a bunch of C/C++ source files that need to be compiled and linked, with a homogeneous bag of compiler settings, then it's not that far from there to get to a working library.

I realize this is a fringe build system, but I'm trying to spread the word. It's a super high quality software product, and has certainly improved the life of all of the developers at our company that need to build from source.
