Cliff Click's thesis, entitled Combining Analyses, Combining Optimizations, represents a program not as a control flow graph but as a single "sea of nodes" graph in which control and data dependences are edges of the same kind. I've been reading it, and have written a summary. If you're interested, download it here. This compilation method is used in the Java HotSpot Server Compiler.
Thanks to Christian Wimmer for proofreading and feedback.
Cliff Click himself emailed me and has a few corrections. His comments are below:
- Performance of "-server" is typically 30% faster on x86s and 50% faster on most RISCs over "-client". This ratio has remained roughly constant for a decade, but it "drifts" up and down over time depending on the "RISC'yness" of the latest x86s and/or optimizations done with either compiler.
- Startup costs of the server compiler are mostly a non-issue for servers & desktops, and 30% more throughput is a Big Deal. On small (mobile) devices it remains an issue.
- "The Scheduler is often a performance bottleneck". The Scheduler has *never* been a performance bottleneck. The aggressive register allocation done with the server compiler is the main compilation cost (assuming that by "performance bottleneck" you mean "compilation speed").
- I find this Sea of Nodes structure *easier* to debug than traditional compilers, and I've worked on both for more than 30 years. To each his own.
- There are NO "exception edges" in the sea of nodes, so they cannot dominate. Instead, there are some classic control-flow edges inserted to match the Java semantics, but even they do not dominate. The most common set of edges & nodes are related to memory; the server compiler essentially uses a fully exposed memory alias representation, and a large number of Phi nodes results from that.
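To make the Phi-node idea concrete, here is a minimal toy sketch of a sea-of-nodes fragment. The class names (`Node`, `Region`, `Phi`) echo terminology from Click's thesis, but the fields and methods are invented for illustration and are not the actual HotSpot C2 data structures. A `Region` marks a control-flow merge, and a `Phi` attached to that region selects among its inputs according to which control predecessor was taken:

```java
import java.util.List;

// Illustrative sketch only: a toy sea-of-nodes fragment, not HotSpot's
// real implementation. A node is just an operation plus dependence edges.
class Node {
    final String op;
    final List<Node> inputs;   // data (or memory) dependence edges
    Node(String op, Node... inputs) {
        this.op = op;
        this.inputs = List.of(inputs);
    }
}

// A Region node marks a control-flow merge; a Phi's i-th value input
// corresponds to the Region's i-th control predecessor.
class Region extends Node {
    Region(Node... controlPreds) { super("Region", controlPreds); }
}

class Phi extends Node {
    final Region region;       // the merge point this Phi belongs to
    Phi(Region region, Node... values) {
        super("Phi", values);
        this.region = region;
    }
    // Once the compiler knows which predecessor was taken,
    // the Phi collapses to the matching input.
    Node resolve(int takenPred) { return inputs.get(takenPred); }
}

public class SeaOfNodesDemo {
    // Build a merge of two branches: x = taken-branch ? 1 : 2
    static Phi buildMerge() {
        Node ifTrue  = new Node("IfTrue");
        Node ifFalse = new Node("IfFalse");
        Node c1 = new Node("ConI(1)");
        Node c2 = new Node("ConI(2)");
        Region merge = new Region(ifTrue, ifFalse);
        return new Phi(merge, c1, c2);
    }

    public static void main(String[] args) {
        Phi x = buildMerge();
        System.out.println(x.resolve(0).op);  // prints "ConI(1)"
        System.out.println(x.resolve(1).op);  // prints "ConI(2)"
    }
}
```

The same mechanism applies to memory: as Click notes above, the fully exposed memory alias representation threads memory state through nodes just like data values, so every control merge of differing memory states also needs a Phi, which is why so many of them appear.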