@wyatwerp @kennethdodrill @tshrinivasan
depends on how serious the PGO (profile-guided optimization) is,
but here we go
1. PGO inserts counters in various places, like branches, counting
   each time one of the two outcomes is taken.
1a. This can also work in theory for more-than-two dispatches,
    assuming they are used properly. For example in Lisp, where
    you tend to have cond, you have more than one branch point
    in a single form.
1a1. A cond is an if-else ladder; conditions are tested in
     order, but it's one "form." So it's possible to list 5
     related if-then-elses without as much typing. If the
     order of the tests does not matter (sometimes it does),
     then recording which branch is taken most often is
     useful.
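A minimal Python sketch of what that counter insertion amounts to (the counter table and the classify function here are made up for illustration; real PGO instrumentation is inserted by the compiler, not written in source):

```python
# Sketch of PGO-style instrumentation: a counter at each branch arm,
# so running the program tallies which paths are actually taken.
from collections import Counter

branch_counts = Counter()  # stands in for the compiler's counter table

def classify(x):
    # A cond-style if-else ladder; each arm bumps its own counter.
    if x < 0:
        branch_counts["negative"] += 1
        return "negative"
    elif x == 0:
        branch_counts["zero"] += 1
        return "zero"
    else:
        branch_counts["positive"] += 1
        return "positive"

# "Training run": exercise the program on representative input.
for v in [5, 3, -1, 8, 0, 2, 9]:
    classify(v)

print(branch_counts.most_common())  # the raw material of a profile
```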
2. You then get a "profile" which says, for each branch or dispatch,
   which path is taken most often. This lets the compiler do some
   trickery: if, say, the CPU has a bias towards following true
   paths, it can rewrite those branches so that the more common
   path is the true path (by, say, inverting some conditions).
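A hypothetical sketch of that reordering in Python (the profile numbers are invented, and a real compiler does this on branch instructions rather than on a list of test functions):

```python
# Sketch of profile-guided branch reordering: given counts from a
# profile, put the hottest test first so the common case succeeds on
# the first comparison instead of the last.
profile = {"positive": 9000, "zero": 40, "negative": 960}  # made-up counts

# Original test order, as written by the programmer.
tests = [
    ("negative", lambda x: x < 0),
    ("zero",     lambda x: x == 0),
    ("positive", lambda x: x > 0),
]

# Profile-guided reordering: sort the ladder by observed frequency.
tests.sort(key=lambda t: profile[t[0]], reverse=True)

def classify(x):
    for name, test in tests:
        if test(x):
            return name

print([name for name, _ in tests])  # → ['positive', 'negative', 'zero']
```

This only works when the order of the tests doesn't matter, which is exactly the caveat in 1a1 above.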
3. But this information lives in the guided-profile files, and
   those are not deterministic. So builds of, say, Firefox start
   to require data that is gathered at test-running time.
3a. This is a problem for reproducible builds, since generating
    the profile requires you to go do stuff, which can be
    indeterminate: loading ten random web pages, say, is not
    reproducible.
4. This concept is actually more powerful than that. If you look
   at GraalVM, what it does is build a tree of the program and
   then count things like which branches are most common. It even
   looks at which types are most commonly used in certain special
   cases. It learns which functions are used most often along
   certain paths.
5. That profile information in Graal is then fed back to the
   compiler. So in, say, a dynamic language, it learns that some
   specific types are the two most common. It then inserts
   pre-dispatches for those two types and cuts out a lot of
   overhead (it no longer has to do a full dispatch for the most
   common cases).
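A rough Python sketch of that kind of pre-dispatch, in the spirit of an inline cache (the function names and the choice of int and str as the two hot types are assumptions for illustration, not Graal's actual mechanism):

```python
# Sketch of type pre-dispatch: the profile says two types dominate a
# call site, so the compiled code checks those two directly and only
# falls back to the fully generic dispatch for anything else.
def generic_add(a, b):
    # Stands in for the expensive, fully dynamic dispatch path.
    return a + b

def make_specialized_add(hot_types):
    t1, t2 = hot_types  # the two most common types from the profile
    def add(a, b):
        if type(a) is t1 and type(b) is t1:
            return a + b          # fast path 1: e.g. int + int
        if type(a) is t2 and type(b) is t2:
            return a + b          # fast path 2: e.g. str + str
        return generic_add(a, b)  # rare types take the slow road
    return add

add = make_specialized_add((int, str))
print(add(2, 3), add("a", "b"), add(1.5, 2.5))  # → 5 ab 4.0
```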
6. But in all these systems you don't really get information back
   as the programmer. You don't really get to explore the
   optimizations to see which types are rarely instantiated or
   which code path is most common.
6a. I mean, yes, there are some specific profilers and things. But
    they tend to involve a lot of overhead and be task-specific.
    You can't just do work in the program for a week and then come
    back and go "ah, we need to make all these changes here and
    there."
Basically, optimizers are one-way black boxes: developers don't
learn how to make better software because the optimizers never
communicate back what they actually did.