Supercompilation works by evaluating as much of the program as possible at compile time. This can yield large runtime speed-ups, because many time-consuming intermediate steps (such as building and consuming intermediate data structures) are eliminated. The supercompiler drives the evaluation of each term as far as it can, until it blocks on a value that is only known at run time. Deciding how far to drive evaluation, and choosing termination criteria that guarantee the process stops, are the choices that shape the design of a supercompiler.
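As a hand-worked sketch of the idea (this is my own illustration, not Supero's actual output): given a pipeline of two maps, driving evaluation at compile time can fuse them into a single traversal, eliminating the intermediate list. Evaluation blocks only on the list elements, which are unknown until run time.

```haskell
-- Before: two traversals and one intermediate list allocated by `map g xs`.
before :: (b -> c) -> (a -> b) -> [a] -> [c]
before f g xs = map f (map g xs)

-- After: the residual program a supercompiler could produce by driving
-- the composition -- a single traversal, no intermediate list.
after :: (b -> c) -> (a -> b) -> [a] -> [c]
after _ _ []     = []
after f g (x:xs) = f (g x) : after f g xs

main :: IO ()
main = print (before (+1) (*2) [1,2,3 :: Int] == after (+1) (*2) [1,2,3])
```

Both versions compute the same result; the point is that the intermediate structure disappears from the residual program rather than being optimized away at run time.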
An example of this approach is Supero, a supercompiler for the GHC Haskell compiler. My answer paraphrases Neil Mitchell's extensive work on it:
Rethinking supercompilation
Ultimately, supercompilation traces back to Valentin Turchin and his Refal language:
The Concept of a Supercompiler