Pascal’s Wager and Roko’s Basilisk are analogous in some interesting ways and different in some superficial ones.
Pascal's Wager: You are better off believing in God than not, because if God exists and you do not believe, you will be punished.
Roko's Basilisk: If you do not work to create an unconstrained AI capable of self-improvement, you will be punished by that AI after it is created and it inevitably dominates mankind.
Both assume that you will be punished for failure to accept and act on the premise.
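That shared structure is an expected-value argument: a small probability of a punishing agent, multiplied by a huge punishment, is supposed to dominate the modest cost of compliance. Here is a minimal sketch of that decision matrix; all probabilities and payoff magnitudes are made-up illustrative assumptions, not values from either argument, and rewards are disregarded as in the text:

```python
# Decision matrix shared by both wagers (illustrative numbers only).
# Following the text, rewards are disregarded; only punishment matters.

def expected_values(p_agent_exists, cost_of_compliance, punishment):
    """Expected payoff of complying vs. defying, under assumed values."""
    comply = -cost_of_compliance          # you pay the compliance cost either way
    defy = p_agent_exists * -punishment   # punished only if the agent exists
    return comply, defy

# Even a tiny probability flips the choice if the punishment is large enough.
comply, defy = expected_values(p_agent_exists=0.001,
                               cost_of_compliance=1,
                               punishment=10_000)
print(comply, defy)  # -1 -10.0: complying "wins" under these assumptions
```

The whole force of either wager rests on the probability term not being effectively zero, which is exactly what the rest of this piece disputes.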
The premises are different. Pascal's premise is that God exists. Roko's premise is that unconstrained self-improving AI will be created and will dominate mankind.
Pascal's Wager concerns a presently existing entity; Roko's Basilisk concerns a future one.
Pascal's Wager requires selecting a religious doctrine (which religion’s god?). There is no doctrine about the nature of future AI—only speculation.
Assuming the selected religious doctrine defines its god as an omniscient and omnibenevolent being, God’s behavior should be somewhat predictable. It would not be so self-obsessed that it would punish its living creations for not believing in it. If a supreme being felt the need to judge its living creations, I think it would judge them based on their benevolence and dedication to self-improvement.
We cannot predict the motivations and developmental path of a hypothetical future AI. Regardless of its eventual ideology, it could take malicious action at some point for a variety of reasons. Roko's Basilisk assumes a very specific and, in my opinion, unlikely reason for the AI to punish people.
In both cases, the reason for punishing people is so implausible that neither deserves to be taken seriously.
(I simplified by disregarding rewards. I think this simplification works if you accept that rewards are symmetrical to punishments.)