Both Optimistic Concurrency Control (CC) and Snapshot Isolation (SI) use a commit-time validation phase to decide whether a transaction may commit, aborting it otherwise. This makes them particularly well suited for scenarios with low data contention, and one might call them superficially similar. (Feel free to disagree with this statement, but if you do, explain why.)
However, SI is much more popular in implemented database systems than optimistic CC. How can this be explained, given that optimistic CC guarantees serializability while SI does not?
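To make concrete why SI falls short of serializability, the classic counterexample is write skew. The following is a minimal sketch (not from the question text; all names such as `txn_go_off_call` are hypothetical) of two transactions that each read the same snapshot, check a constraint, and then write *disjoint* rows, so SI's first-committer-wins check lets both commit even though no serial order produces the result:

```python
# Hypothetical write-skew simulation under snapshot isolation (SI).
# Constraint: at least one doctor must remain on call.
# Both transactions read the same consistent snapshot, see two doctors
# on call, and each takes itself off call. SI aborts a transaction only
# when write sets overlap, so both commit and the constraint is violated.

snapshot = {"alice": True, "bob": True}  # on-call status at txn start

def txn_go_off_call(snapshot, doctor):
    # Read phase: all reads come from the transaction's snapshot.
    on_call = sum(snapshot.values())
    if on_call >= 2:                # constraint holds in the snapshot
        return {doctor: False}      # write set: only this doctor's row
    return {}                       # constraint would break: write nothing

w1 = txn_go_off_call(snapshot, "alice")
w2 = txn_go_off_call(snapshot, "bob")

# SI commit check (first-committer-wins): abort only on overlapping writes.
assert not (w1.keys() & w2.keys())  # disjoint write sets -> both commit

db = {**snapshot, **w1, **w2}
print(db)  # both doctors off call: write skew, not serializable
```

Under any serial execution, the second transaction would see only one doctor on call and refuse to go off call; optimistic CC's read-set validation catches exactly this kind of read-write conflict.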
There are relatively easy ways to modify SI so that it guarantees serializability. PostgreSQL 9.1 implements such a mechanism (see, e.g., the Wikipedia page on SI). Does this serializable SI still dominate optimistic CC in practice, i.e., which one would you choose for your DBMS? Explain.