Social visualization systems have emerged to support collective-intelligence-driven analysis of a growing influx of open data. As in many other online systems, social signals (e.g., forums, polls) are commonly integrated to drive use. Unfortunately, the same social features that can enable rapid, high-accuracy analysis carry the pitfalls of any social system. Through an experiment with over 300 subjects, we examine how social information signals (social proof) affect quantitative judgments in the context of graphical perception. We show that unbiased social signals lead to fewer errors than non-social settings and, conversely, that biased signals lead to more errors. We further reflect on how systematic bias nullifies certain collective-intelligence benefits, and we provide evidence of the formation of information cascades. We describe how these findings can be applied to collaborative visualization systems to produce more accurate individual interpretations in social contexts.