Users of social, economic, or medical networks share personal information in exchange for tangible benefits, but may be harmed by leakage and misuse of the shared information. This paper analyzes the effects of privacy enhancements on the tradeoffs faced by such privacy-concerned individuals. The main insights are that different privacy enhancements may have opposite effects on the volume of information sharing, and that although privacy enhancements always seem beneficial to non-strategic users, they may backfire when users are strategic.

The observation that privacy regulation may be harmful is not new, and the burgeoning empirical and experimental literature on the topic has shown that the effects of regulation may be positive or negative, depending on the context. The theoretical literature on privacy has its roots in the work of  and , who derive a similar conclusion in a signaling context: under stronger privacy regimes individuals can more readily hide negative traits, which may be harmful to other market participants and to social welfare.

This paper's goal is to identify properties of interactions that determine the effects of various privacy regulations. The point of departure is the observation that the conception of privacy commonly studied in the theory literature, namely as a technology for altering the signaling capabilities of individuals, misses a key dimension of privacy harm: a concern not about third parties' inferences about individuals' types, but rather about misuse of the information itself. Individuals exposed to identity theft, spam, harassment, stalking, re-identification, online tracking, excessive profiling, and targeted advertising care less about whether the leaked information is positive or negative, and more about the fact that information has been leaked and misused in the first place. We frame the paper within the context of online social networks, but describe its applicability in other contexts as well.
In the model, agents are not concerned about what leaked information signals about their type, but rather about the quantity of personal information leaked. As a preliminary formalization, consider a user of a social network who wishes to share some information. The user derives a benefit from sharing information on the platform, captured by an increasing function of the amount of information shared. However, there is some chance that the shared information leaks and is misused, in which case the user incurs a cost, captured by another increasing function of the amount shared. The user thus faces a tradeoff between benefit and expected cost. Is a privacy enhancement, in the form of lowering the probability of leakage, beneficial to the user? The answer is easily seen to be positive: lowering the leakage probability weakly reduces the expected cost at every level of sharing, so the user's maximal attainable utility can only increase.
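The preliminary formalization above can be illustrated with a small numerical sketch. The specific functional forms below (a concave benefit, a convex cost, and a leakage probability p) are illustrative assumptions, not the paper's notation; any increasing benefit and cost functions would exhibit the same qualitative effect for a non-strategic user.

```python
import math

def benefit(x):
    # Illustrative increasing (concave) benefit from sharing x units of information
    return math.sqrt(x)

def cost(x):
    # Illustrative increasing (convex) cost incurred if shared information is misused
    return x ** 2

def expected_utility(x, p):
    # p is the assumed probability that shared information leaks and is misused
    return benefit(x) - p * cost(x)

def optimal_sharing(p):
    # Grid search over x in (0, 2] for the utility-maximizing amount of sharing
    grid = [i / 1000 for i in range(1, 2001)]
    return max(grid, key=lambda x: expected_utility(x, p))

x_weak = optimal_sharing(0.5)    # weaker privacy: high leakage probability
x_strong = optimal_sharing(0.1)  # stronger privacy: low leakage probability

# Lowering p reduces expected cost at every x, so the non-strategic user
# both shares more and attains strictly higher utility under stronger privacy.
assert x_strong > x_weak
assert expected_utility(x_strong, 0.1) > expected_utility(x_weak, 0.5)
```

The pointwise argument is what makes the conclusion immediate: for every fixed level of sharing x, a lower p raises expected utility, so the maximum over x can only rise.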