TY - GEN
T1 - Profanity use in online communities
AU - Sood, Sara Owsley
AU - Antin, Judd
AU - Churchill, Elizabeth F.
PY - 2012
Y1 - 2012
AB - As user-generated Web content increases, the amount of inappropriate and/or objectionable content also grows. Several scholarly communities are addressing how to detect and manage such content: research in computer vision focuses on detection of inappropriate images, while natural language processing technology has advanced to recognize insults. However, profanity detection systems remain flawed. Current list-based profanity detection systems have two limitations. First, they are easy to circumvent and easily become stale; that is, they cannot adapt to misspellings, abbreviations, and the fast pace of profane slang evolution. Second, they offer a one-size-fits-all solution; they typically do not accommodate domain-, community-, and context-specific needs. However, social settings have their own normative behaviors: what is deemed acceptable in one community may not be in another. In this paper, through analysis of comments from a social news site, we provide evidence that current systems are performing poorly and evaluate the cases on which they fail. We then address community differences regarding creation/tolerance of profanity and suggest a shift to more contextually nuanced profanity detection systems.
KW - Comment threads
KW - Community management
KW - Negativity
KW - Online communities
KW - Profanity
KW - User-generated content
UR - http://www.scopus.com/inward/record.url?scp=84862074729&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84862074729&partnerID=8YFLogxK
U2 - 10.1145/2207676.2208610
DO - 10.1145/2207676.2208610
M3 - Conference contribution
AN - SCOPUS:84862074729
SN - 9781450310154
T3 - Conference on Human Factors in Computing Systems - Proceedings
SP - 1481
EP - 1490
BT - Conference Proceedings - The 30th ACM Conference on Human Factors in Computing Systems, CHI 2012
T2 - 30th ACM Conference on Human Factors in Computing Systems, CHI 2012
Y2 - 5 May 2012 through 10 May 2012
ER -