Scams are rife on social media. Efforts have been made to protect users, but these have largely placed the responsibility for avoiding scams on users themselves — for example, information campaigns to raise awareness of scams, and Facebook's scam reporting tool. However, user behaviour and attitudes towards scams on social media are not well understood, so it is not clear that such initiatives are sufficient to protect users.
Using mixed-methods research, we find that although users express high levels of concern about scams, these concerns are typically not top of mind, both because of the social nature of the environment and because awareness of scams is limited; we also find little evidence of a clear relationship between users' self-assessed confidence in spotting scams and their actual ability to do so. In addition, we find a misalignment between the high expectations that some Facebook users hold about its systems and processes to prevent, identify and remove scam-enabling content and the processes that actually exist — a gap that could lead users to do less than they otherwise might to protect themselves. Further, awareness and use of Facebook's scam reporting tool are limited.
These findings imply that policies placing the responsibility for protection on users are unlikely to sufficiently reduce harm; instead, social media platforms themselves need to take greater responsibility for protecting their users. We therefore make a number of recommendations for how Facebook and other platforms can improve their systems to protect users from scams.
We also recommend that the Government introduce legislation to give social media platforms a legal responsibility for preventing scam content from appearing on their sites.