Reposted from Adweek
Twitter discovered and fixed issues with users’ ad-settings choices, the first dating to May 2018, that may have shared some user data with third-party advertising and measurement partners without permission and may have caused people to see ads based on inferences they had not authorized in their personal settings.
The social network revealed in a help center post late Tuesday that, starting in May 2018, if a user clicked on or viewed an ad for a mobile application and went on to interact with that app, data such as the user’s country code, whether and when they engaged with the ad, and information about the ad itself may have been shared with Twitter’s trusted measurement and advertising partners without that user’s consent.
Gartner vice president and distinguished analyst Avivah Litan said in an interview, “Most companies collect this type of data without clearly and explicitly asking users for permission for all of the data collection and sharing that they do with the data they collect. The only clear regulations that companies must follow in this regard is the [European Union’s General Data Protection Regulation], which covers data privacy for European users.”
GDPR went into effect May 25, 2018, the same month the first issue reported by Twitter emerged, but there was no clear connection between the two events.
She added that it was unclear whether Twitter failed to comply with GDPR, but the company’s post appeared to be intended to satisfy regulators.
Eve Maler, vice president of innovation and emerging technology at ForgeRock and co-creator of XML (Extensible Markup Language), said in an interview, “The worst part regarding user trust is that privacy settings weren’t respected. If you think of the modern world of data privacy as a pyramid of what businesses can do, at the bottom, the most important thing businesses can do is that they need to protect data, keep it safe, don’t let it out accidentally.”
John Biondi, vp of experience design at digital business consultancy Nerdery, agreed, saying in an email, “Putting huge tech platforms in possession of user data and asking them not to sell it is a bit like asking a starving person to hold your cheeseburger for you … In Twitter’s case, it had nothing to do with getting users to opt in or opt out. Twitter users had explicitly opted out, and their tracking data was still sold.”
Starting in September 2018, Twitter users may have been shown ads based on inferences the social network made about their devices, even if those users had not granted permission for such targeting. Twitter said this data stayed within Twitter and did not include information such as passwords or email addresses.
Twitter said in its post that both issues were corrected Aug. 5, and it is still investigating how many people were affected. The social network added that concerned users can contact its office of data protection via a form linked in the post.
A spokesperson shared the following statement: “We take these issues seriously and, whenever an issue arises, we conduct a review to ensure that we make changes to prevent these types of issues recurring. In this case, we have deployed a fix, which has corrected the data-sharing-authorization signals that caused this issue.”
Maler said transparency is the middle layer of the data privacy pyramid, and until Twitter concludes its investigation, full clarity will be lacking.
However, Matt Conlin, co-founder and president of performance marketing company Fluent, differed, saying in an email, “Twitter did the right thing by addressing the breach in a timely manner with users. By being transparent and owning up to what happened, it is demonstrating a consumer-first attitude that will have an impact on retaining trust, which is key in our increasingly privacy-centric world.”
The social network encountered a similar issue in May in the form of a bug that may have caused it to inadvertently collect location data from its iOS users and share that data with one of its trusted partners.
Twitter said at the time that the bug affected only people who had opted in to the precise-location feature on one account but used other accounts on the same device without the feature turned on. The company added that the data was no more precise than a five-square-kilometer area within a ZIP code or city, and that it was retained on its partner’s systems for only a short time.
Litan suggested that people who are concerned about these types of data breaches use ad-blocking technologies or browsers like Brave to disable data tracking, adding, “Eventually, with the advent of Web 3.0 technologies, we will see more widespread user control and ownership of their identity and personal data so that these privacy invasions and abuses are minimized.”
On the business side, Maler said, “It’s really relevant to ask how big organizations manage their relationships with their end users. How do you maintain trust when those end users have choice? How sticky is the relationship? Look at the intersection of user transformation and digital trust risk and identify those intersections. Conceive of personal data as a joint asset. Lean in to consent, but implement it correctly.”
The above Adweek article features commentary from Matt Conlin, President and Co-Founder of Fluent.