Policy at scale: It both is and is not about the customer

There’s an old quote sometimes (mis)attributed to Stalin: “The death of one man is a tragedy, the death of millions is a statistic” ​(Wikiquote contributors 2020)​. The same thing holds true when it comes to policy enforcement: A single customer can be watched carefully, but hundreds or thousands of customers become a statistic.

Mass detection

Finding customers who are violating published policies is almost always a problem of scale. There are many ways to do this:

  1. Complaint monitoring. This is maybe the most basic of methods. You wait for people to tell you that they are receiving messages sent in violation of published policies. As I mentioned in my last post, I generally operate with a rule of thumb that any direct complaint (where the complainant sends an email directly to your abuse queue) happens at a rate of about 1% ​(Chandler 2020b)​ — in other words, each direct complaint likely represents roughly a hundred unhappy recipients. So, any well-formed complaint comes with a built-in multiplier.
  2. Feedback Loop detection. This merely builds on complaint monitoring. All that you have done is lower the barrier to entry for lodging complaints by offloading that responsibility to the relationship between service providers. The sad truth is, though, that many providers look at feedback loops merely as a method of obtaining and processing opt-outs. So, despite the presence of useful data, they are really still just engaging in complaint monitoring. But providers who actually leverage this information have a second, richer source of information that can point out policy violations.
  3. Machine learning. This one becomes more complex, because there are many different ways to turn machine learning into a useful assessment tool.
    1. List data. Omnivore ​(Mailchimp n.d.)​ was one of the first products on the market to look at lists as they were uploaded and try to make a determination as to whether that list would be okay to mail. The great thing about this method is that it is at least making the attempt to find problems before the abuse actually gets released onto the Internet at large.
    2. Complaint modeling. This would be taking complaints (either direct or feedback loop) and using the data they present to help triage and prioritize cases for follow-up and investigation.
    3. Hybrid. Just like it sounds, this takes some list data analysis and some complaint modeling and uses that to help drive a determination that helps to triage and order priority cases.
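The 1% rule of thumb and the complaint-modeling idea above can be sketched together in a few lines. This is an illustrative sketch only: the function names, the 100× multiplier implied by the 1% figure, and the sample counts are my assumptions, not any provider’s actual tooling.

```python
# Sketch: applying the ~1% direct-complaint rule of thumb to triage.
# The multiplier and all numbers below are illustrative assumptions.

DIRECT_COMPLAINT_RATE = 0.01  # roughly 1 in 100 unhappy recipients complains directly


def estimated_unhappy_recipients(direct_complaints: int) -> int:
    """Scale direct complaints up to an estimate of total unhappy recipients."""
    return round(direct_complaints / DIRECT_COMPLAINT_RATE)


def triage_order(complaints_by_customer: dict[str, int]) -> list[str]:
    """Order customers for investigation, largest estimated impact first."""
    return sorted(
        complaints_by_customer,
        key=lambda c: estimated_unhappy_recipients(complaints_by_customer[c]),
        reverse=True,
    )


complaints = {"cust_a": 3, "cust_b": 41, "cust_c": 12}
print(estimated_unhappy_recipients(3))  # 3 direct complaints suggest ~300 unhappy recipients
print(triage_order(complaints))         # cust_b first: largest estimated impact
```

The point of the multiplier is simply that a handful of direct complaints is never just a handful; triage ordering follows from the same estimate.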

All of these methods have their drawbacks. Waiting for complaints to come in (whether directly or through feedback loop mechanisms) means that abuse has already happened. Our preference should be to try to prevent abuse. That would mean turning to more proactive solutions — which tends to lean more toward machine learning these days. But, as Laura Atkins mentioned in a blog post last year, “The problem is this is a moving target and there’s nothing set and forget about it. Algorithms like this need to be constantly maintained and trained” ​(Atkins 2019)​.

The one thing that they all have in common is that they operate in bulk. A single direct complaint is actionable not because it is a single complaint, but because it is indicative of a mass of complaints — both seen and unseen. Feedback loop measures happen on the basis of a mass of complaints. And machine learning models can only be trained and applied at scale. Finally, it is certainly true that triaging complaint streams will necessitate tackling larger volumes of complaints before handling smaller, more individual cases.

Individual correction

Once a customer’s actions have brought about a mass of complaint metrics that warrant closer investigation, the matter turns from looking at masses of complaints spread over many customers to what has happened with this particular one. But, even this is an exercise in scale: Remember that “the job of policy enforcement is to limit the amount of damage done, prevent that damage from intensifying, and attempt to begin repairs to whatever damage has occurred.” And, further, that damage can be generated in one of two directions: toward the customer or toward the provider ​(Chandler 2020a)​.

Ultimately, policy enforcement has to ensure that damage does not scale beyond the customer-oriented level, allowing the actions of one (or a small group of) customer(s) to encompass all of the mail sent out by the provider as a whole. The best way to accomplish this is by dealing with each bad customer on its own. This prevents the issue from scaling to the point where other providers feel the need to scale up their responses from customer-oriented to provider-oriented.

So, when a customer has been identified, policy enforcement agents will attempt to ascertain several things:

  1. If a breach of policy has occurred,
  2. What policy was breached,
  3. How extensive the breach is,
  4. How much reputational damage has occurred to
    1. the customer, and
    2. the company
  5. What will be required to fix the breach, and
  6. Whether the customer is willing to do the work required to come back into compliance.
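One way to picture the checklist above is as a small case-record structure that an enforcement agent fills in as the investigation proceeds. Every class and field name here is hypothetical, chosen only to mirror the six questions; no real system is being described.

```python
# Sketch of an enforcement-case worksheet mirroring the six questions above.
# All class and field names are hypothetical, for illustration only.
from dataclasses import dataclass, field


@dataclass
class EnforcementCase:
    breach_occurred: bool                  # 1. did a breach of policy occur?
    policy_breached: str                   # 2. which policy was breached?
    breach_extent: str                     # 3. how extensive is the breach?
    customer_reputation_damage: str        # 4a. reputational damage to the customer
    company_reputation_damage: str         # 4b. reputational damage to the company
    remediation_steps: list[str] = field(default_factory=list)  # 5. what fixes it
    customer_willing: bool = False         # 6. will the customer do the work?

    def ready_to_close(self) -> bool:
        """A case can close only when a fix is defined and the customer commits."""
        return bool(self.remediation_steps) and self.customer_willing


case = EnforcementCase(
    breach_occurred=True,
    policy_breached="no purchased lists",
    breach_extent="one campaign, ~50k recipients",
    customer_reputation_damage="blocklisted at two receivers",
    company_reputation_damage="shared IP pool throttled",
    remediation_steps=["remove purchased addresses", "re-confirm consent"],
    customer_willing=True,
)
print(case.ready_to_close())  # True: a fix exists and the customer has committed
```

Structuring the answers this way makes the dependency explicit: items 5 and 6 are what ultimately close a case, and item 4 is the leverage for getting to item 6.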

Several parts of this seem to be generally intuitive. Most agents — even new ones — will handle 1-3 together as a unit and then skip to 5. But, in my opinion, ascertaining the answer to 4 will provide the surest method of getting the customer to agree to fix the breach. So, we will talk about reputational damage next time.

References

  1. Atkins, Laura. 2019. “ESPs Are Failing Recipients.” Word to the Wise Blog. June 4, 2019. https://wordtothewise.com/2019/06/esps-are-failing-recipients/.
  2. Chandler, Mickey. 2020a. “Enforcement Is Therapeutic.” Spamtacular. February 3, 2020. https://www.spamtacular.com/2020/02/03/enforcement-is-therapeutic/.
  3. ———. 2020b. “Policy at Scale: Understanding the Issue.” Spamtacular. March 9, 2020. https://www.spamtacular.com/2020/03/09/policy-at-scale-understanding-the-issue/.
  4. Mailchimp. n.d. “About Omnivore.” Mailchimp. Accessed March 11, 2020. https://mailchimp.com/help/about-omnivore/.
  5. Wikiquote contributors. 2020. “Joseph Stalin.” Wikiquote. February 25, 2020. https://en.wikiquote.org/w/index.php?title=Joseph_Stalin&oldid=2747079.