A recommender has many technical descriptions — collaborative filter, content model, contextual bandit, sequence learner, retrieval-then-rerank pipeline. None of those say what the recommender is doing in the world.

Viewed from the outside, a recommender is a hypothesis. Given what is known about this person, in this context, at this moment, here is what they should attend to next.

Holding that framing steady surfaces three things the engineering language can hide.

First, the system is making a claim, not just responding to data. “Here is what you should attend to next” is normative even when the system is optimising a proxy metric like click-through or watch time. The metric stands in for the hypothesis; it is not the hypothesis.
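A minimal sketch makes the substitution visible. In the toy code below (all names hypothetical), the system ranks items by a proxy score, predicted click-through; the normative claim "here is what you should attend to next" appears nowhere in the code, only the proxy does.

```python
# Toy illustration: the proxy metric stands in for the hypothesis.
# Nothing here encodes "should attend to" — only a predicted-CTR score.

def predicted_ctr(user_features, item_features):
    """Stand-in for a learned model; here, just a dot product."""
    return sum(u * i for u, i in zip(user_features, item_features))

def recommend(user_features, catalog, k=3):
    """Return the k items with the highest proxy score."""
    ranked = sorted(
        catalog,
        key=lambda item: predicted_ctr(user_features, item["features"]),
        reverse=True,
    )
    return [item["id"] for item in ranked[:k]]

user = [0.9, 0.1, 0.0]
catalog = [
    {"id": "a", "features": [1.0, 0.0, 0.0]},
    {"id": "b", "features": [0.0, 1.0, 0.0]},
    {"id": "c", "features": [0.5, 0.5, 0.0]},
]
print(recommend(user, catalog, k=2))  # → ['a', 'c']
```

Everything downstream of this function is optimised against the proxy; whatever the hypothesis about attention was, it survives only to the extent the proxy happens to track it.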

Second, the stakes are attention, not taste. Taste-matching is the most generous reading. A harder reading, truer to how these systems are deployed, is that they shape what someone notices within a space of possible things to notice. That is a heavier claim to defend than one about preference.

Third, the evaluation question changes. The question is not did the user engage. It is was the hypothesis defensible given what was actually known about the person and the situation. Engagement measures compliance with the hypothesis, not the quality of it.

Most deployed recommenders are parameter-level adaptive in the sense set out in Three Senses of Adaptive. The functional form is fixed; the operating point moves under feedback. The hypothesis is being refined, not reconsidered. Whether refinement is enough depends on how well the initial hypothesis was stated.
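The refinement-without-reconsideration point can be sketched in a few lines (a hypothetical setup, not any particular system): the functional form, a logistic score over fixed features, never changes; feedback only moves the weights, that is, the operating point.

```python
import math

# Minimal sketch of parameter-level adaptation: feedback refines the
# weights of a fixed functional form; the form itself is never revisited.

def score(weights, features):
    """Fixed functional form: logistic over a linear combination."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def update(weights, features, clicked, lr=0.1):
    """One gradient step on log loss — the hypothesis is refined,
    not reconsidered."""
    err = score(weights, features) - (1.0 if clicked else 0.0)
    return [w - lr * err * x for w, x in zip(weights, features)]

weights = [0.0, 0.0]
feedback = [([1.0, 0.0], True), ([0.0, 1.0], False)]
for features, clicked in feedback:
    weights = update(weights, features, clicked)
# Only the parameters moved; the shape of the claim is unchanged.
```

However long the feedback loop runs, the loop can only relocate the operating point within the space the initial form defined, which is why the quality of the initial hypothesis matters.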

What this note is not

  • Not a technical definition of recommender systems.
  • Not a claim that recommenders are bad or good.
  • Not a claim about attention economies.
  • Not a claim that every recommender should be evaluated this way. It is a framing, not a mandate.

Sources

  • Jonathan L. Herlocker, Joseph A. Konstan, Loren G. Terveen, and John T. Riedl. “Evaluating Collaborative Filtering Recommender Systems.” ACM Transactions on Information Systems 22(1):5–53, 2004. DOI: 10.1145/963770.963772.
  • Tobias Schnabel, Adith Swaminathan, Ashudeep Singh, Navin Chandak, and Thorsten Joachims. “Recommendations as Treatments: Debiasing Learning and Evaluation.” Proceedings of the 33rd International Conference on Machine Learning (ICML), PMLR 48:1670–1679, 2016. PMLR; arXiv:1602.05352.