Publishers love header bidding, and DSPs hate it. Where once a single SSP bid into their inventory, now there are many - concurrently. That said, SSPs generally don't represent unique demand for commoditized (banner) inventory - though some do, notably Google, AOL and AppNexus, which have their own DSPs that bias toward their own inventory. The demand lies with the DSPs, which are generally plugged into every SSP. DSPs use some element of randomness in their bidding algorithms, either for pacing or to learn the prices at which publisher inventory tends to clear or perform for a given campaign. DSPs now see the same impression from multiple sources and are effectively forced to draw a random number a dozen times instead of once. When you pick a random integer from 1-10, the expected value is 5.5. The expected value of the maximum of two such numbers, drawn independently, is 7.15. The more draws, the higher the expected maximum. This translates directly into DSPs having a higher expected maximum bid across multiple inventory sources for the same impression than if they bid on that impression only once.
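The order-statistics effect described above is easy to verify with a quick simulation - drawing from 1-10 once averages 5.5, but the maximum of repeated draws climbs steadily:

```python
import random

def expected_max(draws, trials=200_000, seed=1):
    """Monte Carlo estimate of E[max of `draws` uniform integers in 1-10]."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += max(rng.randint(1, 10) for _ in range(draws))
    return total / trials

# A single draw averages 5.5; the max of two averages 7.15 in the
# discrete case, and the expected maximum keeps rising with more draws.
for n in (1, 2, 6, 12):
    print(n, round(expected_max(n), 2))
```

This is the same dynamic as a DSP with bid randomness seeing one impression through a dozen SSPs: its highest bid across those dozen looks is, in expectation, well above what a single look would produce.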
Further, imagine a DSP that has significant scale in the market. On every single impression, the DSP analyzes every single campaign to determine whether its targeting permits a bid on that impression and, if so, how much to bid (plus storing and processing the data for those impressions). If there are tens of thousands of campaigns, this becomes a significant cost for the DSP. If it's the same impression over and over, the DSP incurs a material increase in its hardware costs to process all these bids, while also, as discussed above, yielding maximum bids that are likely higher than if it saw that impression only once. This undermines the DSP in two ways - it pays more to see the same amount of inventory, and its performance is worse for its customers.
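One obvious mitigation is to collapse duplicate requests before running campaign evaluation at all. A minimal sketch, assuming (hypothetically) that requests from different SSPs for the same impression can be keyed by user and placement within a short time window - this is an illustration, not any particular DSP's implementation:

```python
import time

class ImpressionDeduper:
    """Sketch: collapse duplicate bid requests for the same impression.

    Assumes a key derivable from user id + placement within a short
    window - a hypothetical heuristic, not real DSP logic.
    """

    def __init__(self, window_seconds=2.0):
        self.window = window_seconds
        self.seen = {}  # (user, placement) -> timestamp of first sighting

    def is_duplicate(self, user_id, placement_id, now=None):
        now = time.time() if now is None else now
        # Evict stale entries so memory stays bounded.
        self.seen = {k: t for k, t in self.seen.items() if now - t < self.window}
        key = (user_id, placement_id)
        if key in self.seen:
            return True
        self.seen[key] = now
        return False

d = ImpressionDeduper()
print(d.is_duplicate("u1", "slot-a", now=0.0))  # first sighting -> not a duplicate
print(d.is_duplicate("u1", "slot-a", now=0.5))  # same impression via another SSP
print(d.is_duplicate("u1", "slot-a", now=5.0))  # outside the window -> new impression
```

With tens of thousands of campaigns evaluated per request, skipping even one redundant copy of an impression saves a full pass over the campaign set.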
DSPs are now presented with the option to reach publisher inventory through a number of different channels. For years, DSPs have been complaining about SSP practices that deviate from a pure second-price auction - and every exchange engages in some of them. The most egregious are things like cascading floors (floors that vary with the price you bid - a poor man's first-price auction masquerading as a second-price auction) and DSP floor groups (different DSPs get different floors based on their group, so the same inventory is more expensive for some buyers through certain sources). All this has yielded a world where DSPs have an incentive to perform supply path optimization: for any given publisher, the DSP looks at the least expensive means of reaching that publisher. All else equal, an SSP with a higher fee is more expensive than one with a lower fee. An SSP that purports to run a second-price auction but approximates a first-price auction (meaning the DSP applies second-price bidding logic without getting its benefit) is also more expensive. A DSP can thus compare the cost of acquiring inventory, and the performance of the inventory acquired, across each source and determine the best route. It will then shut off, or materially bias against, the other sources. The result is a DSP that stops bidding against itself and stops processing duplicate inventory.
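In spirit, supply path optimization reduces to scoring each path to a publisher on effective cost and delivered performance, then taking the minimum. A toy sketch - the SSP names, fee numbers, and the `first_price_penalty` term for pseudo-first-price auction dynamics are all illustrative assumptions:

```python
# Hypothetical per-publisher supply paths; all numbers are illustrative.
paths = {
    ("pub_a", "ssp_x"): {"fee": 0.15, "first_price_penalty": 0.00, "cpa": 2.10},
    ("pub_a", "ssp_y"): {"fee": 0.10, "first_price_penalty": 0.08, "cpa": 2.30},
    ("pub_a", "ssp_z"): {"fee": 0.20, "first_price_penalty": 0.05, "cpa": 2.05},
}

def effective_cost(p):
    # What the DSP effectively pays on this path: the SSP's stated fee plus
    # the extra cost of auction dynamics that approximate first price.
    return p["fee"] + p["first_price_penalty"]

def choose_path(publisher):
    candidates = {ssp: p for (pub, ssp), p in paths.items() if pub == publisher}
    # Rank by cost-per-acquisition scaled by the effective take rate, so both
    # inventory performance and path cost feed the decision.
    return min(candidates,
               key=lambda s: candidates[s]["cpa"] * (1 + effective_cost(candidates[s])))

print(choose_path("pub_a"))  # prints "ssp_x"
```

A cheap-fee SSP with bad auction dynamics (ssp_y) loses here to a pricier but honest one (ssp_x) - which is exactly the trade-off the paragraph above describes.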
It is likely not the case that every DSP will optimize the supply path to the same SSP. For publisher A, DSP 1 may choose SSP X while DSP 2 chooses SSP Y, based on their internal optimization strategies; on publisher B, DSP 1 may choose SSP Y and DSP 2 SSP X. Alternatively, DSP 1 may (but probably won't) choose SSP X, and only SSP X, for all inventory. If this plays out as described, publishers will continue to work with multiple SSPs, but will see little or no uplift from adding more than a few partners. It also means that competition over margins may exist for some period of time, but then, as optimization paths are selected, reach some sort of steady state - and margins may actually increase again if an SSP determines it has been selected for a publisher and demand changes only nominally with the margin it takes. This outcome assumes a relatively fixed optimization structure, where the DSP chooses an SSP for a publisher and essentially listens only to that source. It may also be the case that DSPs limit themselves to a single source but sample impressions from other channels on an ongoing basis, continuing to conduct supply path optimization through continuous A/B testing. This would require continued analysis, but it would also ensure that competition over margins and auction dynamics continues.
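The "single source plus ongoing sampling" variant is essentially an epsilon-greedy bandit: route almost all traffic through the chosen path, but keep a small exploration budget so the optimization never goes stale. A minimal sketch of that idea - the rates and SSP names are assumptions, not any DSP's actual policy:

```python
import random

def pick_ssp(publisher, best_path, all_ssps, epsilon=0.05, rng=random):
    """Epsilon-greedy supply path selection: mostly use the chosen SSP,
    occasionally sample an alternative to keep the A/B test running."""
    if rng.random() < epsilon:
        return rng.choice([s for s in all_ssps if s != best_path[publisher]])
    return best_path[publisher]

best_path = {"pub_a": "ssp_x"}
counts = {}
rng = random.Random(42)
for _ in range(10_000):
    s = pick_ssp("pub_a", best_path, ["ssp_x", "ssp_y", "ssp_z"], rng=rng)
    counts[s] = counts.get(s, 0) + 1
print(counts)  # ~95% to the chosen path, ~5% split across the alternatives
```

The exploration traffic is what keeps the non-selected SSPs honest: if the chosen path's margin creeps up, the samples from the other channels reveal it, and the path choice can flip.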
The point is that supply path optimization may have the interesting effect of reducing the upside publishers get from header bidding, and reducing the competitive pressure SSPs exert on each other - once they reach a certain detente. This is a somewhat counterintuitive outcome, and it may well prove wrong. Nonetheless, it's worth continuing to think about what happens as DSPs take power back in the header bidding context.