This conflates security through obscurity with preventing an algorithm from being ‘gamed’ (although the line between the two is blurry).
Going back to the actual security example, Security Through Obscurity would be a router manufacturer relying on a proprietary, secret crypto algorithm. This is bad for all the reasons the OP brings up.
However, while it is best if a router uses only publicly vetted crypto algorithms, there is no advantage for the manufacturer in disclosing which such algorithm it is using, as that gives an attacker additional information for free.
Furthermore, it is actually a good idea to obscure which algorithms are being used, if that can be done without compromising the strength of the implementation (for example, by adding some delay jitter to response times so that timing reveals less about which algorithm handled the request). This is not security through obscurity; it is defense in depth.
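To make the jitter example concrete, here is a minimal sketch (the handler names and delay bounds are hypothetical, not taken from any real product): pad each response with a small random delay so that response timing leaks less about which code path handled the request.

```python
import asyncio
import random

# Response-time jitter as one defense-in-depth layer: the underlying crypto is
# untouched; we only blur a timing side channel that could fingerprint it.
async def handle_with_jitter(request, real_handler, min_delay=0.010, max_delay=0.050):
    response = await real_handler(request)
    await asyncio.sleep(random.uniform(min_delay, max_delay))  # hide timing differences
    return response

async def demo():
    async def echo(req):  # stand-in for the real request handler
        return f"ok: {req}"
    print(await handle_with_jitter("ping", echo))

asyncio.run(demo())
```

(A stricter variant pads every response out to a fixed deadline rather than adding random jitter, but either way the point is that it adds a layer without weakening the underlying algorithm.)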
There are many scenarios where this sort of obscurity makes sense, and many places where misaligned incentives create the unlikeliest attackers (e.g. teachers, or even whole school districts, ‘teaching to the test’).
With email spam detection, we do have the right circumstances to treat the algorithms as if they were crypto: Bayesian filters and other criteria can be (relatively) openly disclosed, even though we have no idea exactly which algorithms GMail uses for this purpose.
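To be concrete about what ‘openly disclosed’ means here: the basic naive Bayes approach has been publicly documented for decades, and a toy version fits in a few lines. This is a sketch of the general technique with invented training messages, not anyone’s production filter.

```python
from collections import Counter
import math

# Toy naive Bayes spam scorer: the technique is public knowledge, so showing it
# gives an attacker nothing that decades of published papers haven't already.
def train(spam_docs, ham_docs):
    spam_counts = Counter(w for d in spam_docs for w in d.lower().split())
    ham_counts = Counter(w for d in ham_docs for w in d.lower().split())
    return spam_counts, ham_counts, len(spam_docs), len(ham_docs)

def spam_log_odds(message, model):
    spam_counts, ham_counts, n_spam, n_ham = model
    score = math.log((n_spam + 1) / (n_ham + 1))            # prior odds
    spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
    vocab = len(set(spam_counts) | set(ham_counts)) or 1
    for w in message.lower().split():                       # add-one smoothed likelihood ratios
        p_spam = (spam_counts[w] + 1) / (spam_total + vocab)
        p_ham = (ham_counts[w] + 1) / (ham_total + vocab)
        score += math.log(p_spam / p_ham)
    return score                                            # > 0 leans spam, < 0 leans ham

model = train(["win a free prize now", "free money click here"],
              ["meeting at noon tomorrow", "lunch plans for friday"])
print(spam_log_odds("claim your free prize", model))        # positive: leans spam
```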
With content recommendation systems, while we should in theory have assurances that the black boxes only use algorithms that don’t insert biases, there is no reason to disclose which known-good recommendation algorithms are being used. And that is even sidestepping the issue of recommender systems that are too good, leading to filter bubbles and epistemic closure.
Further, given the increased use of machine learning for these purposes, even fully disclosing the exact algorithm wouldn’t help spammers or the public very much, since you would also need a copy of the training corpora and all of the hyperparameters to evaluate it.
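To make that concrete, here is a deliberately tiny sketch (the ‘recommender’, the corpora, and the labels are all invented): the code is fully disclosed, yet the same code yields opposite recommendations depending on the training data it was given.

```python
import random

# A toy perceptron ranker. The algorithm is fully readable, but its behaviour is
# determined by the training corpus and hyperparameters (lr, epochs, seed).
def train_ranker(examples, lr=0.1, epochs=20, seed=0):
    rng = random.Random(seed)
    vocab = sorted({w for text, _ in examples for w in text.split()})
    weights = {w: rng.uniform(-0.01, 0.01) for w in vocab}
    for _ in range(epochs):
        for text, label in examples:                # label: +1 promote, -1 demote
            pred = 1 if score(weights, text) >= 0 else -1
            if pred != label:
                for w in text.split():
                    weights[w] += lr * label
    return weights

def score(weights, text):
    return sum(weights.get(w, 0.0) for w in text.split())

candidates = ["breaking celebrity gossip", "city council budget vote"]
corpus_a = [(candidates[0], +1), (candidates[1], -1)]   # one (invented) engagement log
corpus_b = [(candidates[0], -1), (candidates[1], +1)]   # another (invented) engagement log

for corpus in (corpus_a, corpus_b):
    model = train_ranker(corpus)
    # Same disclosed code, different training data: the top recommendation flips.
    print(max(candidates, key=lambda c: score(model, c)))
```

Without the corpora (and the weights they produce), reading the algorithm tells you very little about what the deployed system will actually do.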
For human content curation, we have various standards, like disclosure of potential conflicts of interest and “Chinese walls”, and I believe that is the direction we will have to pursue for many of these algorithmic systems, rather than uselessly insisting that every detail of the deployed systems be open to examination.
Of course, deciding which systems need what sort of disclosures is itself a rich area for policy discussion and disagreement, but the point is that painting any and all hiding of implementation details as “security through obscurity” isn’t helping.