I strongly disagree with your point, and I do think it relies on fallacies ...
I will try to explain better. We have a function f(h) -> v, where h is the lighthouse height and v is the visible distance, and we are trying to test ‘how well’ the function predicts the visible range reported by observers.
Suppose that in scenario 1, our function returns a random number uniformly distributed between 0 and 45 miles, where 45 miles is the maximum range that any light can be seen due to atmospheric conditions. I think you agree that would not be a very useful function. It predicts nothing, because it is random, and that was my point earlier about randomness.
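To make scenario 1 concrete, here is a quick sketch; the simulation setup (the specific true value, tolerance, and trial count) is my own illustration, not anything from your post:

```python
import random

def f_random(h):
    # Scenario 1: the prediction ignores h entirely and is
    # uniform on [0, 45] miles.
    return random.uniform(0.0, 45.0)

# For any true value comfortably inside the range, a uniform guess lands
# within +/- 2 miles of it with probability 4/45 (about 9%), regardless
# of the lighthouse height.
random.seed(0)
true_value = 20.0
trials = [abs(f_random(100) - true_value) <= 2.0 for _ in range(100_000)]
print(sum(trials) / len(trials))  # close to 4/45 ≈ 0.089
```

That roughly 9% hit rate is the best a uniform random guess can do at a 2-mile tolerance, which is what I mean by the function predicting nothing.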
In scenario 2, by contrast, suppose that the prediction is never off by more than 5 miles in either direction, that 80% of the observations are within 2 miles of the value predicted by our function, and that 70% are within 1 mile.
Then we don’t need adjectives like ‘good’ or ‘strong’. We don’t even have to know how the function works, or whether it is consistent with round earth or whatnot. We just know that, given the height, there is an 80% chance of being within 2 miles of the observation, and a 70% chance of being within 1 mile.
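Here is a sketch of what testing a candidate f against observations would look like. The horizon formula and the sample heights/observations below are my own illustrative stand-ins, not your actual function or data; the point is only that the within-tolerance fractions are computable without any adjectives:

```python
import math

def f(h_feet):
    # Illustrative stand-in for f(h): the common distance-to-horizon
    # approximation, d ≈ 1.22 * sqrt(h), with h in feet and d in statute
    # miles. The argument does not depend on this particular choice.
    return 1.22 * math.sqrt(h_feet)

def fraction_within(predictions, observations, tolerance_miles):
    # Fraction of observations within `tolerance_miles` of the prediction.
    hits = sum(1 for p, o in zip(predictions, observations)
               if abs(p - o) <= tolerance_miles)
    return hits / len(observations)

# Hypothetical data: lighthouse heights in feet, observed ranges in miles.
heights = [50, 100, 150, 200, 300]
observed = [8.2, 12.5, 14.6, 17.9, 20.8]

preds = [f(h) for h in heights]
print(fraction_within(preds, observed, 2.0))  # share within 2 miles
print(fraction_within(preds, observed, 1.0))  # share within 1 mile
```

Whatever numbers come out of a test like this are the whole claim: given the height, there is an X% chance the observation falls within the stated tolerance.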
Does that work better for you?