While everyone seems to gravitate toward "Best of..." lists, I've come to the conclusion that "Worst of..." lists are probably a lot more useful. For example, you can argue the merits of which entry deserves first place among the top 3 or so out of 6 or more entrants, but chances are any of those top 3 will do the job at least well enough.
But being at the bottom of the list, or rated as largely failing advertised expectations, is pretty bad and a reliable indicator to steer clear.
Also, 'worst' lists are much rarer, since most media (e.g., trade magazines) and industry organizations (e.g., SIA and ASIS) are largely funded by the companies making these products and therefore have strong incentives not to be critical.
Yes, technically the review was posted on Jan 11, two days after the "Worst Tested in Past Year" articles. However, unless all the testing took place on the 10th (which seems unlikely, considering the tech support interactions), the product was actually tested, and its abject failure was known to IPVM, in the days before the awards closed.
Therefore, this review should be moved from "Bad" to "Worst".