Excoriating report which condemns AI-powered assessment tech as no more than a modern form of phrenology. The proof offered: successful attempts to game a purpose-built AI assessment tool (have a go yourself here), along with a decent philosophical challenge to the current reliance on historical training data, which replicates how human bias emerges. It’s a bit of a hit job on AI which doesn’t convincingly prove its case, but it does outline the legitimate challenges assessment tech vendors face in building tooling sophisticated enough to handle these cases.
Worth a read. H/T to brainfooder Andrew Gadomski for the share in the fb group.