An AI-driven company that offered automated risk screenings of babysitters has suspended operations after complaints about the service surfaced.

The Washington Post reported that Predictim, a startup that relied on algorithms to judge the fitness of individual babysitters based on their social media posts, has stopped all scans and is putting its launch of full operations on indefinite hold. Predictim added that refunds will be given to existing customers.

Drew Harwell filed this report on Predictim in the Washington Post:

Company executives, who did not respond to requests for comment early Friday, had contended the social-media-scanning service was a critical tool for helping parents stop risky or “born evil” babysitters before they could get close to their kids. They had previously argued that the service had controls against bias and privacy risks, and that public criticism was misguided.

The Post spoke with several parents who said the personality screenings had influenced their thinking about a babysitter’s character, as well as a babysitter who was stunned to learn that the automated system had flagged her as an elevated risk for bullying and disrespect.

Facebook, Twitter and Instagram blocked much of the service’s access in recent weeks, saying its social-media scans had violated rules on user surveillance and data privacy. Company executives said late last month that they were undeterred by the restrictions and intended to begin incorporating even more data, such as babysitters’ blog posts, into the service’s analyses.

Kate Crawford, a researcher and co-founder of the AI Now Institute, called the service “error-prone, based on broken assumptions, and privacy invading. What’s worse — it’s a horrifying symptom of the growing power asymmetry between employers and job seekers. And low wage workers don’t get to opt out.”