The success of an artificial intelligence (AI) algorithm depends in large part upon trust, yet many AI technologies function as opaque ‘black boxes.’ Indeed, some are intentionally designed that way. That approach charts a mistaken course.
Trust in AI is engendered through transparency, reliability and explainability. To achieve those ends, an AI application must be trained on data of sufficient variety, volume and verifiability. Given the criticality of these factors, it is unsurprising that regulatory and enforcement agencies pay particular attention to whether personally identifiable information (“PII”) has been collected and employed appropriately in the development of AI.