Name: Thatsmyface

What we do:

Thatsmyface enables the world's most loved platforms to tokenise their users' facial data and protect it against deepfake abuse. Our plug-in solution helps companies automate the flagging of malicious content and offer in-platform recourse mechanisms to their users. It also ensures legal compliance by keeping up to date with regulatory changes across the globe.
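The flag-and-recourse flow above can be sketched as a small state machine. This is a minimal illustration only; the state names and allowed transitions are assumptions made for the example, not our production schema.

```python
from enum import Enum, auto

class State(Enum):
    """Lifecycle of a piece of flagged content in the recourse flow (illustrative)."""
    FLAGGED = auto()        # automated flagging marked the content
    VERIFIED_OWN = auto()   # user confirmed the face is theirs
    DISPUTED = auto()       # user contests the use of their face
    REMOVED = auto()        # platform takes the content down

# Allowed transitions: a disputed item can still be removed after review.
TRANSITIONS = {
    State.FLAGGED: {State.VERIFIED_OWN, State.DISPUTED, State.REMOVED},
    State.DISPUTED: {State.REMOVED},
}

def resolve(current: State, action: State) -> State:
    """Apply a recourse action, rejecting transitions the flow forbids."""
    if action not in TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move from {current.name} to {action.name}")
    return action
```

A plug-in built this way can enforce that, say, removed content is never silently restored, because any transition not listed is rejected.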

How do we do it (tech):

Traditionally, deepfake mitigation has been spearheaded by smart detection technologies. But these technologies are, and always will be, outpaced by deepfake creators. We have to admit it: very soon, deepfakes will be entirely undetectable.

That's why Thatsmyface's approach to deepfake mitigation is different. Our facial-recognition technology tokenises each user's facial data and uses sophisticated data crawling to detect when their face has been used. The user is then notified and can do one of three things: verify the face as their own, flag it as not their own, or report it for takedown.
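The tokenise-crawl-notify loop can be sketched roughly as follows. Our real system works on facial-recognition embeddings; in this toy sketch a coarsened SHA-256 hash of a small embedding stands in for the token, and a plain list comparison stands in for the crawler. All names and the rounding precision are illustrative assumptions.

```python
import hashlib
from dataclasses import dataclass

def tokenise(face_embedding: list[float], precision: int = 2) -> str:
    """Derive an irreversible token from a face embedding by coarsening
    the values, then hashing them (toy stand-in for real tokenisation)."""
    coarse = tuple(round(x, precision) for x in face_embedding)
    return hashlib.sha256(repr(coarse).encode()).hexdigest()

@dataclass
class CrawledItem:
    url: str
    token: str  # token derived from a face found in crawled content

def crawl_and_match(user_token: str, crawled: list[CrawledItem]) -> list[CrawledItem]:
    """Return the crawled items whose face token matches the user's token;
    each match would trigger a notification with the three recourse options."""
    return [item for item in crawled if item.token == user_token]
```

Because the token is a one-way hash, the crawler can match faces without ever holding the raw biometric data; two embeddings that coarsen to the same values produce the same token.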

The problem:

  1. Tech is outpacing our ethics, fast. As much as 90 percent of online content could be synthetically generated within a few years. In recent years, deepfake technology has been used to create a synthetic substitute of Elon Musk that shilled a cryptocurrency scam, and to steal millions of dollars from companies by mimicking their executives' voices on the phone.
  2. No one can trust anything anymore. The increasing volume of deepfakes could lead to a situation where "citizens no longer have a shared reality" (Europol). Worse, deepfakes are rapidly eroding trust in key public institutions, such as the courts: in a 2019 child custody case, a doctored recording of a parent abusing their child was submitted and accepted as evidence.
  3. Organisations risk serious legal and financial liabilities. Knowing these risks, policymakers are rapidly introducing new policies: the EU introduced its European deepfake policy, and China recently established provisions that require companies to offer recourse mechanisms for deepfake content. These are sweeping changes that can result in serious liabilities, such as civil and class action lawsuits, heavy fines, bans on operations and loss of consumer trust.

Target market:

Our target market corresponds to the distinct growth stages of our business.

  1. Phase 1 - Tech companies facing serious legal repercussions if they don't take steps to mitigate irresponsible AI (e.g. Reddit, TikTok and Pornhub), and high-profile individuals or victims of doctored content who need their biometric data protected.
  2. Phase 2 - State and local governments fighting the malicious use of biometric data in the digital space.
  3. Phase 3 - Federal governments and international bodies looking to regulate identity doctoring and irresponsible AI, and architects of Web 3.0 building the new web on blockchain, who need digital passports that correspond with their non-fungible token (NFT) mechanisms.