Action List

Nadia Sigurd
Message internal champions within Reddit etc.
Create list of VIPs to reach out to
Alex Hope conversation: any time on Friday, week of the 30th
Read about the Digital Services Act

Competitor Analysis

Target Audience Analysis


Known Unknowns

  1. Tech Spec: As part of our technical specifications, we'll need to determine what technologies and methodologies we'll use to implement facial recognition, NFT creation and verification, data security, and integrations with video platforms. We'll also need to think about system architecture, scaling, and technical resource requirements.

    ChatGPT's suggestions…

    Embed tool in the process

    Machine learning with siamese neural networks is a well-documented approach to image recognition, already used by the likes of Google, Meta, and Tinder, and backed by a large corpus of research proving its efficacy. The bigger hurdle is effective use at scale for external platforms, where API calls are expensive and can drive up costs for large partners. A check at the point of upload is computationally cheaper, but machine learning on video requires frame-by-frame testing, which may necessitate developing more efficient algorithms or sampling methods for large video files.
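As a rough illustration of the approach above, the sketch below compares two face embeddings (the kind of vector a siamese network would produce) with a cosine-similarity threshold, and samples every Nth frame to reduce the cost of scanning video. The 0.8 threshold and the frame step are placeholders, not tuned parameters.

```python
from math import sqrt

def cosine_similarity(a, b):
    # Similarity between two face embeddings, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_match(emb_a, emb_b, threshold=0.8):
    # A siamese network maps each face to an embedding; two faces are
    # treated as the same person when the embeddings are close enough.
    # The 0.8 threshold is a placeholder that would need tuning.
    return cosine_similarity(emb_a, emb_b) >= threshold

def sample_frames(total_frames, step=30):
    # Check every Nth frame instead of every frame, to cut the cost of
    # frame-by-frame testing on large video files.
    return list(range(0, total_frames, step))
```

The sampling step is the cost/recall trade-off mentioned above: a larger step is cheaper but can miss short deepfake segments.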

    (Solution, not part of the problem) NFTs are a large unknown in the project and could prove difficult to implement cross-platform. In that case we would attach a digital token to any video or photo uploaded to partner websites; a decentralised platform would store the data, and matches would be checked continuously across all media.

    Perhaps NFTs are not a necessary part of our product. They are computationally expensive to set up and require a deeper commitment from our partners. A simple API call might be better.
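A minimal sketch of the API-call alternative, assuming a hypothetical match service that stores content fingerprints and is queried once at the point of upload. The in-memory set stands in for the service's database, and the exact SHA-256 fingerprint is a simplification; a production system would need a perceptual hash that survives re-encoding.

```python
import hashlib

# Stand-in for the match service's fingerprint store; a real
# deployment would be a database behind the HTTP API.
REGISTRY = set()

def fingerprint(media_bytes):
    # Content fingerprint. A real system would use a perceptual hash
    # that survives re-encoding, not an exact SHA-256 digest.
    return hashlib.sha256(media_bytes).hexdigest()

def register(media_bytes):
    # Called when a rights holder registers their media with us.
    fp = fingerprint(media_bytes)
    REGISTRY.add(fp)
    return fp

def check_at_upload(media_bytes):
    # What a partner platform would call (a single API request)
    # at the point of upload, instead of minting any NFT.
    return fingerprint(media_bytes) in REGISTRY
```

This keeps the partner's integration to one request per upload, which is the lighter commitment the note above argues for.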

    Looking at how Googlebot works could be an interesting way forward for a broader focus across the web. This may run up against a general trend of companies locking down their websites to combat crawling (Twitter, Reddit, etc.).
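If we did crawl the broader web, we would at minimum have to honour robots.txt, which is exactly where the lock-down trend bites. A small sketch using Python's standard urllib.robotparser; the rules and the bot name are hypothetical examples.

```python
from urllib.robotparser import RobotFileParser

# Example rules in the style of sites that now restrict crawlers:
# everything under /videos/ is off-limits, the rest is allowed.
ROBOTS_TXT = """\
User-agent: *
Disallow: /videos/
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

def may_crawl(url, agent="DeepfakeScanBot"):
    # Googlebot-style crawlers check robots.txt before fetching a page.
    return parser.can_fetch(agent, url)
```

A site that disallows its video paths (as in this example) would make point-of-crawl scanning impossible, which is why upload-time checks with partners remain the safer route.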

  2. Policy: We need to understand the regulations around data privacy, biometrics, blockchain, and NFTs in all the jurisdictions where we plan to operate. Policies such as GDPR in Europe, CCPA in California, and PIPEDA in Canada, among others, may impact how we handle data and conduct business.

    1. Australia: There is a statement… The Albanese government is considering a ban on “high-risk” uses of artificial intelligence and automated decision-making, warning of potential harms including the creation of deepfakes and algorithmic bias.
    2. EU: The EU has taken a proactive approach to deepfake regulation, calling for increased research into deepfake detection and prevention, as well as regulations that would require clear labeling of artificially generated content. The most relevant European deepfake policy trajectories and regulatory frameworks are:
      1. General Data Protection Regulation (GDPR): Mandates transparency about how personal data is used, requires consent for data use, and gives individuals control over their personal data.
      2. Copyright Regime: The EU's Copyright Directive aims to ensure fair remuneration for creators, a broader access to content for consumers, and clarity of rules for platforms. It notably includes a requirement for platforms to prevent the upload of copyrighted material.
        1. New angle - pirated / illegal content + videos e.g. films??
      3. e-Commerce Directive: The e-Commerce Directive provides rules for online services in the EU. It establishes legal responsibility for online content, including a 'safe harbour' provision for hosting providers, but it's under review as part of the Digital Services Act.
      4. Digital Services Act (DSA): The DSA introduces mechanisms that allow users to flag illegal content online and require platforms to cooperate with specialised 'trusted flaggers' to identify and remove it. These rules apply across the EU single market, without discrimination, including to online intermediaries established outside the European Union that offer their services in the single market.
      5. Audio Visual Media Directive: This directive governs EU-wide coordination of national legislation on all audiovisual media. It sets rules to ensure that viewers are not exposed to hate speech, violence or harmful advertising.
      6. Democracy Action Plan: This EU initiative aims to protect democratic systems by promoting free and fair elections, strengthening media freedom, and tackling disinformation.
    3. China: Mandates that individuals and organisations disclose when they have used deepfake technology in videos and other media. The regulations also prohibit distributing deepfakes without a clear disclaimer that the content has been artificially generated, including via watermarks.
      1. Recently established provisions for deepfake providers, in effect as of 10 January 2023, through the Cyberspace Administration of China (CAC). The contents of this law affect both providers and users of deepfake technology and establish procedures throughout the lifecycle of the technology from creation to distribution.
        1. Require companies and people that use deep synthesis to create, duplicate, publish, or transfer information to obtain consent, verify identities, register records with the government, report illegal deepfakes, offer recourse mechanisms, provide watermark disclaimers, and more.
    4. U.S.: Mostly state-specific
      1. California made it illegal to distribute deepfakes of political candidates within 60 days of an election through AB 730, a law that sunsetted on 1 January 2023. Around the same time, California passed AB 602 banning pornographic deepfakes made without consent.
      2. New York also has a deepfake law S5959D passed in 2021, with potential fines, jail time, and civil penalties for unlawful dissemination or publication of a sexually explicit depiction of an individual.
      3. Virginia, Texas etc… mostly around sexually explicit deepfakes
      4. On the federal level, the DEEP FAKES Accountability Act, introduced in 2019, seeks to require deepfake creators to disclose their use, prevent the distribution of deepfakes intended to deceive viewers during an election or harm an individual's reputation, and set potential fines and imprisonment for violators.
  3. Current Solutions: Who are our competitors? What solutions do they offer and where are the gaps? It's also crucial to understand the strengths and weaknesses of their approaches.

    https://thesentinel.ai/

    https://oaro.net/product/oaro-identity/

    https://sensity.ai/

    https://www.biometricupdate.com/202102/yoti-partners-with-mindgeek-for-biometric-checks-to-stop-online-exploitation

    https://www.yoti.com/

    https://www.duckduckgoose.ai/

  4. Scale/Scope of Problem: How widespread is the problem we're addressing? How often do individuals have their identities used without consent in videos? And which demographics or professions (celebrities, influencers, etc.) are most affected?

  5. Target Audience/User: Who will use our product? Are they individual users, corporations, or both? What are their needs and expectations?

    1. Beachhead: online video platforms with incentives to remove deepfakes, RP, etc. Preferably public.
    2. Users: female, usually younger, with a large digital footprint; victims of deepfakes/RP.
      1. Secondary: celebrity status, an important public status to uphold, large online following
    3. Government: European countries with a strong emphasis on governing online activity (France, Germany, UK, Denmark, Finland). Much to learn
  6. Market Appetite: Is there a demand for a service like ours? What are the potential market size and growth trends? Surveys, market research, and competitor analysis can help us gauge this.

  7. Data Security Issues: Handling sensitive biometric data brings significant security challenges. We'll need to understand these challenges and develop strategies to protect user data and maintain trust.

    1. If NFT, who is responsible for data leakage?
    2. How do we responsibly and securely store huge amounts of very personal and sensitive information?
    3. We would be an especially attractive target for attacks, e.g. the Ashley Madison breach.
    4. We would have to integrate many external tools and APIs. How can we ensure all these external partners are secure? What can we do to avoid data leakage when a partner is compromised?
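On points 2 and 4, two standard mitigations worth sketching now: store keyed hashes rather than raw identifiers, and authenticate partner payloads with HMAC signatures. The key values and payload shapes below are placeholders; real keys would live in a managed secret store (KMS), not in source code.

```python
import hashlib
import hmac

# Placeholder: in production this key comes from a KMS, never from code.
SECRET_KEY = b"replace-with-key-from-a-kms"

def pseudonymous_id(user_email):
    # Store a keyed hash instead of the raw identifier, so a database
    # leak alone (as in the Ashley Madison breach) does not expose
    # who our users are.
    return hmac.new(SECRET_KEY, user_email.encode(), hashlib.sha256).hexdigest()

def verify_partner_payload(payload, signature, partner_key):
    # Authenticate data arriving from external partner APIs, so a
    # compromised partner cannot inject forged records into our system.
    expected = hmac.new(partner_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Neither measure protects the biometric embeddings themselves; those would additionally need encryption at rest, which is a separate design decision.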

Unknown Knowns

  1. Industry: What is the broader industry context for our service? This could include trends in video content creation, digital privacy, and blockchain technology.