GRC Top Takeaways from Lawmakers’ Proposed TikTok Ban

By Aric K. Perminter

The White House and TikTok’s critics in Congress have made it clear: they consider TikTok a dangerous social media app and a national security threat. Throughout the March 23, 2023, congressional hearing, TikTok CEO Shou Zi Chew defended the company against these charges. Still, lawmakers strongly support a full ban on the popular short-video app, which is owned by ByteDance, a Chinese company. Rep. Tony Cárdenas (D-Calif.) cited “life and death” issues connected to the app, which has 150 million U.S. users.

Citing national security concerns, more than a dozen countries have introduced full, partial, or public-sector bans on TikTok. Most target government devices, but a growing number of private companies are also blocking the app as the U.S. weighs a nationwide ban unless ByteDance sells the U.S. version of the app.

CIOs, CISOs, and risk professionals are well aware of social media data security and privacy issues. But the TikTok debate and its possible outcomes have wider-reaching governance, risk, and compliance (GRC) implications. The marriage of data collection with automated, AI-enabled applications could usher in an era of devastating cybersecurity incidents that spread with unprecedented speed using adaptive social-engineering intelligence.

Ultimately, spying through the collection of TikTok data isn’t governments’ primary concern. Rather, it’s the ability to pair that data with artificial intelligence to spread disinformation, create movements, or control users’ thinking.

Social media’s role in the 2016 U.S. presidential election made headlines as people sought to understand the full impact of fake news, polarizing filter bubbles, and Russian propaganda campaigns executed across social media platforms.

In 2018, numerous Silicon Valley tech execs publicly claimed that social media harms humanity. Chamath Palihapitiya, a former Facebook vice president, pronounced that social media is “ripping apart the social fabric of how society works.” Sean Parker, who served as Facebook’s first president, warned that social media “exploit[s] a vulnerability in human psychology” to turn children into addicts and interfere with productivity.

If these proclamations didn’t serve as a wake-up call to U.S. companies, lawmakers’ argument that TikTok poses a national security threat outweighing the wishes of the millions of people and businesses that use it should. Pairing innovative AI technologies, like ChatGPT, with virtual reality (VR) gives threat actors the power to predict user behaviors and exploit human psychology, at speeds that could make it impossible for the targets of disinformation and deepfake campaigns to distinguish what is real.

The proposed ban on the country’s most popular smartphone app foreshadows a tsunami of new AI apps introduced into the workplace without any governance or risk controls. Today, there is no standard approach or methodology for deploying AI within an enterprise. Unlike a SaaS application housed in a managed environment, like AWS, that comes with cloud security controls, AI sits on top of that tech stack, so it’s boundaryless.

Business executives tend to prioritize productivity improvements over cybersecurity concerns, but automation is inherent in AI. An AI-enabled application can adapt its own behavior and act autonomously in its interactions with users. If AI is developed without cybersecurity guardrails, the opportunity to control it may be lost forever.

Before the AI wave crashes and causes irreparable damage, CISOs and risk managers need to get AI controls in place quickly, then start thinking about the use cases that will deliver the greatest business value.

The high-stakes cybersecurity implications of AI, ChatGPT, and TikTok have a lot of folks racing to promote and harmonize best practices, standards, and frameworks for AI and related technologies. Cybersecurity professionals can use these resources to build their AI governance and risk management programs.

  • The Holistic Information Security Practitioner Institute (HISPI) is an independent training, education, and certification 501(c)(3) non-profit organization that is working to crowdsource and open-source AI governance standards that are suitable for highly regulated organizations.
  • In collaboration with private and public sectors, NIST has developed the NIST AI Risk Management Framework (AI RMF) to help organizations incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. NIST’s AI taxonomy helps simplify the categorization of AI lifecycle risks so that stakeholders may better recognize and manage them.
  • Individuals from groups like AI Squared and Forward Edge-AI are helping companies adopt and integrate AI correctly. AI Squared’s web browser code initiates an AI work process or workflow based on approved enterprise use cases. This allows businesses to quickly scale AI without building an entire ecosystem to run it. Forward Edge-AI’s tools are helping under-resourced security operations stay ahead of threats.
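The governance pattern running through these resources, maintaining an approved list of enterprise AI use cases and enforcing policy before a workflow runs, can be sketched in code. This is a hypothetical illustration only: the use-case names, sensitivity levels, and policy rules below are assumptions for the sketch, not any vendor’s actual product or the NIST taxonomy itself.

```python
from dataclasses import dataclass

# Hypothetical policy table: each approved enterprise AI use case gets a
# ceiling on the sensitivity of data it may touch. Names are illustrative.
APPROVED_USE_CASES = {
    "summarize_public_docs": {"max_data_sensitivity": "public"},
    "draft_marketing_copy": {"max_data_sensitivity": "internal"},
}

# Ordered from least to most sensitive.
SENSITIVITY_ORDER = ["public", "internal", "confidential", "restricted"]


@dataclass
class AIRequest:
    use_case: str
    data_sensitivity: str


def is_allowed(request: AIRequest) -> bool:
    """Gate an AI workflow: only approved use cases may run, and only on
    data at or below the sensitivity ceiling the policy assigns them."""
    policy = APPROVED_USE_CASES.get(request.use_case)
    if policy is None:
        return False  # unapproved use case: deny by default
    ceiling = SENSITIVITY_ORDER.index(policy["max_data_sensitivity"])
    level = SENSITIVITY_ORDER.index(request.data_sensitivity)
    return level <= ceiling


# Example checks
print(is_allowed(AIRequest("summarize_public_docs", "public")))       # True
print(is_allowed(AIRequest("draft_marketing_copy", "confidential")))  # False
print(is_allowed(AIRequest("train_on_customer_pii", "restricted")))   # False
```

The deny-by-default stance is the point: a use case absent from the approved list is blocked until governance stakeholders explicitly classify its risk, which is exactly the discipline a standard taxonomy makes practical.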

AI will undoubtedly make our lives and work streams more productive. Businesses will derive value and achieve performance metric improvements through AI. But the boundaryless nature of AI means that threat actors will be able to identify and exploit the weaknesses within an organization’s security controls faster.

GRC program leaders should take heed of the proposed TikTok ban. Act now before it is too late. Get started with a strong AI governance framework. Build the framework using a standard taxonomy that helps all stakeholders understand and control AI risks. Then make sure employees stay within the guardrails. Outsource the task if the project is too big or falls too far outside your team’s domain of expertise. Taking action now is the only way to make sure that AI serves the business securely, safely, and correctly, today and in the future.

Aric K. Perminter is Founder, Chairman and CEO of Lynx Technology Partners, a trusted governance, risk and compliance (GRC) managed service partner of a growing list of customers in highly regulated industries worldwide. Respected for his altruism and visionary leadership, Mr. Perminter has helped hundreds of companies achieve a strong cybersecurity stance and high performance throughout his 25-year career. He is the second member and shareholder of THREAT STREAM, an investor in Security Current and CloudeAssurance, and serves on the executive boards of BCT Partners, Cyversity, and Cyware.