In today’s rapidly evolving digital landscape, organizations are increasingly reliant on artificial intelligence (AI) to enhance operational efficiency, drive innovation, and maintain a competitive edge. However, integrating AI into business processes brings a unique set of challenges and risks, particularly around security and productivity. A recent article on Help Net Security discusses these challenges, highlighting the delicate balance between employee security and productivity that organizations must manage when adopting AI technologies.
The Misuse of AI by Employees
A concerning trend highlighted in the same Help Net Security article: 22 percent of employees are using AI technologies in ways that contravene their company policies. This statistic underscores the challenge of governing technology use within organizations and emphasizes the security risks associated with unsanctioned AI applications.
This misuse ranges from the benign, such as using AI tools for unapproved tasks, to serious security infractions, including handling sensitive data in ways that could lead to breaches or leaks. It points to a critical gap in employee training and awareness around the appropriate use of AI technologies and the risks of misusing them.
We all know exactly why this is happening: “Look, this brand-new, unvetted tool will do 90% of my work in 10 minutes … why wouldn’t I use it?”
An ever-increasing demand to be more productive and the desire to automate tedious tasks make all of us prone to reaching for tools we don’t really understand, tools that haven’t been cleared or security tested.
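One practical mitigation is a simple outbound check before anything is pasted into an external AI tool. Below is a minimal Python sketch of that idea; the patterns and the `screen_prompt` helper are illustrative assumptions, not a production data-loss-prevention filter.

```python
import re

# Hypothetical patterns; a real deployment would use a proper DLP engine.
SENSITIVE_PATTERNS = {
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the kinds of sensitive data found in text bound for an AI tool."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize: contact jane.doe@example.com, card 4111 1111 1111 1111"
findings = screen_prompt(prompt)
if findings:
    print(f"Blocked: prompt contains {', '.join(findings)}")
```

Even a crude filter like this catches the most careless leaks; the harder governance problem is the employee who works around it.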
The Risks of AI in Organizations
Risks range from data breaches and misuse of AI capabilities to more insidious issues like bias in AI algorithms, which can perpetuate discrimination and impact decision-making processes. Furthermore, AI systems can be manipulated or exploited by malicious actors, leading to severe consequences for both internal and external stakeholders.
Unlike other software you may be using, AI has a unique draw that encourages us to place far more trust in it than we would in other new technologies.
For instance, an employee might feed an AI system internal financial and personnel data to generate spreadsheets or graphs, placing a significant amount of trust in the security and confidentiality of that tool. This is risky because AI tools are new and relatively untested, and the tools themselves can become the vector for a breach.
In March 2023, a bug in ChatGPT’s source code led to the exposure of sensitive data belonging to ChatGPT Plus subscribers during a nine-hour window. This breach, attributed to a vulnerability in the open-source redis-py client library used by OpenAI, compromised the names, email addresses, payment details, and last four digits of credit card numbers of approximately 1.2% of ChatGPT Plus subscribers active at the time.
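To make that failure mode concrete, here is a deliberately simplified Python sketch of the underlying bug class: a pooled, pipelined connection whose request/response pairing breaks when a request is abandoned mid-flight. This illustrates the class of bug only; it is not OpenAI’s or redis-py’s actual code.

```python
import queue

class PipelinedConnection:
    """Responses come back strictly in the order requests were sent."""
    def __init__(self):
        self._responses = queue.Queue()

    def send_request(self, user: str) -> None:
        # The backend will eventually answer with this user's private data.
        self._responses.put(f"billing details for {user}")

    def read_response(self) -> str:
        return self._responses.get_nowait()

conn = PipelinedConnection()

# User A's request is sent, but the client cancels before reading the
# reply, leaving an unconsumed response sitting on the connection.
conn.send_request("alice")

# The same pooled connection is then handed to user B. B's first read
# returns the response that was meant for A.
conn.send_request("bob")
print(conn.read_response())  # -> "billing details for alice"
```

The standard fix for this class of bug is to discard, rather than reuse, any connection whose request was interrupted.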
In another instance, from February of this year, ChatGPT was found to be leaking user credentials, chat histories, and more.
The Risks of AI-Cloned Faces and Voices
As I referenced in this post, AI now allows for the cloning of individuals’ faces and voices, posing significant security and privacy risks. The ability to mimic someone’s identity enables more sophisticated forms of identity theft and can undermine biometric security measures.
Creating a deepfake can now be done with a single picture, and cloning someone’s voice requires only a few minutes of that person speaking. To create a convincing overlay of a victim’s face onto your own, you need only a few minutes of video, ideally in HD, of that person.
The above video is an example of Microsoft’s VASA-1 creating a deepfake video from a single image.
These are new threats that must, from now on, be factored into both the professional and personal risk assessments we perform on our identities.
NSA's Guidelines for Safe AI Deployment
In response to these challenges, the National Security Agency (NSA) has established a set of guidelines for the safe deployment of AI in the defense sector, which can serve as a model for other sectors as well. These guidelines, detailed in a report by Clearance Jobs, emphasize a structured approach to AI deployment that includes risk assessment, ethical considerations, and continuous monitoring.
Key aspects of the NSA’s approach include:
Risk Assessment: Thorough evaluation of potential security risks at each stage of AI system development and deployment.
Ethical AI Use: Ensuring that AI applications adhere to ethical guidelines that prevent misuse and bias, thus maintaining trust and integrity.
Training and Awareness: Educating employees about the capabilities and limitations of AI, fostering a workforce that can collaborate effectively with AI systems.
Continuous Monitoring and Evaluation: Regularly assessing the performance and security of AI systems to identify and mitigate any emerging threats or failures (a minimal sketch of this idea follows this list).
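As an example of what that last item could look like in practice, here is a hypothetical Python sketch that wraps every call to an approved AI tool in an audit-log entry. `call_model` is a stand-in for whatever client an organization actually uses, and hashing the prompt, rather than storing it, keeps the audit log from becoming a second copy of sensitive data.

```python
import hashlib
import json
import time

def call_model(prompt: str) -> str:
    # Placeholder backend; swap in the organization's approved AI client.
    return f"(model response to {len(prompt)} chars)"

def audited_call(user: str, prompt: str, log_path: str = "ai_audit.jsonl") -> str:
    """Call the model and append an attributable record to an audit log."""
    response = call_model(prompt)
    record = {
        "ts": time.time(),
        "user": user,
        # Hash the prompt so usage is attributable without retaining content.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return response

print(audited_call("alice", "Draft a status update for the Q3 review."))
```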
While the above guidelines from the NSA are a step in the right direction, I also find them woefully inadequate when it comes to actually protecting individuals.
Conclusion
The integration of AI into organizational workflows presents both opportunities and significant risks. These are a new class of security risk, and in my opinion, organizations are not giving them the consideration they demand.
Training Resources:
For individuals looking for hands-on training that covers all of the above topics, Covert Access Team (covertaccessteam.com) provides training courses focused on physical penetration testing, lockpicking, bypass techniques, social engineering, and other essential skills.
Covert Access Training - 5-day hands-on course designed to train individuals and groups to become Covert Entry Specialists
Physical Audit Training - 2 days of intensive physical security training focused on enhancing facility defenses and bolstering security measures against attackers
Elicitation Toolbox Course - 2-day course that focuses on elicitation and social engineering as critical aspects of Black Teaming
Counter Elicitation Course - 2-day course that teaches how to identify and protect against elicitation tactics aimed at extracting confidential information
Cyber Bootcamp for Black Teams - 2-day course designed explicitly for physical penetration testers who need vital cyber skills to add to their toolbox
Private Instruction - Focused learning & training based on your needs