
Misinformation and disinformation are increasingly becoming a cybersecurity problem.
❓What are the biggest challenges and impacts of this threat for cybersecurity?
❓What actions or investments should organisations prioritise to stay ahead of (AI-powered) disinformation threats?
These were some of the questions I posed to the power-packed panel at the FutureCISO conference held on 7 May in Singapore.
Some key points to emerge from the discussion included:
💡The tangible impacts of disinformation on brand reputation, and the financial consequences that follow, drawn from real-life incidents.
💡Unsurprisingly, AI figured heavily in the conversation. Using AI to fight AI-fuelled misinformation is still in its early stages, and guardrails for ethical AI use need to be established to balance innovation with security. Such governance frameworks are being built in real time.
💡Companies still in the early stages of AI adoption can learn from earlier adopters about shadow AI and other AI-related pitfalls.
💡To build digital trust and reduce misinformation, CISOs and other security leaders will need to work with the rest of the company to embed awareness and a security-conscious culture.
💡 Interestingly, an audience poll taken before the panel began showed that the majority believed training and awareness was the area of mis/disinformation security that required the most investment in their organisation.
This ties in with conversations during the conference about the importance of a security-aware culture in organisations, and the role of soft (or what I call human) skills – within cybersecurity teams as well as among employees and customers in general. It was heartening to see the human security elements highlighted during the conference. Imho, while regulation, tech, and processes play important roles, it is the human element that will be the defining factor.