The Indonesian Express
Cybersecurity firm Ensign InfoSecurity has warned that while artificial intelligence (AI) can be a useful work tool, its rapid adoption carries a risk of data leaks. "Everything uses AI now. We input data there, and it is certainly useful because it helps a lot with our work, but there is a risk of data leaks," said Adithya Nugraputra, Head of Consulting at Ensign InfoSecurity, at the presentation of Ensign InfoSecurity's 2025 Cyber Threat Landscape Report in South Jakarta on Wednesday.

To prevent such leaks, he stressed the importance of maintaining data security. Companies using AI, he said, need clear data classification that determines which data may be fed into AI tools and which must never be shared. He also recommended that companies use trusted, security-vetted large language models (LLMs). "Because companies only want genuine analysis, and LLMs can now also be installed in local environments that are not connected to the internet," he said.

Companies are further advised to adopt security systems that filter content before it is uploaded to public services such as ChatGPT, so that no sensitive data leaks out.

Adithya also underlined the need for clear internal policies and guidelines on the use of AI in the workplace. These guidelines should regulate which tools may be used, how they should be used, and what data must not be shared.

He cautioned that banning AI outright without the right approach is likely to be ineffective: even if a company blocks access to AI technology, employees can still reach it from personal devices such as mobile phones. "They use it to make their work run more smoothly, and it also helps the business. So we must respect this. We can't just keep prohibiting it, because if we do, it will actually hinder the business," he said.
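The pre-upload filtering Adithya describes can be pictured with a minimal sketch. The Python example below is a hypothetical illustration, not Ensign InfoSecurity's product or a named vendor's API: the pattern list, labels, and redaction policy are assumptions standing in for an organization's real data-classification rules.

```python
import re

# Hypothetical patterns for illustration only; a real deployment would use
# the organization's own data-classification rules (e.g. a DLP engine).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "id_number": re.compile(r"\b\d{16}\b"),  # e.g. a 16-digit national ID
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace matches of each sensitive pattern with a [REDACTED:<label>] tag.

    Returns the scrubbed text and the list of pattern labels that fired,
    so a policy layer can decide whether to block the upload entirely.
    """
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED:{label}]", text)
    return text, findings

if __name__ == "__main__":
    prompt = "Summarize this: customer jane@example.com, card 4111 1111 1111 1111."
    scrubbed, findings = redact(prompt)
    print(scrubbed)   # sensitive fields replaced before the prompt leaves the network
    print(findings)   # ['email', 'credit_card'] -> could trigger a block instead
```

In practice this kind of check usually sits in a proxy or gateway so it applies to every outbound prompt, and a finding can escalate from redaction to blocking the request, in line with the article's point that filtering should happen before content reaches a public service.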