
Why employers are encouraging workers to use Artificial Intelligence

"It's easier to ask for forgiveness than permission," says John, a software engineer at a financial technology company, adding that "you just move on. And if you get into trouble later, you fix it then."

He is one of many people who are using their own artificial intelligence tools at work, without permission from the IT department (which is why we are not using his full name).

According to a survey by Software AG, half of knowledge workers use personal AI tools.


The research defines knowledge workers as “those who work primarily at a desk or computer.”

Some do this because their IT team doesn't offer artificial intelligence tools, while others say they simply want their own choice of tools.

John's company offers GitHub Copilot for AI-powered software development, but he prefers Cursor.

"It's mostly a glorified autocompleter, but it's really good. It fills in 15 lines at a time, and then checks them to see if it's what you would have written yourself. It frees you up. It makes you feel more fluent," he claimed.

Unauthorized use doesn't violate any policy; it's just easier than risking a lengthy approval process, John says. "I'm too lazy and too well paid to chase up the expenses," he adds.

John recommends that companies stay flexible in their choice of AI tools. “I’ve been telling people at work not to renew team licenses for a year at a time, because in three months the whole landscape changes. Everybody is going to want to do something different and will feel locked in by the sunk cost.”

The recent release of DeepSeek, a freely available artificial intelligence model from China, is likely to expand the range of AI options even further.

Peter (not his real name) is a product manager at a data storage company, which offers the Google Gemini AI chatbot to its employees.

External AI tools are prohibited, but Peter uses ChatGPT through the Kagi search tool. He finds the greatest benefit of AI comes from challenging his own thinking, asking the chatbot to respond to his plans from different customer perspectives.

"AI is not so much about giving answers, as it is about being a sparring partner. As a product manager, you have a lot of responsibility and you don't have much opportunity to openly discuss strategy. These tools enable that in an unlimited capacity," he asserted.

The version of ChatGPT he uses (4o) can analyze videos: “You can get summaries of competitors’ videos and have a whole conversation [with the AI tool] about the points in the videos and how they overlap with your own products.”

In a 10-minute chat with ChatGPT, he can review material that would normally take two or three hours to view.

He estimates that his increased productivity is equivalent to the company gaining a third of an additional employee for free.

He's not sure why the company has banned external AI tools. "I think it's a matter of control. Companies want to have a say in the tools that employees use. It's a new frontier of IT and they just want to be conservative," he says.

The use of unauthorized AI applications is sometimes called ‘shadow AI.’ It is a more specific version of ‘shadow IT,’ which refers to the use of software or services that the IT department has not approved.

Harmonic Security helps identify shadow AI and prevent corporate data from being inappropriately fed into AI tools.

It is tracking more than 10,000 AI applications and has seen more than 5,000 of them in use.

These include customized versions of ChatGPT and business software that has added AI features, such as the Slack communication tool.

Despite its popularity, shadow AI comes with risks.

Modern AI tools are built by processing large amounts of information, in a process called training.

About 30 percent of the applications that Harmonic Security has seen in use are trained using information entered by the user.

This means that the user's information becomes part of the AI tool and may be released to other users in the future.

Companies may worry about their trade secrets being exposed by AI tool responses, but Alastair Paterson, CEO and co-founder of Harmonic Security, thinks that’s unlikely. “It’s quite difficult to get the data directly from these [AI tools],” he says.

However, companies will be concerned about their data being stored in AI services that they do not control, are not aware of, and that may be vulnerable to security breaches.

It will be difficult for companies to fight against the use of AI tools, as they can be extremely beneficial, especially for younger employees.

“[AI] allows you to condense five years of experience into 30 seconds of requirements engineering,” says Simon Haighton-Williams, CEO of The Adaptavist Group, a UK-based software services group.

"It doesn't completely replace [experience], but it's a great help, just like a good encyclopedia or a calculator that allows you to do things that you wouldn't be able to do without those tools," Williams said.

What would he say to companies that discover they are using shadow AI?

“Welcome to the club. I think everybody probably does. Be patient, understand what people are using and why, and then find ways to embrace and manage it rather than just trying to stop it. You don’t want to be left behind as the organization that hasn’t [embraced AI],” he adds.

Trimble provides software and hardware to manage data about the built environment. To help employees use AI safely, the company created Trimble Assistant – an internal AI tool based on the same models used in ChatGPT.

Employees can consult Trimble Assistant for a wide range of applications, including product development, customer support and market research. For software developers, the company offers GitHub Copilot.

Karoliina Torttila, director of AI at Trimble, says: “I encourage everyone to explore all kinds of tools in their personal life, but to recognize that professional life is a different space, with some important safeguards and considerations.”

The company encourages employees to explore new AI models and applications online.

“This brings us to a skill that we all have to develop: We need to understand what is sensitive information,” she says, adding that “there are places where you wouldn’t put your medical information, and you need to be able to make the same assessments for work data.”

She believes that employees' experiences with AI in their personal lives and projects can help shape company policies as AI tools evolve.

"There needs to be an ongoing dialogue to understand which tools serve us best," she says. /Telegraph/