What are the consequences of AI in the workplace?

What is generative AI and how does it work?

Ever since ChatGPT became a social meme, industries the world over have been racing to integrate the Next Big Thing of our time: Generative Artificial Intelligence, a term that includes slightly more than 33% true words.

Every industry has been using some form of AI for decades now, whether it's steering assist in vehicles or damages assessments in litigation research. However, in the last few years, (marketing) geniuses like Sam Altman have held up generative AI as a quantum leap in technology, promising that (soon! Very soon!) it will be able to perform almost any human task in a fraction of the time and at only 10 to 20 times the energy cost. So, what are the consequences of AI in the workplace?

The AI industry promises that any function previously thought to require human creativity, whether it be art, actuarial science, economics, law, medicine, or human relationships, can be perfectly and infinitely replicated by a computer as soon as the next $100 billion round of fundraising is complete. This would be quite a boon to employers dreaming of replacing their entire workforce with a subscription to OpenAI or Nvidia, as these companies have stated is their end goal, thereby ushering in the promised land of "technofeudalism" that will, at the very least, make my job as an employment lawyer relatively simple (or rather, simpler for the AI chatbot that replaces me in a few years).

However, before employers start stocking up on pink slips, it is important to remember that generative AI cannot simply replace workers, as programs like ChatGPT do not actually think. Rather, ChatGPT is doing predictive text generation based on a pool of existing data. Data goes in, the machine analyzes it to find patterns, and when queried it spits out the most likely combination – essentially what your Nokia flip phone was doing with T9 texting back in 2007, but on a slightly larger scale. At best, these models can “simulate” human thought in the way that a shadow puppet can simulate Macbeth – poorly, riddled with hallucinations, and massively infringing on copyright laws.
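For readers who want that mechanism made concrete, here is a minimal sketch in Python (with a toy corpus invented purely for illustration) of the same underlying idea: count which word tends to follow which, then emit the most likely successor. Real large language models do this with neural networks over billions of parameters rather than a lookup table, but the shape of the task, data in, most likely continuation out, is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: the same "data in, most likely
# combination out" idea as T9, on a (much) smaller scale.
# The corpus below is invented for illustration.
corpus = (
    "the employer terminated the employee "
    "the employee sued the employer "
    "the employer settled"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent successor seen in the training data."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

# Generate a short "completion" from a seed word.
word = "the"
output = [word]
for _ in range(5):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```

Run it and the "model" happily loops through the most statistically common phrase in its training data, with no understanding anywhere in sight.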

  1. Constructive Dismissal

Employers weighing whether to replace employee duties with AI should consider the risk of constructive dismissal, which occurs when the employer unilaterally changes a fundamental term of the employment agreement, such as duties, title, or seniority.

If more and more of an employee's job duties are replaced with AI, the reduction in the role may be significant enough to constitute a constructive dismissal, which could entitle the employee to termination pay in lieu of common law notice. Using AI models to assess employee performance may likewise amount to a unilateral change significant enough to trigger a constructive dismissal.

  2. Privacy and Copyright

Other considerations abound as well. Whatever material is fed into or generated by a generative AI service passes into the hands of the company that operates that service, which can have massive privacy implications, particularly in industries subject to privacy legislation or a duty of confidentiality. Asking ChatGPT to write up a demand letter or a patient report is as good as emailing OpenAI your client's personal (and confidential) information.
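To see why, look at what a typical integration actually does. The sketch below is illustrative only (Python, using the official openai package; the model name and the prompt contents are placeholders we invented): the point is that the full prompt text, confidential facts included, leaves your systems as the body of an API request to OpenAI's servers.

```python
# pip install openai
from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable.
client = OpenAI()

# Placeholder facts: everything interpolated into this prompt,
# client name and confidential details included, is transmitted
# to and processed on OpenAI's servers.
confidential_facts = "Client: [name]. Terminated after 12 years; ..."

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": f"Draft a demand letter based on: {confidential_facts}",
    }],
)
print(response.choices[0].message.content)
```

How long that data is retained, and what else it may be used for, is governed by the provider's terms of service, not by you or your client.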

The question of ownership is even more complicated when it comes to copyright. Does Nvidia own the memorandum you used NVLM to generate, or do you? And who owns all the data (approximately 12 trillion digital words' worth) that these companies used to train their machines? The plaintiffs in the thirteen (and counting) class action lawsuits against various generative AI companies would say the original creators of that data do, which could pose a problem for customers who want to use these services.

  3. Discrimination and Torts

Some employers may choose to rely on AI to help with decision making in hiring or performance management (in other words, replacing human resources and managerial workers with robots). One of many issues is that these models are trained on historical data, and historically, a lot of hiring and performance management has been very, very discriminatory. This historical bias gets fed into these machines, which unsurprisingly spit out biased predictions – or “garbage in, garbage out”, as the phenomenon is known in computer science.
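A minimal sketch of that mechanism, in Python with numbers invented purely for illustration: a "model" that does nothing more than learn historical hire rates per group will faithfully reproduce whatever disparity its training data contains.

```python
from collections import Counter

# Invented "historical" hiring records as (group, hired) pairs.
# The 2:1 disparity is illustrative, not real data.
history = (
    [("A", True)] * 60 + [("A", False)] * 40
    + [("B", True)] * 30 + [("B", False)] * 70
)

# "Training" step: learn each group's historical hire rate.
hired_per_group = Counter(group for group, hired in history if hired)
seen_per_group = Counter(group for group, _ in history)
hire_rate = {g: hired_per_group[g] / seen_per_group[g] for g in seen_per_group}

def score_candidate(group: str) -> float:
    """Naive model: score a candidate by their group's past hire rate."""
    return hire_rate[group]

# Two otherwise identical candidates get different scores
# solely because of the historical pattern:
print(score_candidate("A"))  # 0.6
print(score_candidate("B"))  # 0.3
```

The model never sees the word "discrimination"; it simply extrapolates the pattern it was given, which is precisely how historical bias becomes automated bias.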

Therefore, leaving these decisions in the cold, unfeeling hands of an AI model that has been trained on a steady diet of discriminatory slop is as good as hanging a sign in front of the office reading "We promise to continue the endless cycle of structural discrimination in the workplace!" Sure, it may be quicker, but it's generally not the kind of admission an employer wants to hand to the lawyer of the employee who sues them for discriminatory hiring practices.

Then there is the question of liability. Let's say a company uses an AI model to make a hiring decision that turns out to be discriminatory, or to make a business decision that results in significant losses for its clients. Who is the employee or client going to sue? It won't be the tech company that developed the model; it will wash its hands of the matter, leaving the company that relied on the model to face liability alone.

Is there any legislation around AI?

While the government of Canada has developed a proposed regulatory framework for the development and use of AI, called the Artificial Intelligence and Data Act (AIDA), it mostly focuses on employers' obligations to disclose their use of AI systems; it does little to address the many potential pitfalls associated with deploying these systems in the workplace.

As noted, AI adjuncts have been widely used for decades, and there are many use cases that show what a good model of deployment looks like. Systems that make database queries quicker and easier are a huge benefit, as are automatic process stabilization systems. Generative AI, however, is currently much more flash than substance, and carries with it significant risks and uncertainties. Using AI systems to assist your workers and improve efficiency is great; relying on those systems to replace workers is an invitation to disaster.

The use of AI in the workplace is a little like gutter bumpers in a bowling game – it’s a great way to make sure the ball hits the pins, but the game only works if a person rolls the ball.

How can Whitten & Lublin help?

AI functionality continues to grow, but no matter the hype around generative AI, whether for its present or its future, employers should understand the consequences of AI in the workplace. It's important to understand the benefits, limitations, and, indeed, even the risks of widespread implementation. If you are an employer looking to explore the ramifications of introducing such AI programs into your workplace, or an employee seeking guidance on navigating this introduction in your own workplace, Whitten & Lublin is here to assist you. Contact us online or by phone at (416) 640-2667.

Author – Aaron Zaltzman