In 1942, science-fiction writer Isaac Asimov set out three laws of robotics, one of which says that “a robot may not injure a human being or, through inaction, allow a human being to come to harm.” Artificial intelligence (AI) and robotics have been a part of science fiction for decades, but they are now coming closer to making major inroads into workplaces across Canada.
While AI and robotics are becoming workplace realities, the jury is still out on the impact they will have on the workplace. The term AI refers to the use of artificial means to replicate human thinking and predictive abilities. Machines with AI capabilities can use data and data analysis to learn, make predictions and improve their performance over time. Robots, which were initially built to carry out simple work tasks, are increasingly using AI to think and execute complex tasks.
“There is a huge advantage of using robotics to eliminate unsafe work,” says Shalaleh Rismani, chief innovation officer at Generation R Consulting in Vancouver. But questions have to be asked: Will they make people’s jobs safer or replace them? And will people whose jobs are replaced be retrained for other work?
“It seems like a lot of companies are quite excited about analytics, and I agree that there is a lot of potential there,” Rismani suggests. “But it is unclear that companies know how it is actually going to be helping them.”
The general public might already be well-acquainted with AI, which is used in self-driving cars, but applications are increasingly seen in a number of other industries as well. In construction, Triax Technologies Inc. in Norwalk, Connecticut, makes the Spot-r Clip, a wearable device that improves injury-response times by detecting falls by construction workers and sending immediate email or text notifications to supervisors, including information on who fell, where, and the height at which the fall occurred.
Fastbrick Robotics, an Australia-based company that develops digital construction technology solutions, has developed Hadrian X, a bricklaying robot that can complete a house in two days. And United States-based DroneDeploy provides a cloud software platform for commercial drones that can survey construction sites, eliminating the need for workers to access dangerous areas.
In the transportation sector, airlines have jumped on the bandwagon by deploying AI in chatbots that respond to common passenger questions and use facial recognition to verify identities for luggage and boarding. Air carriers are using algorithms to predict passenger behaviour and reduce overbooking by analyzing historical passenger data, weather patterns and time of day.
The growing footprint of AI also has implications for the fleet industry. Tesla’s Semi, a fully electric semi-truck equipped with second-generation, semi-autonomous technology, is scheduled to go into production next year. The company claims the technology will boost transportation safety since human error is responsible for many of the estimated 4,000 deaths annually in truck-related collisions in the United States. Tesla says the autopilot system will help avoid collisions, but drivers will still need to be alert and be prepared to take action at any time while using it.
According to Silicon Valley company Starsky Robotics, it will be possible to put drivers behind a screen in an office instead of in the truck cab. In February, Starsky completed an 11-kilometre driverless trip in Florida without a human being in the truck.
Promising and potentially far reaching as AI and robotics can be, an incident with Tesla’s semi-autonomous vehicles is a reminder that this budding technology has some way to go. In May 2016, a man was killed in Williston, Florida, when his Tesla Model S crashed and went under the trailer of an 18-wheel truck while the autopilot was activated. It appeared that the car’s radar, camera and sensors had misinterpreted the high ride height of the trailer as an overhead road sign. Tesla explained that the trailer’s white colour against a brightly lit sky also made it difficult to see the truck.
Commenting on the 2016 Tesla incident, Adam Jarvis, vice-president of policy and research at Global Advantage Consulting Group in Ottawa, says computers that use AI will be better than humans “at most of the things we do” and help reduce the number of workplace accidents, but will not eradicate them. “That is going to be why you want those machines,” Jarvis says.
The incident also highlights the risk of over-relying on technology and letting human control and judgement take a back seat, Jarvis cautions. “Ultimately, the machines are not going to be perfect — at least not for a very long time.”
Global Advantage, which has expertise in ecosystem mapping, consulting and analysis, is examining the adoption rates of AI technology in Canada’s natural resources sector. “For our clients, AI has sort of become a buzzword in the last little while,” Jarvis says.
For natural resources companies, robotics can remove people entirely from hazardous situations. Robots are already taking over some of the most dangerous jobs, such as defusing bombs, while welding robots have replaced humans on automated assembly lines, eliminating their exposure to fumes, heat and noise.
“If you don’t have to send people into dangerous situations because you can send a robot, that keeps people a lot safer,” Jarvis says.
AI can also monitor employees who work in risky conditions by keeping track of their vital signs. “Rather than waiting for somebody to scream or [for] their heart to stop, you can see when a pattern is happening that looks like imminent danger,” Jarvis explains.
Technical Safety B.C. is an independent organization that oversees the safe installation and operation of the province’s technical systems and equipment like electrical or gas systems, new buildings and Vancouver’s SkyTrain. It is using AI to enhance the prioritization of resources by predicting with greater accuracy where high hazards can be found.
“By introducing more sophisticated machine autonomy in the risk-assessment process, we aim to find more high-hazard sites while operating under the same level of resources,” says Soyean Kim, leader, research and analytics at Technical Safety B.C. in Vancouver.
The organization began using a computer algorithm a few years ago to predict where high hazards may be and has since developed new models for the algorithm, using machine-learning computer programs that input new data and use statistical analysis to think for themselves. “The result is that we have seen it adapt even more quickly to reflect emerging risks,” Kim says.
While AI does not itself make the work that employees do safer, it enables safety officers to better identify areas where regulated work may be done in an unsafe manner so that risky practices can be rectified before an incident occurs. For example, tests showed that the algorithm’s prediction of high-hazard electrical sites improved by 80 per cent. Machine-learning technology also reduces the number of top-priority inspections significantly, freeing up safety officers’ time to visit other sites.
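The kind of risk-assessment algorithm described above can be pictured as a model that scores each site’s hazard probability from its features and then ranks inspections by that score. The sketch below is purely illustrative: the features, weights and logistic form are assumptions for demonstration, not Technical Safety B.C.’s actual model.

```python
import math

# Hypothetical site features and illustrative weights -- NOT the real
# parameters used by any safety regulator. A production model would learn
# these from historical inspection data.
WEIGHTS = {"age": 0.05, "incidents": 0.9, "days_since": 0.01}
BIAS = -3.0

def hazard_probability(site):
    """Toy logistic model: map a site's features to a 0-1 hazard score."""
    z = (BIAS
         + WEIGHTS["age"] * site["age"]
         + WEIGHTS["incidents"] * site["incidents"]
         + WEIGHTS["days_since"] * site["days_since"])
    return 1.0 / (1.0 + math.exp(-z))

def prioritize(sites):
    """Rank sites so officers visit the highest predicted hazards first."""
    return sorted(sites, key=hazard_probability, reverse=True)

sites = [
    {"id": "A", "age": 40, "incidents": 2, "days_since": 400},
    {"id": "B", "age": 5,  "incidents": 0, "days_since": 30},
    {"id": "C", "age": 25, "incidents": 1, "days_since": 200},
]
for site in prioritize(sites):
    print(site["id"], round(hazard_probability(site), 2))
```

Retraining the weights as new inspection results arrive is what lets such a model “adapt to reflect emerging risks,” as Kim describes.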
THE SCIENCE OF LANGUAGE
Apart from occupational-safety applications, AI is also making inroads into the human mind. One tool that leverages AI to give companies insight into the mental and physiological health of their staff is Receptiviti. The development of this software service stems from the realization that large organizations are ill-equipped to understand whether the work environment is healthy.
“Oftentimes, it takes an event to happen for them to realize that things weren’t as good as they hoped,” says Jonathan Kreindler, chief executive officer of Receptiviti in Toronto.
Organizations traditionally use engagement surveys to understand their workforces’ general health and assess their employment-satisfaction level. But these tools can be fraught with bias and make it challenging for employers to get an accurate picture of their workforce’s mental well-being. “Employees are scared to speak out and to share what is going on because they fear retribution,” Kreindler says.
Instead of conducting surveys, Receptiviti uses language analytics to understand the states of mind of groups of people within a company and look for indicators that they are under pressure or stressed out. “It is kind of the next generation of human capital analytics,” Kreindler says of Receptiviti, which is used by leading banks in North America and technology firms alike. “People who are under a great deal of duress use function words very differently than people who are under less duress.”
By tapping into email systems, the platform scans through communications flowing within the organization, looking at thousands of combinations of words to decipher patterns in different categories of function words.
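The idea of profiling function-word usage rather than content can be sketched as counting the relative frequency of word categories in a message. The categories and scoring below are stand-ins invented for illustration; Receptiviti’s real lexicon and analytics are proprietary and far richer.

```python
import re
from collections import Counter

# Illustrative function-word categories -- a stand-in, not Receptiviti's
# actual lexicon. Function words (pronouns, articles, negations) reflect
# how something is said rather than what is said.
FUNCTION_WORDS = {
    "pronouns": {"i", "we", "you", "they", "it"},
    "negations": {"not", "no", "never", "cannot"},
    "articles": {"a", "an", "the"},
}

def function_word_profile(text):
    """Relative frequency of each function-word category in a message."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for tok in tokens:
        for category, words in FUNCTION_WORDS.items():
            if tok in words:
                counts[category] += 1
    total = len(tokens) or 1  # avoid division by zero on empty input
    return {cat: counts[cat] / total for cat in FUNCTION_WORDS}

profile = function_word_profile(
    "I cannot finish this. I never get the time we need."
)
```

Aggregating such profiles across a team over time, rather than reading any individual message, is what allows shifts in group stress patterns to surface without a human ever seeing the content.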
“We are not looking at what people are saying. We are actually looking at the way people are saying what they are saying,” Kreindler explains, adding that the platform is built to analyze data in such a way that no human reads that content. “In that way, we are protecting the anonymity and confidentiality of all of that data.”
Receptiviti works only with organizations that have the permission of their employees to conduct this sort of analytics. “When you work in a bank, part of your employment contract states that you don’t own your computer, your email and you give up the rights to the content you are creating in those emails,” he clarifies.
Using AI for predictive modelling, the platform can help employers uncover broader trends, such as increasing stress levels in groups of people, and compare stress levels across departments. “This is not a Big Brother kind of monitoring of individuals. What we are doing, at an aggregate level, is trying to understand the health of the workforce.”
Organizations have the responsibility to be transparent about their codes of ethics and the operating parameters surrounding the use of AI. “Any organization that is using such technology needs to figure out how they can be self-policing and using it in ways that it was intended,” Kreindler adds.
Companies that develop these technologies also have the responsibility to ensure that controls are in place to prevent misuse or abuse. “Artificial intelligence can be used properly or improperly, and it is a very fine line. If the technology is developed properly, ethically and responsibly, there can be a great deal of benefit,” Kreindler suggests.
THE NEW FRONTIER
Throughout history, ethical conundrums have always accompanied technological advancements, and AI is no exception. For Kishan Dial, partner of risk assurance services at PwC (PricewaterhouseCoopers) Canada in Toronto, the risks of introducing AI are generally not being managed as they should be in Canada. The main reason, he says, is a lack of understanding of the impact that changes — like the introduction of AI — have on people.
Risk in Review 2018, a report by PwC Canada, found that most Canadian organizations are investing in emerging technologies such as AI, but nearly half (46 per cent) of them see the lack of properly skilled teams as a barrier to the benefits of digital innovations. It will also take some time before AI and other technologies make a significant difference in workplaces.
According to the report, only 15 to 25 per cent of Canadian organizations say robotics, AI and intelligent-process automation will have an impact on their organizations within the next three years, while 67 per cent of Canadian chief executive officers indicate that changes in technology will disrupt their businesses in the next five years, compared to 84 per cent among respondents in the United States. “Canadian organizations are risk averse when it comes to innovation, and that includes adopting things like AI and big data,” Dial says.
Canada may be leading in fundamental research on AI, but “when it comes to companies developing the technology into commercial applications, we lag [behind] other countries by a long shot,” says Jarvis, who observes that Canadian natural resource companies typically do not want to be pioneers on the blazing trail of implementing AI. “We will be the second or maybe the third, but let someone else go first. Because when you go first, you tend to incur the most costs and most risk.”
He cites as an example the fact that Australia has been using autonomous vehicles in mining for ten years — something that Canada has yet to do. “We have the companies, we have the technology, the capacity available to us; we are just not using it.”
But Rismani, whose company explores the social and ethical challenges that AI and robotics pose to clients, says there may be good reason why Canadian companies are comparatively more conservative when it comes to adopting AI. “You don’t want your company to implement something and after that, something happens and you have a really poor reputation, or employees start quitting on you because you have changed their jobs in a way that they are not happy with.”
Jarvis is doubtful that the growing presence of AI will place workers at greater risk of harm. “There is always the odd chance that somebody makes a programming mistake and it screws up, but the point is the machine is supposed to be better than the human.”
To address the concern of malfunctioning AI as a workplace hazard, Generation R developed an AI ethics roadmap for Technical Safety B.C., highlighting issues that organizations implementing AI need to be aware of. These include the potential that algorithms could misjudge risks, or overlook the essential roles that human experience and logic play in making decisions.
“Part of the issue is a lack of full understanding of AI,” Jarvis says. “We need to exercise a certain amount of caution, but not out of fear.” He advises employers to prepare themselves for this next wave by not being afraid of it, but by hiring people who are educated about it and training employees about AI and how it can be applied at work. Adopting a flexible mindset will also work to the advantage of safety professionals and employers.
The benefits from AI will take ongoing tweaking and work, Kim says. Employers should recognize that not everyone will immediately see AI as a benefit and be aware of the need for phased-in development.
“Recent reports in the media of rogue algorithms show us that left unmonitored, machine learning can recreate and reinforce biases and cause undue harm,” Kim adds. “We see AI as a tool to augment safety officers’ expertise, logic and intuition — not as a replacement to their knowledge. When used responsibly as a tool with clear guidelines and objectives, AI should not pose any particular threats.”
For Dial, the organizations that he works with are moving into AI to reduce human error. For these firms, he recommends that they focus on the workforce of the future and invest in the right skill sets to enable the people who are going to use AI to drive efficiency from it. Organizations also need to understand the cultural changes that adopting AI will bring and ensure that employees are on board with this change. “Data analytics and AI need to be part of how we do business in the future,” Dial says.
Danny Kucharsky is a writer in Montreal.