Dr Vivienne Ming is a world-renowned theoretical neuroscientist known for her work developing and applying artificial intelligence (AI). A tech-hacker, inventor and innovator, Vivienne uses her knowledge to enact positive change – such as using AI to challenge norms and improve outcomes around mental wellbeing, diversity, and workplace development.
We were lucky enough to spend an hour with Vivienne talking about the mind-boggling potential impacts of AI on the world of HR. Yet before we can properly delve into this topic, we must first understand what artificial intelligence is. For many of us, the first thing that may spring to mind is a dystopian Hollywood depiction – like the evil sentient AI ‘Skynet’ from the Terminator movies. This is far from the current reality: ‘AI is just a tool. It is not an intelligent being,’ Vivienne assures us. Rather, AI describes a computer or robot performing tasks usually reserved for intelligent beings – for example, identifying images or finding patterns and trends in data.
Done right, the possibilities are endless.
The opportunities
‘When we use AI, we can see so much of what’s going on in an organisation, yet still treat people as individuals,’ Vivienne enthuses. ‘If I want to build a team, I can fit people together through complementary diversity. There’s no way a person in HR could truly get to know all the individuals of an organisation, but it is possible for these systems to know two people would work well together, even though they’ve never met.’
An initiative like this involves significant initial investment, as a large amount of multi-dimensional data is required. For instance, Vivienne collected information on staff members’ previous work experience, cultural upbringing and socio-economic background, as well as conducting psychometric testing and more. This data was then analysed algorithmically to identify trends and other key findings.
So armed, the AI Vivienne built to leverage complementary diversity consistently created teams that outperformed those put together by traditional methods. ‘When you look at complementary diversity, you find that those teams come up with new solutions to problems more quickly and more frequently. Using AI as a tool in finding good new connections is important,’ she says. ‘Take a team that’s worked together for a long time – say a highly prolific scientific team that writes major impactful papers. In time their publication rate slowly increases, but the uniqueness of their work goes down. But the moment a new person joins with complementary diversity to the others, their “impactfulness” jumps right up again.’
‘Imagine doing that for the Australian Government. It’s about mixing and remixing. You don’t want to disrupt a high-performing team, but if you shift the right person out and bring the right new person in, it gives the team whole new life.’
With continuous testing and ever-more data, the AI then helped identify the qualities most likely to produce high-performing teams. Interestingly, Vivienne found the two strongest predictors of a team’s collective intelligence were, first, the members’ average emotional intelligence and, second, their ability to see things from other people’s perspectives. This combination fostered psychological safety, built trust and encouraged the team to embrace the types of diversity that strengthened it. Even something as simple as gender diversity can make a big difference. ‘Teams with similar balances of men and women always outperform teams that have an imbalance.’
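The article doesn’t reveal the actual model Vivienne built, but a toy sketch can make the idea concrete. The Python below scores candidate teams on the two predictors mentioned above – average emotional intelligence and perspective-taking – plus a simple ‘complementary diversity’ term computed from multi-dimensional profiles. Every profile value, feature name and weight here is an invented assumption for illustration only.

```python
# Illustrative sketch only: not Vivienne Ming's model. Profiles mix the kinds of
# multi-dimensional data described above (background, psychometrics), scaled 0-1.
from itertools import combinations
from statistics import mean

profiles = {
    "Ana":   {"eq": 0.8, "perspective_taking": 0.7, "background": [0.2, 0.9, 0.4]},
    "Ben":   {"eq": 0.6, "perspective_taking": 0.9, "background": [0.8, 0.1, 0.5]},
    "Chloe": {"eq": 0.9, "perspective_taking": 0.6, "background": [0.3, 0.4, 0.9]},
    "Dev":   {"eq": 0.5, "perspective_taking": 0.8, "background": [0.9, 0.7, 0.1]},
}

def complementary_diversity(team):
    """Average pairwise distance between members' background vectors:
    higher means the team brings more non-overlapping experience."""
    pairs = list(combinations(team, 2))
    if not pairs:
        return 0.0
    dist = lambda a, b: sum(abs(x - y) for x, y in zip(
        profiles[a]["background"], profiles[b]["background"]))
    return mean(dist(a, b) for a, b in pairs)

def team_score(team):
    """Weights the two predictors highlighted in the interview (average EQ and
    perspective-taking) plus a complementary-diversity bonus. Weights are made up."""
    avg_eq = mean(profiles[m]["eq"] for m in team)
    avg_pt = mean(profiles[m]["perspective_taking"] for m in team)
    return 0.4 * avg_eq + 0.4 * avg_pt + 0.2 * complementary_diversity(team)

# Rank every possible three-person team and surface the strongest combinations.
for team in sorted(combinations(profiles, 3), key=team_score, reverse=True):
    print(team, round(team_score(team), 3))
```

In a real system of the kind described, weights like these would presumably be learned from observed team performance rather than hard-coded.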
In another role, AI helped Vivienne identify that young women working in technical fields weren’t benefitting from mentorships with older males – and in fact were experiencing increased stress. ‘In aggregate numbers the employer couldn’t see it, but our systems could identify and reveal that by showing recurring similarities – like those between negative mentee experiences and the demographic information of mentors.’
Vivienne follows this with an example of work at a more personal level, looking at mobility – specifically, using someone’s phone to track the way they move through physical space. This data, how it changed over time and analysis of the person’s online conversations could together predict an oncoming episode of manic depression. ‘When we analyse the data we can actually predict stress levels, absenteeism, heart levels, post-partum depression… Even day-to-day stress rates associated with individuals by looking at language and movement patterns.’ Vivienne is clearly motivated by the potential to help staff more effectively manage and improve their mental health.
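Again, this is not Vivienne’s system – just a minimal sketch of the general idea: comparing a person’s recent movement and language patterns against their own baseline and flagging unusual shifts. The signals, window and threshold below are assumptions for illustration.

```python
# Illustrative sketch only: flag a possible rise in stress by comparing recent
# mobility and language features against a person's own baseline.
from statistics import mean, pstdev

def z_score(recent, baseline):
    """How far the recent average sits from the baseline, in baseline standard deviations."""
    mu, sigma = mean(baseline), pstdev(baseline)
    return 0.0 if sigma == 0 else (mean(recent) - mu) / sigma

def flag_stress_risk(daily_distance_km, daily_negative_word_rate, window=7, threshold=1.5):
    """True if the last `window` days look unusually different from the preceding
    baseline on either signal: sharply reduced movement or a jump in negative language."""
    base_move, recent_move = daily_distance_km[:-window], daily_distance_km[-window:]
    base_lang, recent_lang = daily_negative_word_rate[:-window], daily_negative_word_rate[-window:]
    movement_drop = -z_score(recent_move, base_move)   # positive when movement shrinks
    language_spike = z_score(recent_lang, base_lang)   # positive when negativity rises
    return movement_drop > threshold or language_spike > threshold

# Toy example: four weeks of data where the final week shows far less movement
# and more negative language than the person's own baseline.
distance = [5.0, 4.8, 5.2, 5.1, 4.9, 5.3, 5.0] * 3 + [1.2, 1.0, 0.8, 1.1, 0.9, 1.0, 1.3]
negativity = [0.02] * 21 + [0.09, 0.08, 0.10, 0.07, 0.09, 0.11, 0.08]
print(flag_stress_risk(distance, negativity))  # True in this constructed example
```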
The risks
Vivienne is well aware of the risks such technology and information present. ‘We should talk about the good and the bad of AI hand-in-hand, because they’re just reflections of each other.’
One pitfall lies in how the technology is used. Ideally, Vivienne explains, AI should be used for augmentation, not substitution. Outsourcing decisions to AI instead of using it as a tool is a recipe for disaster. ‘There’s an idea that these systems can’t be biased… they’re just as biased as we are, but in different ways. When we see the errors algorithms make, they’re often absurd.’
A second significant risk lies in the type, detail and volume of data collected – which can be deeply personal – and how employers will use it. ‘That’s a very reasonable fear,’ Vivienne agrees.
‘Many, many companies are deeply invested in using data sets on their own employees to shift ideas about work – for example, unionisation. If they could detect people who are more likely to unionise, they may not hire those people in the first place. That’s scary. And it’s a genuine goal of many big companies.
‘I’m first trying to validate those fears… Employers have a huge power imbalance with respect to their employees. We need ways of balancing this out again. It would be wonderful if HR departments did that but, in the private sector at least, HR is often there to protect the company.’
Vivienne says there is definitely a role for government to look at issues of data and AI abuse. But ultimately, she wants to empower individuals and give them control of their own data – perhaps in a similar way to the European Union’s General Data Protection Regulation (GDPR).
‘It’s something all organisations should be thinking about.’
The timeline
When asked when we’ll start seeing AI used more broadly in HR, Vivienne predicts that we’re more likely to see progress in areas that aren’t as heavily regulated by government. ‘Hiring has strong legal protections. But in health and stress management – that’s where we’re seeing faster adoption by organisations.
‘AI is coming into HR on the edges, but it’s beginning to have more profound impacts. Automation is eating away at progressively more work, particularly work done by university-educated professionals. A perfect example is a radiologist. This job is high-skill, high-training, but still routine. An AI can identify an abnormality on an x-ray. AI presents a really profound threat to a classically high-paying job.’
Anywhere that people are doing lots of routine labour is at risk of being overtaken by AI, Vivienne argues. But by the same token, wages and opportunities for creative labour are increasing. ‘If there’s one thing I can get across, it’s that AI is just a tool,’ Vivienne summarises. ‘It’s what we do with it that actually matters. There will always be a role for creative people.
‘The line I often use is that “An artist without a tool is hobbled. But a tool without an artist is nothing”.’
For large organisations, including government, HR needs to start planning for an AI-ready future. We should be looking to embrace the opportunities that AI presents, with eyes wide open to the risks – especially when it comes to protecting the rights of our people.