Coverage of day one from Digital Journal
On day one of the 2024 mesh conference in Calgary, a spirited group of tech leaders and unorthodox thinkers discussed how to drive innovation in an increasingly chaotic economic and technology climate.
All without a single PowerPoint deck.
mesh 2024 marked the return of the conference to Calgary, after its inaugural Alberta event in 2023. The conference is proudly PowerPoint-free and characterized by intimate conversations that forego bland corporate-speak for real talk about what’s going on in tech and innovation — and what needs to change.
The day one sessions focused on everything from how to build inclusive, innovative teams, to wild use cases for AI (robo investment advice, anyone?), to upcoming regulatory changes that could address consumer privacy in the age of artificial intelligence (AI).
The discussions were particularly pertinent given Canada’s well-publicized productivity challenges and the increasingly competitive global battle for investment, talent, and growth.
There were two broad, interconnected themes that defined the day: 1) the importance of diverse, inclusive teams in driving innovation and 2) the risks and opportunities of a largely unregulated AI landscape.
1. There’s (still) enormous work to be done to build diverse, inclusive workplaces. But it’s important we do it right.
There’s a lingering sense that some organizations might be weary of conversations about DEI. But April Hicke, Chief Growth Officer at Toast, stated its value bluntly: “If we don’t have diversity, we simply don’t have innovation. Period.”
Hicke was one of the speakers on a panel titled “How to Build Teams that Fuel Innovation,” which explored how to create workplaces and teams built for innovation. Joined by Jodi Kovitz, CEO of the Human Resources Professionals Association (HRPA); Alicia Wight, Co-founder of Pebble; and Avery Francis, CEO of Bloom, the panel mapped out the distance between where most businesses are with DEI — and where they need to go.
One key insight: Workplace design has to happen at a system level.
“If you do not design for [sustainable, scalable change in your workplace], it will not happen, and it will not move people in the right direction that you’re hoping for, from a behavioural perspective,” said Francis.
So what should those designs include?
The panellists had a few key suggestions:
- Understand that it is people’s diversity and divergent perspectives that drive innovation. Kovitz said you must create an environment where you can honestly, openly, and critically debate organizational issues and strategies before decisions are made.
- Create psychological safety (effectively the ability to take risks and offer opinions without fear of reprisal) for your team members. Hicke said creating a safe space to debate is one of the keys to unlocking high performance in teams.
- Offer programming to support historically marginalized people in leadership positions, said Francis, and orient around the idea that the best leaders are good at managing and supporting people who have different life experiences.
The value of this approach was underscored in a subsequent ‘Innovation Showcase’ which recognized innovation and digital transformation leaders from under-represented communities across Canada. Many of the people profiled simply didn’t fit the standard-issue blueprint of a tech industry leader.
Claire Dixon is the Founder and CEO of Neuraura, which helps address polycystic ovarian syndrome (PCOS), one of the most common, overlooked and underserved women’s health conditions. As a neurodivergent person, Dixon initially struggled when she immigrated to Canada. Founding her company was a way to find comfort in her own skin. “I started meeting entrepreneurs,” she said. “I found my community and my people.”
Ultimately, as was echoed throughout day one at mesh, a truly inclusive innovation ecosystem is an entirely practical endeavour. There are enormous problems and opportunities in front of us as a country and we need to leverage our entire collective capacity to deal with them.
“We are all born with the ability to figure things out,” said Margo Purcell, CEO of InceptionU, a not-for-profit learning organization that addresses skills gaps in the digital economy. “How do we approach each other as a society of passion so we can actually solve the challenges we’re facing?”
2. The AI landscape is basically the wild west with few guardrails for consumers or companies — but that might be changing.
The long-standing focus on digital transformation in both private and public organizations has been amplified by the sudden rush to embed artificial intelligence in almost every sector of the economy. Every organization is now looking at AI for an almost unlimited number of use cases.
As a result, almost no government has been able to keep up with the privacy, regulatory, and innovation implications of the technology.
As just one example: We may actually be on the verge of AI displacing not just routine, repetitive white collar tasks, but high-leverage activities like providing consumer investment advice.
“One of the things that I think is really exciting about some of these tools, call it AI, applied to the robo-advisory space, is the ability to actually get personalized financial advising, personalized tax planning, personalized risk management, at a cost that would typically be difficult to achieve,” said Ben Reeves, SVP, Data Science & Engineering at Viewpoint Investment Partners.
AI is eventually going to be deployed, in some form, in every country on earth. Without the right frameworks or guardrails, this runs the risk of furthering inequality or harming poorer countries in the interest of wealthier ones, especially in the near term.
“As AI models are deployed, people from underserved or minority communities need to have a voice and say when AI development is incorrect or harmful to their community,” said Kate Carter, a Manager at Mission Impact Academy, which helps women develop AI skills. “And then companies need to actually listen to them.”
Closer to home, in Canada, more stakeholders – from government to the private sector to academia – are putting energy into modernizing the way we think about privacy and AI.
Canada’s Consumer Privacy Protection Act — Bill C-27 — continues to wind its way through the parliamentary process, with the possibility of becoming law sometime in the next 18 months (the bill would ‘die on the order paper’ if the expected federal election is called in late 2025, which would force the new government either to revive it or start over). The bill would be a critical step in modernizing federal laws around individual privacy and the overall regulatory oversight of technologies like AI, whose impacts we are only now starting to understand. But not everyone is in favour of the government’s approach.
“I’m pretty cynical about the government’s approach to a lot of digital policy, and one of the reasons for that is that it has tended to establish very high level standards and then left it to somebody else to figure out all the details,” said Dr. Michael Geist, Canada Research Chair in Internet and E-Commerce Law, University of Ottawa.
Geist suggested corporate responsibility and industry self-regulation may be useful levers for addressing valid public concerns about data privacy and AI.
A final word: Human potential and AI capability are inextricably linked.
As you might expect at a conference focused on innovation, there were skeptics in the audience, particularly with respect to AI. But there were no Luddites.
Almost every speaker echoed the need for people to wrestle with AI and how it will impact our society, our jobs and our ability to build the world we want to live in.
The question implicit in seemingly every conversation was: “How can we ensure technology — and technology companies — serve people better?”
In the final session of the day, that point was driven home in a keynote discussion with Frances Haugen, the former Facebook employee and whistleblower who, in 2021, revealed thousands of documents that made clear Facebook knew its products were damaging the mental health of teenagers, instigating violence in Southeast Asia and Africa, and spreading disinformation.
Of course, these are just some of the impacts of pre-AI technology. But now the stakes have been raised.