Scaling company culture with AI: an interview with George Swisher, CEO of LiiRN

Can AI help foster cohesive community in an organization? LiiRN thinks so.


Creating a healthy work environment that scales is something of a holy grail for all growing companies. As internal networks become more dispersed and organizational structures grow more complex, it becomes easier for communication disconnects to occur. How can companies continue to cultivate a shared vision and culture, and give employees a chance to define and improve both? LiiRN CEO George Swisher thinks the answer is AI-driven.

Swisher founded LiiRN, a people-centric, AI-powered transformation platform, in 2018. The platform has a two-fold purpose: to help leaders make decisions based on employee feedback, and then to let employees participate in enacting those decisions. LiiRN collects customized survey data on leadership performance and company priorities; its AI synthesizes the upward feedback, converts it into leadership performance ratings, and identifies quantitative and qualitative trends to inform decision-making. The platform also invites self-nominated change agents to shape and drive forward company-wide initiatives.

In our interview, Swisher shared how AI can strengthen rather than reduce personal connection, and help business leaders listen to and lean on their people.

What problem are you solving with LiiRN?

LiiRN aims to help companies drive change through people rather than processes. Many leaders working to design strategy end up working with small populations of people, running surveys or stakeholder interviews. But trying to drive a huge change with input from a small group of people is a disservice to both the consulting firm and the client company. People are fearful of change when they don’t understand it. So a few years ago I thought: what if I, as an individual consultant, had the ability to work with all hundred thousand employees in real time? The impact would be tremendous.

And so the idea was to build software that could do that: reach each person as if it were someone they knew, someone who understood the big program going on out there and could help the employee relate. When you drive change from the bottom up instead of from the top down, you avoid the education and awareness gaps that come with large-scale change.

Companies can use our technology as kind of a middleware between the leadership and staff, to find the gaps between what leadership thinks and what the people on the ground are actually seeing and thinking. Our voting feature makes people feel like they’re part of the decision-making process. If you can do that for a company, say, that’s 100,000 employees, you’re able to help 100,000 employees feel like they’re contributing to a decision that the leadership is making. You get people who are more empowered, and I think that’s a big emotional feature of how you activate people. It automates some of the change management processes and helps leadership make decisions and investments that their company believes in. With ongoing feedback collection, you can create a dynamic feedback loop, to continually shape the change journey.

What are some of the most common pain points the leaders you work with encounter?

New leadership teams are sometimes nervous about drawing conclusions from data that can be interpreted in multiple ways. It’s one of the reasons we have moved to partnering with consulting firms with expertise in software-based data analysis. We use the data to quantify how many people activate and why. Typically, we see north of 30% of the total population raising their hand to be on a work stream in a specific change management area.

If adoption is lower, we use the data we collect to understand why. We track when people opt out or say, “I don’t understand what you’re asking.” This feedback surfaces whether the real issue is understanding and awareness, or the willingness of people to participate. The data can also show whether people think the initiative is misguided or carries implementation risk. The software’s data analysis gives leaders that transparency.
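To make the idea concrete, here is a minimal sketch of the kind of adoption analysis described above. The field names and opt-out reason categories are illustrative assumptions, not LiiRN’s actual schema.

```python
from collections import Counter

def adoption_summary(responses):
    """Compute the opt-in rate and tally the reasons people gave for opting out.

    `responses` is a list of dicts with hypothetical keys:
    "opted_in" (bool) and "reason" (a category string, None if opted in).
    """
    total = len(responses)
    opted_in = sum(1 for r in responses if r["opted_in"])
    reasons = Counter(r["reason"] for r in responses if not r["opted_in"])
    return {
        "adoption_rate": opted_in / total if total else 0.0,
        # Ordered most-common first, e.g. awareness gaps vs. skepticism
        "optout_reasons": reasons.most_common(),
    }
```

Splitting the opt-out tally by category is what lets a leader distinguish an awareness problem (“I don’t understand the ask”) from genuine resistance to the initiative.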

It sounds like you’ve found ways for AI to create more human interactions. What are the limitations to leaning on AI? In what ways can AI tools be anti-social, and how do you mitigate those risks?

If you’re going to trust the output of our system, you have to know it’s based on the right input. Potential data biases come in many forms. Ideally, if we look at, for example, who is in the sample population you’re getting information from, we’d account for any skewing as we analyze it. We have limited control over which population the stakeholder at the enterprise chooses to invite into our software. So if they choose to involve only the US population and use that information to influence decisions for their Asia-based population, for example, that clearly creates a lot of challenges, given the cultural differences. We work to screen out and limit bias with some of our onboarding screens and the setup and training that we do. We promote, as much as we possibly can, widening the sample size, to make sure you’re involving a population that is as large and as diverse as possible. But there are definitely limitations. It’s hard to solve when you’re collecting what others choose to input.

Also, if there is a high concentration of a certain demographic in a company, we can’t control for who they’ve hired. So if they’re only getting information from a specific group of people that’s the majority of their population, it clearly sways the input that we’re getting and the resulting outcomes. So for us, I think we’re trying to maintain a middle ground where we highlight who companies are asking for input from and how it impacts the output. 

We’re focused on making our data inputs more comprehensive by integrating with more internal systems in our upcoming work. HR systems can provide added layers of data, like performance management data and learning data; systems like NetSuite provide more business performance data. The more that we can integrate, the more our machines can learn, and the more we can build better cases for the viability of the decision we’re recommending.

Change management in the context of technology often raises the specter of worker displacement. How can technology-based change management tools like yours help us prepare for an unknown future of work?

What I learned personally, moving from a tech-enabled service business working with big enterprises to being a full software company, is that technology isn’t replacing us. There is a fear of tech advancing too fast. But I think the bigger questions are: how do we reskill and retrain ourselves? And how will we hold the enterprises of the world responsible for managing change? Even if there are people who will lose jobs, which is never a good thing, we have the opportunity to say, “How do we rethink what workers are doing and what new skills they need to adapt? And how can we help them do that?” Yes, we’ve introduced self-checkout into the grocery store. But if we’re going to replace those people, what skills do they have that we can still benefit from? They may be really great at customer service and customer success — can you retrain them to help people shopping inside the store, to create a personalized experience? Flipping the way you look at it can help people understand the opportunity. Then we all advance. But a lot of companies don’t think that way when they’re developing or implementing automation technology.

Within consumer retail and manufacturing, the number is large: at some of the biggest companies and employers in the world, upwards of 70% of jobs could be automated away in the next 10 years. The magnitude of that is scary. Unless we retrain people to see it as an opportunity and change the way they actively pursue alternatives, we’re going to have problems. Becoming a coder isn’t the answer for everyone.

From individual to societal data: taking on bigger, badder problems

We have all heard the saying that “knowledge is power.” In today’s economy, data is the new knowledge, which makes data power. We see it evidenced in the collective $1.3T market capitalization of Google and Facebook, whose pixels and cookies track us all over the internet. These massive data collectors began with a focus on individuals. Now, as we collect data about communities, societies, and supply chains, those holding the data will have growing power to impact not just individuals but whole populations.

The power of system-level data

Today’s innovators are collecting data not only about individuals but also about populations and processes. For example, Biobot Analytics hopes to transform sewers into public health observatories for whole communities by sampling wastewater from strategic points in a sewer system. Such collective samples can reveal issues as significant as an opioid epidemic in neighborhoods as small as a few thousand people. Data tracking also promises to improve the fidelity of supply chain processes: blockchain is seen as a high-potential technology for stemming the circulation of counterfeit drugs as well as upstream labor abuse.

This raises the question: how great is this latent potential? Are we reaching an inflection point where we no longer need to play whack-a-mole, and can finally clean up the messy problems that have previously upended communities, especially in public health?

With great power comes great responsibility

Certainly these technologies are intended to protect citizens: from counterfeit drugs, and, in the case of opioid detection, from themselves. The question becomes how to ensure that the intended benefits manifest and unintended consequences do not.

We have also all heard the saying that power corrupts. Knowing this, we are forced to ask: how might the power of data be used corruptly in our own society? If recent technology deployments are any indication (e.g., AI blocking female doctors from the women’s locker room), we must ask whether we will ultimately just re-manifest society’s existing problems using data.

We’ve observed the rise of “Big Brother” social monitoring in places like China, where social infractions as banal as jaywalking are caught by sophisticated monitoring, and have repercussions. Outside of monitoring, we’ve seen the weaponization of predictive algorithms in prison sentencing, resulting in worse outcomes for minorities. 

Given these patterns, we must imagine the worst case for applications like opioid overuse detection. If an opioid crisis is detected, how might the response differ in a poor versus a rich neighborhood? Will doctors be the police targets in wealthy neighborhoods, while residents are targeted in poor ones?

Writing society’s story

This — bias perpetuation — does not have to be how the story goes. Data is already being used to empower under-resourced communities. For example, an AI predictive model increased the successful identification of corroded pipes in Flint, Michigan from 20% to 97%, enabling the city to afford remediation of an additional 2,000 homes. Data can powerfully determine how we direct our limited resources toward otherwise overwhelming problems.
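The Flint model itself drew on rich parcel records; as a toy illustration of the general approach, here is a from-scratch logistic regression that ranks homes by predicted risk. The two features (normalized home age, and whether city records list a lead service line) and all the numbers are invented for illustration — this is not the actual Flint model.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a tiny logistic regression by stochastic gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi  # gradient of the log-loss with respect to the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def risk(w, b, xi):
    """Predicted probability that a home has a corroded service line."""
    return 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
```

Sorting homes by `risk` is what turns a fixed remediation budget into digs where lead is most likely to be found, rather than excavating at random.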

Knowledge is power, and while the deep knowledge afforded by data can help solve problems by exposing them, it does not guarantee that those acting upon them have the best solutions. Impact depends on the social systems we operate in — how these analytical tools are used and how their analyses are received. We must ensure that those who can access and act upon community data are as effective at testing their own assumptions and biases as they are at pinpointing social problems.

What it’s like to be Google’s customer rather than their product

We all know and mostly love Google’s products. I mean, can you even remember the email client you used before Gmail? (I’m an increasingly rare ex-AOLer.) However, every now and again something goes terribly wrong, and that is when you realize there is no human you can call for help. Google is not offering these products for free out of the goodness of its heart; it offers them to collect data on users and better target the ads served to them. You are not the customer. You are the product.

A failure with a free Google product that you use for day-to-day work can lead to considerable distraction and anxiety. I recently had a moment of fear when I was manning a shared Gmail inbox and was automatically logged out. (Aaaaaahhh!) The password hadn’t been saved, and I had no way of requesting details or a reset since I had not set up the account myself. What support was there to find a path forward? This page. That’s it. Buena suerte.

And I’ve heard far worse horror stories of Google product snafus. There was a bug in Android phones where text messages would get sent to the wrong person. Some bosses got texts intended for girlfriends – only some of whom laughed it off. The bug was reported to Google, but internally it was downgraded from “low priority” to “don’t fix.” Talk about being exposed. In another instance, a friend driving to Disney World in Florida experienced a Google Maps outage on the road, with only road signs to guide him for 2 hours. Google will strand you, because as a non-customer you are expendable.

By contrast, when I started using AdWords, Google called me 3 times and took my 2 calls with next to no wait time. Here is the impressive level of support given to new accounts. AdWords offers:

  • A free, human-led tutorial via the phone or Hangouts, where they show you around the AdWords interface
  • A free “Google Build” ad campaign, built by an AdWords support team member – offering a better-than-novice starting place for key terms to employ to meet your goals
  • Feedback on the ad campaigns you draft outside of their build
  • A free phone follow-up review of the Google Build to tweak and improve your campaign

This night-and-day contrast of experiences may not diminish the value of the “free” products Google provides – we also benefit from the network effects of so many other people using the same products. Yet it does remind me of the stark reality of who I am to Google in each role: a valued customer on one side, a target for sales on the other.


Big data for local pizza

I first read about Slice last summer and its impressive registry of 7,000+ pizzerias, banding together to take on mass-produced monsters like Pizza Hut, one pepperoni pie order at a time. I guess as a New Yorker, with a pizzeria every block or two, I never thought it too inconvenient to hop on the phone and dial in an order. But I have been troubled by the sights in smaller cities, where people default to *shiver* Domino’s… and like it! I want to preach to my peers: “No, it’s not real pizza!” And Slice CEO Sela’s heart went out to the local shops that just don’t know how to throw up a good website. He’s incentivizing loyalty and sharing customer data with the mamas and papas outside of Papa John’s. Pizza for the people. I dig it.

Love for the local pizzeria