We often get briefs from prospective clients who want to “streamline” some task or journey. Usually what this means is that there’s some regular task the organisation does that requires a lot of tedious manual labour and person-hours, and they want a clever way to automate that tedium away. You can probably think of a few examples from your own organisation, where there’s some mindless, repetitive task you wish you could just stop doing.

Whether you realise it or not, what you’re asking for in situations like this is an algorithm. But that doesn’t have to mean a huge, complicated codebase like the ones that power Spotify, YouTube and ChatGPT.

In its most basic form, an algorithm is just a consistent set of rules for responding to a specific situation. Your out-of-office auto-reply is an algorithm (between these two dates, respond to every message with a preset reply). Your alarm clock is an algorithm (every weekday, wake me up at 06:45). When Homer Simpson uses a nodding bird toy to keep hitting the “Yes” button on his keyboard while he relaxes on the sofa, that’s an algorithm — sort of (respond to literally every question in the affirmative).
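
To make that idea concrete, here is the out-of-office rule written out as a few lines of Python. It's only a sketch: the dates and the reply text are placeholders, not a real mail system's API.

```python
from datetime import date

# A minimal sketch of the out-of-office rule described above.
# The dates and reply text are placeholders, not a real mail API.
OUT_FROM = date(2024, 8, 1)
OUT_UNTIL = date(2024, 8, 14)
AUTO_REPLY = "I'm out of the office and will reply when I'm back."

def auto_reply(received_on: date) -> str | None:
    """Between these two dates, respond to every message with a preset reply."""
    if OUT_FROM <= received_on <= OUT_UNTIL:
        return AUTO_REPLY
    return None  # outside those dates, leave the message for a human
```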

[Image: Believe it or not, the nodding bird is an algorithm]

Algorithms can be great, and sometimes they’re no-brainers; who doesn’t love setting their out-of-office? But thanks to ChatGPT and other AI tools, algorithms are suddenly getting a lot more attention as a source of potential problems. With AI, those problems are both ethical and existential. How do we evaluate job applications or student essays when they might have been written by a machine? How can we protect creative jobs when an algorithm can write a news story without any human supervision? There’s a reason writers and actors have been on strike this year: algorithms threaten their very livelihoods.

The more advanced our computers get, the more tempting it is to streamline things that maybe shouldn’t be streamlined to begin with. Those big questions people ask about AI are relevant whenever you consider using an algorithm to “streamline” or “automate” something. Maybe you don’t need to worry about existential problems — yet — but you should still have the same robust debate about whether that algorithm is actually a good idea, or whether it’s going to do more harm than good.

Here are five questions to help you decide.

Is your algorithm creating something new, or just doing a simple task with something that already exists?

Algorithms are excellent for doing repetitive tasks with human-generated content that already exists. For example, say you’ve published a huge bank of insight articles about designing and building websites. An algorithm is the perfect way to relate those articles to one another, so that somebody reading one page about building better websites can quickly see a list of other pages on the same topic, without a human having to manually create and update that list. It removes tedious labour (list writing and maintenance) without removing the work that a human might actually enjoy and be good at (writing original, insightful articles).
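
As a rough illustration of what that kind of “related articles” algorithm might look like, here’s a sketch that ranks other articles by how many topic tags they share. The articles and tags are invented for the example; a real site would pull them from its CMS.

```python
# A sketch of a "related articles" rule: rank other articles by shared topic tags.
# The articles and tags below are invented for illustration.
articles = {
    "Building better websites": {"web design", "accessibility", "performance"},
    "Choosing a CMS": {"cms", "web design"},
    "Writing alt text": {"accessibility", "content"},
    "Season brochure tips": {"print", "content"},
}

def related_to(title: str, top_n: int = 3) -> list[str]:
    """Return the titles that share the most tags with the given article."""
    tags = articles[title]
    scored = [
        (len(tags & other_tags), other)
        for other, other_tags in articles.items()
        if other != title
    ]
    scored.sort(reverse=True)  # most shared tags first
    return [other for score, other in scored[:top_n] if score > 0]

print(related_to("Building better websites"))
# ['Writing alt text', 'Choosing a CMS']
```

Nobody has to write or maintain that list; when a new article is published with the right tags, it shows up automatically.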

But having an algorithm write the insight articles to begin with? That’s a very different proposition, and the crux of most AI criticism: just because an algorithm is good enough to do things that used to require human creativity – whether that’s writing an essay, painting a picture, or even performing a monologue in a video — does that mean we should automate ourselves out of the process entirely?

An algorithm that generates content from scratch should always be a red flag. There are lots of practical reasons why, which we’ll get into below. But if nothing else, your website is a tool to communicate with your audience, and communication is fundamentally a human-to-human act. If you can’t be bothered to write your own content, why should anyone be bothered to read it?

Is your algorithm trying to remove humans from the picture entirely?

Maybe your algorithm is only doing repetitive tasks that wouldn’t require any human creativity to begin with — for example, by automatically importing event data from a ticketing system into a website. That might still be a bad idea if the goal is to get rid of humans entirely.

This might be an ugly capitalist goal, i.e. trying to cut someone’s job, or it might be a more benign one. If your rationale is “nobody wants to manually transfer event data from one system to another anyway,” on the face of it, that might sound like a goal everyone can get behind.

But most automated tasks should still have some human involvement, if only for a little oversight. Maybe someone in your events team starts entering a key piece of information into a different field in the ticketing system. Technically, the algorithm is still working perfectly, but it’s no longer bringing in the right information. If a human’s not around to check on it regularly, it might be weeks or months before anyone notices.
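
One way to keep a human in that loop is to have the import flag anything that looks off, rather than publishing it silently. Here’s a rough sketch, assuming each imported event arrives as a simple dictionary; the field names are invented for the example, and a real integration would mirror the ticketing system’s own data.

```python
# A sketch of a human-oversight check on an automated event import.
# Field names are invented; a real integration would mirror the ticketing system's data.
REQUIRED_FIELDS = ["title", "start_time", "venue", "ticket_url"]

def review_imported_events(events: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split imported events into ones safe to publish and ones a human should check."""
    publish, needs_review = [], []
    for event in events:
        missing = [field for field in REQUIRED_FIELDS if not event.get(field)]
        if missing:
            event["missing_fields"] = missing  # record what went wrong for the reviewer
            needs_review.append(event)
        else:
            publish.append(event)
    return publish, needs_review
```

The point isn’t the code itself; it’s that the algorithm asks for help when something changes, instead of quietly importing the wrong information for weeks.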

Bigger picture, though, keeping humans involved in those repetitive tasks can actually help them grow professionally. It might be dull transferring event data, or proofing a brochure, or whatever — but it’s also a great way for junior staff to learn in detail about everything your organisation does, and for all staff to keep up-to-date with new developments. If your events team puts together a killer season of performances which is then automatically imported into the ticketing system, and from there automatically imported into the website, how are the marketing and box office staff supposed to know what they’re selling?

Is your algorithm freeing up humans to do more creative work?

In some ways, this is the other side of the previous question. If your algorithm is removing hours of tedious manual tasks every week, what are your staff to do with their newly liberated schedule?

If the answer is, “devote that time to producing more creative content and campaigns that better engage core audiences and help reach new ones,” great!

But if the answer is, “devote that time to pointless meetings, or different tedious tasks, or sit around twiddling their thumbs,” well, maybe you shouldn’t be building that algorithm to begin with.

Again, there’s an ugly capitalist imperative here. If you can automate enough work to get rid of a job entirely, that might seem like a clear economic win. But I’d call it a creative loss. Like I said above, if someone has been doing a repetitive task for you for years, they probably know your business better than anyone else.

Good management isn’t firing staff when an algorithm can free them from a repetitive task; good management is finding new, creative ways for people to apply their knowledge in a way only humans can.

Does your algorithm make the human experience better?

How much do you enjoy calling your bank and talking to a speech recognition algorithm or picking from a list of pre-recorded choices? When was the last time you had a really rewarding interaction with a chatbot?

Lots of customer service interactions can be tedious, for staff and customers, and automating them in a way that makes the experience better — for both groups — is a laudable goal for sure. If I still had to go into a bank branch with a giro slip each time I wanted to pay my credit card bill, I probably wouldn’t have a credit card. Being able to perform that routine task in a minute or two, from my sofa, using online banking, is a vast improvement in the human experience for me. I expect it’s also a vast improvement in experience for bank staff who don’t have to faff around with giro slips all day.

But when I have to do something out of the ordinary that my online banking can’t handle, it’s frustrating and patronising to hear the same repeated message “Did you know you can do lots of online banking on our website at www-dot [...]” and to then battle through a maze of numbered options before I can actually talk to a human who can help me. I think it’s telling that, if you suspect your credit card has been stolen and you call your bank’s anti-fraud line, you still get put through to a human almost straight away.

This isn’t just about efficiency, either. If you call a business to complain and immediately get to talk to a real person who listens, understands, and does something above and beyond to fix your problem, you have a better human experience than if you get fobbed off with a canned response. The human at the other end of the line probably has a better experience too, because they don’t have to deal with someone who is already furious from dealing with automated responses for five minutes.

If your algorithm doesn’t actually make life better for humans, what is it for?

[Image: What do you mean, “calling your bank is a drag”?]

Can your algorithm perform 100% of the time?

To return to my example of the trusty out-of-office auto-reply: this is a great algorithm, because it works 100% of the time. When you switch it on, everyone who emails you gets an immediate reply saying you’re out of the office.

And to return to my example of a phone-banking customer service system: this is a poor algorithm, because it’s really bad at dealing with anything out of the ordinary. Sure, it might solve common customer issues really efficiently, but even if 75% of customers are able to complete common tasks through the system alone, the algorithm is still failing 25% of the time.

So when you’re considering automating something, it’s worth giving some thought to how many edge cases are likely to come up. If your algorithm is constantly going to run into scenarios it hasn’t seen before, it will make mistakes. Then all that human time you’re freeing up is going to be spent dealing with those mistakes instead of doing something productive. In that case, don’t bother!

Automating for your organisation

The five questions I’ve suggested here are some very broad ones that are likely to apply in most situations. In that respect, they’re sort of an algorithm in their own right — one that helps you decide which automation projects you should pursue.

But to return to my last point: these five questions won’t work well 100% of the time. There will undoubtedly be questions specific to your own organisation and goals that you’ll need to consider alongside them.

For example, we recently worked on a site for Chickenshed Theatre Trust. One of the core aspects of their mission is to create opportunities for anyone to get into performing, regardless of their age, physical or mental capacity, socioeconomic background or anything else that might, at other theatre companies, be a hindrance. For them, a huge principle I’d want to consider for any automation project would be whether the thing we’re automating is helping the organisation achieve that goal, or whether in fact it’s a barrier to it. Can that chatbot be accessed by a partially sighted person using a screen reader? Can a neurodiverse person navigate a series of numbered phone options? My point here is that automation must never be in conflict with the values of an organisation.

So when you’re considering an automation project, by all means refer to the five questions I’ve suggested here. But make sure you use some of that human creativity to come up with your own questions too.
