Introducing AI to the Enterprise

  • 20 Sep, 2018
  • Mark Strefford
  • AI


When I look back at my experiences of introducing AI to the Enterprise, the lessons I’ve learnt, and the ways to mitigate the risks of introducing a potentially ground-breaking new way of working, they break down into the following key areas:

Hype

We are all aware of the press hype around AI, with high-profile examples such as “AI learns a new language”, “AI can make calls for you”, “self-driving trucks will replace driving jobs”, etc.

However, there is also the internal organisational hype related to AI. This can take the form of:

“we’ll be better than our competitors”

“it won’t work for us”

“just make it happen”

“we’ve got to be first”

“we’ll see what everyone else does”

“let’s get loads of data scientists”

“we’ve got big data”

“we haven’t got big data”

“let’s automate our business processes and be more efficient”

“let’s automate our decision making”

In reality, though, the truth is very different.

The Organisation’s Strategy and Appetite For Risk

There’s a scale here. Some organisations are leaders: they actively adopt new technology and use it for a competitive edge. Typically these organisations are already making great use of data, have a clear strategy for adopting new technologies, and are often easier to work with. Examples are likely to be companies with a culture of advancement, often in their normal operations with customers who expect the best, newest, most advanced products in the market. This is likely to be the best place to further AI adoption.

Other organisations wish to be leaders but lack the innovative culture and the know-how to make this happen. I’ve seen this in the public sector and in larger, older companies where there’s an appetite to be more operationally efficient, but an internal cultural dynamic of disparate business units, technology teams and reluctance to change makes this difficult.

Where things can get challenging is with the late adopters. These are the companies that prefer to wait and see what their competitors do, to take the safe option; they have a very low appetite for risk. They may look to adopt AI in smaller-scale pilots initially.

Who Champions AI?

Buy-in is important. Board-level buy-in is great, but if the teams on the ground are not motivated then progress will be slow.

Conversely, buy-in from the teams on the ground is great: a team ripe for adopting new technology gives you a way to start building momentum. But if the funding or business strategy isn’t aligned, then at best you’ll get piecemeal delivery of proofs-of-concept that don’t gain traction, and more likely a project that fizzles out after a few months.

It also comes down to levels of awareness across the business. The techies might, and probably will, get AI, but the business may still be caught up in the hype, or not get it at all. And remember users are stakeholders too! They might have done the same job for 10 years, and their only experience of AI is face detection in Facebook or using Alexa. They might see it as magic that can do anything.

What will AI do for me?

So let’s revisit the hype again… What will “The AI” do for me? AI will mean significant numbers of job losses, right? It will answer all our prayers, right?

There are many jobs that are repetitive, and yes, these are prime for automation, whether that’s RPA or more advanced AI. But where I see the value in AI is in giving meaning back to work. As an example, if 40% of your activity is repetitive or involves decision-making that can be automated, imagine what business value can be gained from the other 60%. This is likely to be the higher-value work, or the areas where there is significant risk to the organisation (think banking and fraud, or governments and borders). A significant number of cases are typically OK, they’re good to go, but what about the high-risk ones?

An approach that works well is to focus staff on these more complex cases, shifting them to higher-value activities. It gives staff greater job satisfaction, as the chances are each case is unique or suitably different from the others. Yes, I’m sure AI will get there soon, but in an ever-running arms race between organisations and customers wanting bigger and better things, more complexity in our daily lives, or more advanced threats, many organisations are not well placed to automate everything. Think self-driving trucks that can drive from one city to another, but need a driver within a complex urban or industrial area.

But before you go and automate all these processes, remember the classic RPA mandate:

Never automate a bad process!

OK, this isn’t strictly true 100% of the time. There may be good cases where automating a bad process frees up people, budget or time (aren’t they the same thing?) that can be re-allocated to more pressing business needs. But this should be considered carefully rather than a carte-blanche approach of “automate everything we can”.

The Data

How many of you have heard the saying

“IT projects would be simple if it wasn’t for the users?”

In my experience though…

“an Enterprise AI project would be simple if it wasn’t for the users and the data!”

In some situations, taking a readily available data set and training a model will work. Some organisations have already started on, or are mature in, the data world (whether that’s big data or not): they know what they have and are already getting value from it.

Other organisations are on this journey; they have a reasonable view of what they have and the value it brings.

In other situations the data is embedded in back-end systems, making it difficult to access, and some organisations simply don’t know what data they have.

What I term “my data” can be evident in highly political, or even highly regulated, organisations. Just because the data exists doesn’t mean you can get access to it or use it for your purposes.

Synthetic data is an option, but make sure it mirrors the characteristics of live data as closely as possible. Traditional IT teams are used to developing against synthetic / test data and don’t always appreciate the implications of this approach for AI.
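To make that concrete, here is a minimal sketch (with made-up numbers and a hypothetical numeric field) of sanity-checking synthetic data against a sample of live data before training on it:

```python
# A minimal sketch of comparing the spread of a synthetic field against
# a sample of live values. All numbers here are illustrative only.
from statistics import mean, stdev

def summarise(values):
    """Basic distribution characteristics for one numeric field."""
    return {"mean": mean(values), "stdev": stdev(values),
            "min": min(values), "max": max(values)}

live = [102.0, 98.5, 110.2, 95.1, 104.7, 99.9]          # sample of real values
synthetic = [100.0, 100.0, 100.0, 100.0, 100.0, 100.0]  # naive test data

live_stats, synth_stats = summarise(live), summarise(synthetic)

# Flag fields where the synthetic data is suspiciously uniform compared
# to live data: a model trained on it will not see realistic variation.
if synth_stats["stdev"] < 0.1 * live_stats["stdev"]:
    print("Warning: synthetic data lacks the variance seen in live data")
```

A real check would cover every field the model uses, and categorical fields too, but even this simple comparison catches the common case of overly clean test data.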

Accessing data is often an organisational problem rather than a purely technical one. Getting access to the data early is critical, and I’d even suggest doing this before any commitments to project delivery are made. Treat this as a data-investigation spike. I’ve worked on projects where we had a dataset of 4,000 samples and got great results; on others the business turned up with 12GB of documents and in the end we used less than 100MB to get a reliable working model.

Getting this wrong, or making promises without understanding the data landscape, can be a very quick way to kill any appetite for AI in some organisations.

Get the Users Involved

There are many great AI algorithms out there now; you can do so many things, and they advance every day. But when it comes to enterprise adoption, the latest, whizziest technology might not be what the users need. And whatever level of advancement you reach with your algorithms, remember that the users don’t even see it. They see what it does for them and how it improves or hinders their daily work.

An algorithm that delivers 85% accuracy on a test set might be the best thing you’ve ever delivered, but if that 15% causes 85% of the work in terms of fixing or rework, then you haven’t gained anything (and more likely you’ve lost).
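The arithmetic behind that warning is worth sketching. These numbers are purely illustrative, not from any real project:

```python
# Purely illustrative numbers: an 85%-accurate model can still create
# more total work than manual processing, if detecting and fixing a
# model error costs much more than handling a case from scratch.
cases = 1000
manual_minutes = 5    # assumed effort to process one case by hand
rework_minutes = 40   # assumed effort to detect and fix one model error
accuracy = 0.85

effort_without_model = cases * manual_minutes
effort_with_model = int(cases * (1 - accuracy) * rework_minutes)

print(f"Manual: {effort_without_model} min, with model: {effort_with_model} min")
```

With these assumptions, the 15% of failures costs 6,000 minutes of rework against 5,000 minutes of doing everything by hand. The model only pays off once the cost of catching its mistakes is brought down.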

Understanding the users’ pain points is critical: be clear on what they are and how to measure improvements to them (not just the algorithm), and work on that. AI might not always be the answer; it may just mean you have to design a better GUI!

I’ve seen AI-based solutions work well with 60% accuracy, purely because the remaining 40% was understood, processes were in place to deal with it, and the users relished the fact that this was where they needed to focus, rather than on the full 100%!

Resourcing Delivery

“How many data scientists does it take to…?”

Well I’m sure there’s a joke there somewhere, but in reality that’s only part of the answer!

As with any business-driven IT project, there’s a wider context to fit this into. Think about the rest of the team, the skills you need. AI is unlikely to act in isolation.

Most projects I’ve worked on have involved a mix of skills to deliver, such as:

  • Business Requirements
  • User experience
  • Integration with APIs / back-end systems
  • Operational support
  • And good old delivery management!

A solution needs to get data from somewhere to infer an outcome, and this outcome may need to be sent to a back-end system for further processing. I’ve only worked on one solution where this wasn’t the case (we had a standalone dataset and a user interface), but this is rare.

I sometimes find that data engineers or scientists can cover a number of technical roles for an early-stage PoC, but as you move closer to delivery you want experienced skills in these other areas. Don’t ruin an awesome AI solution with a UX that doesn’t work or by sending corrupted data to downstream systems!

A Candidate Delivery Approach

This diagram shows a candidate delivery approach for an AI project:

  1. Understand your business problem and what data is available. Depending on the organisation, this could take a few hours or could take weeks.
  2. In many situations you’re asking if this can be done, so start small and build a proof of concept. Understand the data, does it add value? It’s important that the business remains engaged, someone in the organisation is going to sign off the budget for future phases based on whether they’re happy you can do this!
  3. Take the proof of concept into a pilot, let real users use it, get feedback, iterate the approach. This may require some integration with back-end systems, after all AI doesn’t act in isolation!
  4. Take the pilot and productionise it. This can include improvements to the model, but is also likely to involve improving the data pipeline, adding robustness to edge cases, and alerting when something fails.
  5. Run-time! It’s important in this step to understand how the model behaves over time. Does it self-learn (and if so, is it learning the right things?), or is performance degrading?

In Closing…

Remember, though, that all organisations are different and there is a mix of approaches that can be taken. My aim in this article has been to highlight some of them, in order to de-risk projects and leave the organisation with an appetite to continue on this journey.