
May 2024 Recap – Getting Real with AI

At our May 2024 event, Nick Woo from AlignAI shared a thoughtful, pragmatic perspective on how to figure out which use cases are (and are not!) appropriate for AI. The turnout for the meetup was strong, and the discussion was lively!

Nick started off with a handy definition of machine learning:

“Machine Learning is an approach to learn complex patterns from existing data to make predictions on new data.”

Oh. Sure. Seems simple enough, right? But that doesn’t include generative AI, does it? As a matter of fact, it does (see the quick sketch after these bullets):

  • The existing data is what was used to train the model
  • The new data is the prompt that is provided to the model (!)
  • The response to the prompt is really a prediction when the model processes that new data (!!!)
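
To make that mapping concrete, here’s a quick sketch of our own (not from Nick’s talk), using scikit-learn: fit a model on existing data, then ask for a prediction on data it has never seen. Squint a little and the same shape describes a generative model responding to a prompt.

    # A toy illustration of the definition above (our example, not Nick's):
    # learn patterns from existing data, then predict on new data.
    from sklearn.linear_model import LogisticRegression

    # "Existing data": hours studied -> passed the exam (made-up training set)
    existing_X = [[1], [2], [3], [8], [9], [10]]
    existing_y = [0, 0, 0, 1, 1, 1]

    model = LogisticRegression()
    model.fit(existing_X, existing_y)   # learn the pattern from existing data

    # "New data": an input the model has never seen (analogous to a prompt)
    new_X = [[6]]
    print(model.predict(new_X))         # the "response" is a prediction, e.g. [1]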

Nick also outlined the anatomy of an AI use case:

  1. Business Problem
  2. Data
  3. Training
  4. Model
  5. Accuracy Metrics
  6. UX/UI

Which step is the most common stumbling block for organizations’ proposed use cases? The “Data” one—there needs to be sufficiently scaled, cleansed, and complete data to actually develop a model that is useful. Oh, and then that model will likely need to be refreshed and refined with new data over time.
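
For a rough sense of what “sufficiently scaled, cleansed, and complete” might look like in practice, here’s a sketch of the kind of sanity checks a team could run before committing to a model. The file name, column name, and thresholds are illustrative assumptions, not anything Nick prescribed.

    # Illustrative data-readiness checks (file, column, and thresholds are assumptions)
    import pandas as pd

    df = pd.read_csv("training_data.csv")  # hypothetical dataset

    checks = {
        "enough rows ('scaled')": len(df) >= 10_000,
        "few missing values ('cleansed')": df.isna().mean().max() <= 0.05,
        "label present for every row ('complete')": df["outcome"].notna().all(),
    }

    for name, passed in checks.items():
        print(f"{name}: {'OK' if passed else 'needs work'}")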

The most neglected step in the planning of an AI project? The last step: actually thinking through what the user experience should ultimately be when the model is put into production!

Nick was quick to point out that it is easy to treat AI as a hammer and then see the whole world as a nail. If there is a simpler, cheaper, equally effective way to address a particular business problem, then addressing it with AI probably doesn’t make sense! He also acknowledged (as did several audience members) that we’re currently at a point where some executives truly do just want to be able to say, “We use AI,” which means some projects can be a bit misguided. This phase shall pass, we assume!

Another discussion that cropped up was how to measure the ROI of an AI use case. Nick noted that this can be shaky ground:

  • AI technology platforms pushing to measure impact simply based on the adoption of the technology (rather than quantifying actual business impact)
  • Minimal use of techniques like controlled experimentation to quantify the impact (there is currently too much excitement for anyone to want to withhold the magic from a control group in a disciplined way); a sketch of what that measurement could look like follows this list
  • The ROI of an AI project can be thought of as “the ROI of an OPEX project”: organizations that are disciplined about measuring the impact of their non-AI OPEX projects should be just as capable of quantifying the impact of their AI investments. AI is just another tool in the toolkit, so the measurement mindset can be the same
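
Nick didn’t walk through code for this, but the controlled-experimentation idea boils down to something like the sketch below: hold a control group back from the shiny new tool, measure the same business metric for both groups, and report the difference. The metric and numbers here are invented.

    # Invented example of quantifying impact against a control group
    from statistics import mean

    # Hypothetical minutes to resolve a support ticket
    with_ai_assist = [22, 18, 25, 20, 19, 23, 21]   # treatment group
    without_assist = [30, 27, 33, 29, 31, 28, 32]   # control group (the withheld magic)

    saved = mean(without_assist) - mean(with_ai_assist)
    pct = saved / mean(without_assist) * 100
    print(f"Average time saved per ticket: {saved:.1f} minutes ({pct:.0f}%)")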

And… there was more, including an example scoring matrix for prioritizing use cases across multiple criteria!
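
Nick’s actual matrix and criteria aren’t reproduced here, but a weighted scoring approach generally looks something like the sketch below. The criteria, weights, and 1–5 scores are placeholders we made up to show the shape of the calculation.

    # A made-up weighted scoring matrix for prioritizing AI use cases
    weights = {"business value": 0.4, "data readiness": 0.3,
               "feasibility": 0.2, "UX clarity": 0.1}

    use_cases = {
        "Support ticket triage": {"business value": 4, "data readiness": 5,
                                  "feasibility": 4, "UX clarity": 3},
        "Demand forecasting":    {"business value": 5, "data readiness": 2,
                                  "feasibility": 3, "UX clarity": 4},
    }

    # Weighted score out of 5 for each candidate use case
    for name, scores in use_cases.items():
        total = sum(weights[c] * scores[c] for c in weights)
        print(f"{name}: {total:.1f} / 5")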

A recap post and the slides really can’t do the evening justice, but they’re better than nothing. The recap is above. The slides are right here:

And some pics from the evening: