Archive | DAW Recaps

April 2026 Recap – The Data-Driven Brand

For our April 2026 meetup, we had Sara Kear—the CMO of Condado Tacos—talk about how analytics and data can be used to help with branding. Yes, Condado Tacos were served. If you missed out, sadly cbusdaw does not do delivery.

Sara pointed out that branding is really about how your brand makes people feel. We might think of “branding” as a package of fonts, logos, and colors, but it’s better thought of as what your audience thinks about you. While this might seem like the softest and most qualitative of data, it can be some of the most powerful for figuring out what your company is doing right and wrong.

In particular, Condado focused on understanding exactly who their best customers were. It turned out that one segment of customers, the “socializers”, drove 70% of Condado’s repeat business! Listening to customers via socials, surveys, and focus groups, and then tying some of that data to actual customer behavior, allowed Condado to make better decisions. That’s exactly the kind of data-driven decision making we’re always talking about, yet it can frequently be so elusive.

Sara also explained that it wasn’t about getting data perfect: it wasn’t about having that mythical 360-degree view of what customers did, it was about taking a human-centered approach and starting by listening to people.

We were also happy to have donated to the Global Foundation for Peroxisomal Disorders on behalf of Sara for this talk.

Upcoming events we mentioned included:
  • Wakeup Startup: April 16 and May 21
  • Columbus Startup Week: May 5-7 at COhatch Polaris
  • DataConnect: October 29-30
  • Tim Wilson at Innovate New Albany’s TIGER Talks: May 15

Sara’s Slides

And of course a few pictures!

March 2026 Recap – Google Analytics Alternatives

For our March event, we had Jason Packer talk about his newly released book, Google Analytics Alternatives, 2nd edition.

Jason’s book is an amazing and stunningly comprehensive run-down of 15 of the top analytics tools in the field — an absolute must-read. Coincidentally, Jason is also one of the organizers of Columbus Data & Analytics Wednesdays and happens to be writing this recap.

While Google Analytics is still by far the most widely deployed tool, Jason believes we’ve entered an era where the challenges of modern data collection mean that for many sites there is a better fit to be found in another tool. That best fit could still be GA4, but thinking of GA as the default tool installed on all sites in all situations is an outdated approach.

Jason pointed out that there isn’t a “best” tool and we shouldn’t think of tool comparison as a competition. His preferred framing is to focus on tool selection (not comparison) and use that selection process as a chance to identify the data questions we’re trying to answer and try to find a tool that can help us do that.

We also talked about how feature comparison lists can be very misleading, and how playing around with live demos or free tiers of these tools can be a good way to learn before committing to a new platform.

Then we held a raffle and gave away 15 copies of the book!

A few links from the event:


February 2026 Recap – People Analytics 101: Making Sense of Compensation Data

For our February event, we took a dive into a topic that affects anyone who draws (or wants to draw) a salary: compensation. Alex Moore from Moore Cooperative walked us through the ins and outs of how companies figure out how much to offer their prospects, and how that has to fit into the ongoing reality of what they’re paying their current employees.

It’s a messy world of competing interests and priorities, and a misstep can quickly snowball: hire someone at a rate that is “too high” and then have them stick around for years with steady percent salary increases, and they can suddenly be compensated outside of the organization’s defined pay bands.

Of course, the pay bands are tough to maintain, too. Reliable market data has a limited shelf life, and figuring out the “right” compensation is more than just matching job titles. The same level and title in one industry may get compensated wildly differently in another industry (often because the role itself is quite different). The cost of living varies widely across geographic regions, too, so that has to be accounted for, but then what happens with remote workers who choose where to live (or who choose to move!)?

Did we mention that pay bands are a nice idea, but they can be maddeningly challenging to put into place when an organization is working to maintain a strong and enduring workforce? According to one study, more than 20% of employees are paid outside their company’s official salary ranges!

Of course, compensation is more than just salary. Enter the “total compensation” discussion: health insurance plans vary widely when it comes to their coverage and cost, 401(k) matches can be anywhere from nonexistent to generous, paid time off can be flexible and expansive or stingy, and even in-office requirements can be draconian or casual. Some of these aspects of compensation are negotiable, and both Alex and an attendee who is a full-time compensation analyst vigorously agreed that every offer should be negotiated!

Alex covered a number of additional aspects of this space:

  • Varying regulations—country (although not the U.S.), state, and city-level requirements for pay transparency (the more you know: in Cleveland, employers with more than 15 employees must include salary ranges in job postings; of course, they can always try to take a page from Netflix and claim a salary range of “$150,000-900,000”)
  • Varying efforts by companies to make their pay “fair”—from deep dive analysis of their comp program and processes to instituting remediation plans to committing to pay transparency
  • Generational divides—one study showed that 89% of Gen Z employees are comfortable sharing their pay with their colleagues (which makes the Gen X author of this recap clutch his pearls)
  • Gender pay gap—yep, it’s still a thing; it was at least closing there for a while until COVID came along and appears to have reversed that trend

The audience was engaged and had a lot of questions. It was hard not to get into some biggies, like: we know information asymmetry generally contributes to inefficiency, so why don’t companies just make full transparency the norm‽ Well…it’s complicated. But it was fun to ponder with the group!

Slides from the event:

And some pictures!

January 2026 Recap – Doing KPIs Right: a KEY to Analytics (and AI!) Impact!

Kicking off 2026 with a bang, we squeezed a healthy mix of long-timers and first-timers upstairs at COhatch Upper Arlington. We opened the event with a bit of a look back and forward on this meetup that has been running since 2008 (!) and followed up with a presentation by one of our OG organizers, Tim Wilson, about KPIs.

Some of the highlights of Tim’s talk included:

  • How KPIs are at the core of one key way that organizations use data: performance measurement
  • How performance measurement—when done well—is the construction of a metaphorical time machine: we establish clear, outcome-oriented KPIs as a way to align on our expectations for what results we will achieve with a campaign, project, or initiative; that then allows us to look at results (in the future) and travel back in time (metaphorically) to objectively compare those results to our expectations.
  • This is simple, right? But not easy?
  • Can AI do that? Can we just ask AI, “How did my campaign perform?” We can, but the best response it will give will look like the response a pretty lousy analyst would provide to the question: a puking of data with some arbitrary comparisons to other data it can access. So, no. We can’t just ask AI. Performance measurement is about humans aligning on expectations for business outcomes.
  • What does work for this? Asking two magic questions: 1) What are we trying to achieve (with this effort)? and 2) How will we know if we’ve done that? That second question is a two-parter: it requires identifying one or more direct or proxy measures (KPIs) and targets for each of those measures.
  • Business teams (marketers are particularly guilty of this) loathe setting targets. It freaks them out. They have a lot of good-sounding excuses for why they can’t set targets.
  • But they’re wrong. No targets. No time machine. Ineffective performance measurement.
  • A (Mini) Wisdom of the Crowds approach is a great way to set targets, though, and everyone gets on board quickly: just have everyone come up with a target (their expectation) independently (since everyone’s got the assignment, no one feels individually exposed) and then share what proposed targets they came up with. This will always spark a thoughtful discussion and a quick alignment on what the target (or target range) should be.
  • AI also comes into this process when we talk about AI initiatives. They, too, can have their performance measured with those two magic questions—what business outcome is the effort trying to achieve, and how will that be measured?
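
The (Mini) Wisdom of the Crowds step above is simple enough to sketch. Here’s a minimal, hypothetical illustration (the helper name and the conversion-rate numbers are made up, not from the talk): collect everyone’s independent estimate, then propose the median as the target and the min/max as the range to discuss.

```python
from statistics import median

def crowd_target(estimates):
    # Summarize independently submitted target estimates: the median
    # becomes the proposed target, and the min/max frame the range
    # for the group discussion.
    est = sorted(estimates)
    return {"proposed_target": median(est), "range": (est[0], est[-1])}

# Hypothetical example: six people independently estimate a campaign's
# conversion-rate target (%).
print(crowd_target([2.0, 2.5, 2.5, 3.0, 3.5, 5.0]))
```

The median (rather than the mean) keeps one outlier enthusiast from dragging the proposed target around, which is part of why the discussion converges quickly.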

Lots of good stuff. Tim even dressed up as the AI cartoon that was sprinkled throughout the presentation. And, he measured the performance of the session itself in real-time by polling the audience at the very end, which was also an opportunity for him to give away a few signed copies of his book, Analytics the Right Way, and a signed copy of John Lovett’s book, The *NEW* Big Book of KPIs. All of that was really an excuse for him to create an R script that he’d have to run on the fly during the presentation. Despite creating the perfect opportunity for this scripting to fail, Tim’s prayers to the gods of R paid off and everything worked.

The slides he used (including the results of the measurement of the session itself! Spoiler: he handily exceeded the target for the two KPIs he’d established) are available below:

October 2025 Recap – Custom GPTs with Bryan Huber

For our October event, Bryan Huber walked us through how he’s developed and deployed a custom GPT within his organization. As the Global VP of Digital Marketing and Analytics at Comfort Keepers, his team fields a wide variety of marketing questions from his organization’s franchisees at different levels of technical sophistication. To empower those questioners as well as lighten the load on his own team, Bryan developed a custom GPT that leverages the question-answering power of ChatGPT, but also grounds it in his own organization’s best practices and adds some guardrails.

So what is a custom GPT anyway? It’s regular ChatGPT, but with a series of available customizations, including:

  • Custom instructions, as in general ChatGPT, but shared across all users and all chats with the custom GPT.
    • This helps your users engineer better prompts and puts them on the right path from the start of each conversation.
    • Instructions can also help control what the custom GPT can do, steering users away from problematic areas.
  • Uploading your own documents to a knowledge base.
    • For example, you could make your own internal best practices documentation or research interactive by uploading them to a custom GPT.
    • These uploaded documents serve as a way to ground conversations in your own vetted information and also make those documents searchable.
  • Restrict the features users have access to.
    • Bryan shared some examples of egregiously poor marketing images created in ChatGPT. Turning off the image-generation feature in the custom GPT prevents users from making those images and instead guides them toward using the custom GPT to create marketing text and come up with ideas, rather than making slop images that might not follow organizational guidelines.
    • Similarly, removing the web-search capability can help focus the output on the vetted knowledge base rather than whatever a web search can dig up.
  • Create “actions”, in the form of external API calls.
    • For example, if you wanted up-to-date currency conversion numbers in your custom GPT, you could connect to an external API using your own API key and get accurate numbers there, rather than relying on outdated training data or slow web search (which might be disabled in your GPT!).
    • Part of Bryan’s roadmap is to connect the custom GPT to the Google Ads API, which would allow its users to get detailed, real-time information about things like the CPC costs of keywords.

All of this for zero additional dollars, as custom GPTs are included on all paid ChatGPT plans! Please note that on lower-level plans, the custom GPTs you create will be public by default and will include their conversation data in future OpenAI training data (the latter can be turned off under “Additional Settings” once the GPT is created).

This functionality is not exclusive to OpenAI: Claude offers similar functionality in “Projects”, and Google Gemini does in “Gems”.

He also walked us through his journey of rolling out this tool to users, from early adopters to a happy user base of over 300 users.

Bryan also provided us with his slides! Since he’s also an organizer of this event, he would’ve had a stern conversation with himself if he had not.

As always, the crowd had lots of practical questions!

September 2025 Recap – Piwik PRO

For our September events we welcomed sponsor Piwik PRO to Columbus for not one but two events!

On Wednesday evening we had Jason Packer of Quantable Analytics talking about tracking methods, and Piotr Słonina of Piwik PRO talking about how their product fuses different kinds of tracking methods together in reporting.  Piotr and Marcin Pluskota flew all the way to Columbus from Wrocław, Poland for the event! We’d like to apologize for their flight delays and inform them that a multi-hour delay in Atlanta is, in fact, a rite of passage.

Due to an unforeseen scheduling snafu we had our first “al atrio” presentation in the atrium of Rev1 rather than in our typical room location, but we made it work. Jason and Piotr pitched a double-header of a presentation that covered things like:

    • How can we track anonymized users in a privacy-respectful way, even if those users decline cookies?
      Our answer – by not “tracking” those users in a way that lasts beyond a short period of time, or in ways that could clearly identify a particular user. Jason felt this should probably all be done with cookies, but the regulations around cookies have caused many vendors to look for work-arounds. Some of those work-arounds are more privacy-respectful than others.
    • Where is the line between a session hash (like IP + User-Agent) and browser fingerprinting?
      Our answer – it’s not a distinct line, but tools that use invasive methods and provide durable fingerprints are on the wrong side of it, at least when used for tracking.
    • How do tools like GA4 and Piwik PRO handle these different types of users: logged-in, cookied, and non-cookied?
      Our answer – GA4’s default blended user identity has a tiered hierarchy of user id, cookies, and modeling based upon “cookieless pings”. Piwik PRO has a more flexible solution that uses session hashes and allows individual sites to choose their own adventure when it comes to dealing with the so-called “consent gap”.
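
To illustrate the session-hash idea from the second question, here’s a minimal sketch of a short-lived visitor identifier in the style several privacy-focused tools use (this is our illustration, not Piwik PRO’s actual implementation). Mixing a value that rotates daily into the hash means the identifier can stitch a session together but cannot become a durable fingerprint.

```python
import hashlib
from datetime import date

def daily_visitor_hash(ip: str, user_agent: str, site_id: str) -> str:
    # A short-lived, non-durable visitor identifier: the current date
    # stands in for a rotating secret salt, so the same visitor gets a
    # different identifier every day and cross-day tracking is impossible.
    material = f"{date.today().isoformat()}|{site_id}|{ip}|{user_agent}"
    return hashlib.sha256(material.encode()).hexdigest()[:16]
```

Whether a scheme like this stays on the right side of the fingerprinting line depends on the salt actually rotating (and being discarded) and on not adding more invasive signals to the hash input.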

Jason’s Slides:

Piotr’s Slides

For those looking to learn even more, we also hosted a follow-up seminar on Thursday. This was the first official Piwik PRO meetup in the US, and we were proud to have hosted it in Columbus! Some of the highlights from Thursday included:

  • A deeper discussion of the Piwik PRO suite including Tag Manager, CMP, and CDP.
  • Discussion of Piwik PRO’s migration away from their freemium model. Pricing now starts at $40/mo, more pricing info here.
  • A sneak preview of how Piwik PRO will be integrating the Fraud0 anti-bot system into their platform.
  • Delicious food from Brassica, and a very restrained amount of griping about GA4 from the audience.

Still hungry for even more Piwik PRO? Check out the upcoming Piwik PRO day on October 21, a virtual event featuring speakers such as Simo Ahava, Brian Clifton, Steen Rasmussen, as well as CBUSDAW veterans Matt Gershoff and Josh Silverbauer!

Some pics from both events:

July 2025 – A Night at the Ballpark

Did you know that Data and Analytics Wednesdays were initially started (not in Columbus) as pure socializing/networking events? From the get-go, we’ve always included an educational component in ours because, well, you know, midwestern-purposefulness or something.

We break that format once a year in December with a content-free event. And, in 2025, we tried to break it a second time by having our July event be a group outing to a Columbus Clippers baseball game.

Alas! We failed to go entirely content-free because baseball, after all, is the OG analytics-oriented sport! Bill James! Sabermetrics! Moooonnnnneeeeyyyyybbbbbaaaaalllll!

So we had a little bit of content. Most of the attendees arrived early so that a few members of the Clippers scoring team could pop down to our seats for a little Q&A that was as fascinating as it was informal!

And then we watched the game!

The discussions in our seats (alas!) drastically outperformed the Clippers on the field. The Louisville Bats were up 7-3 halfway through the 4th inning, and the score remained there for the duration of the contest.

But, with each attendee armed with some Clippers Cash, a koozy, a pass to the Tansky Club behind home plate, and great weather, an excellent time was had by all!

June 2025 Recap – Reducing LLM Hallucinations

Our June event featured Ash Lewis from Ohio State Linguistics talking about why LLMs give us incorrect information so often, and strategies we can use to reduce this behavior.

While the standard term for this is of course “hallucination”, Ash pointed out that “confabulation” more accurately describes what is happening. Hallucination implies that the LLM is incorrectly perceiving something, but what we’re describing is not misperception: it’s the AI creating statistically probable, yet incorrect, information.

Wikipedia agrees with Ash, describing hallucination thusly:

This term draws a loose analogy with human psychology, where hallucination typically involves false percepts. However, there is a key difference: AI hallucination is associated with erroneously constructed responses (confabulation), rather than perceptual experiences.

Of course as any linguist would point out, we don’t get to prescriptively say how language should be used… so we’re pretty much stuck with “hallucination”.

Whatever the term, this happens because everything that an LLM creates is simply what is statistically probable. Output which is also true is coincidental to the process. In other words: it’s guessing about everything, it just happens to be right enough to be very useful.

If you think that this is an issue of the past, limited to older models, here’s a current example from o4-mini:

During the talk we verified our group’s nerd cred by knowing how this guy found out who his dad was.

So how do we reduce this problem as much as possible? Here’s Ash’s helpful field guide notes:

Our prompt about our group’s history set us up for failure by breaking most of Ash’s rules.
We made the following mistakes:

  • Not breaking down (decomposing) our ask into small components. We didn’t ask a more granular question, like the year the group started or a list of previous topics; we asked for a whole history.
  • Not encouraging the LLM to check its own work, step through its reasoning, provide sources, or indicate uncertainty. As soon as we follow up and ask things like, “What is your source for attendance doubling?”, it will say it has none.
  • Not letting the LLM search the web (this is a form of RAG). The o-series models from OpenAI are pretty good at knowing when they should do this, and likely would have done so in this scenario.
  • Not getting more than one response. When asked a second time with the same prompt, it said it didn’t have enough information to give a response.
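
That last point, getting more than one response, can be mechanized as a simple self-consistency check. Here’s a hedged sketch (the function name and threshold are our own, not from Ash’s talk): sample the same question several times and only trust an answer that a clear majority of samples agree on.

```python
from collections import Counter

def self_consistent_answer(ask, question, n=5, threshold=0.6):
    # `ask` is any callable returning a string answer, e.g. a wrapper
    # around an LLM API call with temperature > 0 (stubbed out here).
    # If no answer wins a clear majority of the n samples, report
    # uncertainty instead of guessing.
    answers = [ask(question) for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    return best if count / n >= threshold else "uncertain: answers disagreed"
```

This only guards against confabulations that vary between samples; an error the model makes consistently will sail right through, which is why it complements (rather than replaces) decomposition and grounding via search or RAG.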

Ash then dug into details about the work she is doing with COSI (the Columbus science museum) creating an AI agent that can help visitors with questions about the museum. This work attempts to limit hallucinations as well as provide the museum a more affordable and privacy-friendly solution than just sending things to ChatGPT.

She also helpfully has provided us with her slides!

As usual — especially when it’s a talk on AI — the crowd had a lot of great questions!

And a few pictures from the event. All totally real. Really. Maybe?

May 2025 – Want to Be a More Impactful Communicator? Find Your Shaded Habit! With Ruth Milligan and Acacia Duncan

Our May 2025 event was all about communication. Specifically, it was about all the various flavors (or “genres”) of speaking publicly, be that on a conference stage, in a conference room to a group of stakeholders, in a client’s office for a high-stakes pitch, or to a camera on a video call where the participants may or may not have their cameras turned on.

Ruth Milligan and Acacia Duncan, two-thirds of the author trio behind The Motivated Speaker: Six Principles to Unlock Your Communication Potential, walked a 50-strong audience of engaged attendees through an introspective and interactive exercise in identifying their “shaded (ineffective) habits” when it comes to public speaking.

What is a “shaded habit?” It’s something (or multiple things) that every individual has picked up over the course of their lives that feels natural and comfortable even as it gets in the way of effectively communicating.

The bad news? Everyone has them.

The good news? No one was born with any of them, so whatever those habits are for an individual, they can be identified and unlearned (or, at least, sufficiently mitigated).

Ruth and Acacia opened the session with some ripped-from-the-headlines (and ripped-from-their-clients, identifying details removed) examples of communication failures and led a discussion with the attendees about the root causes of those failures. From there, they prompted everyone to think about their own rhetorical style and what they could identify as their shaded habits. Attendees jotted their thoughts on Post-it notes that Ruth (and Tim) collected and grouped for review and discussion:

The most commonly identified habit? Overuse of “filler words”: “um”, “like”, “you know”. How to address it? Breathe! And shorter sentences. With pauses that give the period its due. [artistic license intentionally taken on the preceding sentence fragments. To make a point. Just did it again.]

Other types of shaded habits that came up included: rambling, talking too fast, not minding the clock, going into too much detail, not thinking sufficiently about the audience’s needs (what questions they want answered rather than what information the speaker wants to share), and more!

Some of the shaded habits were diagnosed as being different forms of stress responses, of which there are fundamentally four distinct flavors: fight, flight, freeze, and fawn. The tricky thing about stress responses is that they’re not going to just go away. They’re going to happen. But, by recognizing what our default flavor of stress response is, we can prepare for how to deal with it, be it by lifting heavy weights just before speaking (for real…!) or “grounding” ourselves (anchor feet to the floor, hands palm down on the table if sitting) or repeating a mantra (“This too shall pass” may work, but it can be whatever works for you!).

Following the exercise and discussion, we had a drawing to give away five copies of The Motivated Speaker to lucky audience members and then had a book signing (20% of the proceeds from the book sales went to Sanctuary Night).

As the emcee noted at the start of the meetup, our goal for every event is for attendees to take away something they can put into action within a week, and our May event absolutely delivered on that front!

Additional pictures from the meetup are below:


April 2025 – Using Predictive Modeling to Prevent Homelessness with Ty Henkaline

Our April 2025 event featured Ty Henkaline talking about work that he has done with non-profits in Franklin County to help better understand homelessness. Ty has been working with Smart Columbus’ Columbus Community Information Exchange Initiative (CIE) to produce research that utilizes data from the Mid-Ohio Food Collective (MOFC) and the Community Shelter Board (CSB) to help us better understand this growing problem.

As Ben Franklin — for whom our county is named — famously said, “An ounce of prevention is worth a pound of cure.” No question this is doubly true for homelessness, and providing early warning to the agencies that help prevent these crises is a great use of data.

But as Ty pointed out, this data is not always easy to come by. Our existing systems were all built separately, and data integration was never a priority. Sensitive data about at-risk individuals is a challenging arena to work in, and Ty emphasized both the value of having partners that were truly invested in making this system work as well as the potential value of additional data sources.

This “spike chart” was a huge hit with the audience, and shows the following things:

  1. A growing increase in services usage (in particular, food banks) was a strong leading indicator of homelessness.
  2. With far fewer data sources compared to LA, Franklin County was able to see a very similar effect. How often do you see that in data modeling?
  3. Individuals experiencing first-time homelessness continue to need an elevated level of services after the initial crisis. This reinforces the notion that prevention can do a lot to improve the overall load on the system.

As promised, Ty provided us with his slides, which contain lots of links and some calls to action! Try scrolling to navigate the slides, or check out the direct link here.

If you’re interested in helping or learning more, please feel free to message Ty on LinkedIn.

Check out the engaged audience!