Blog

June 2023 — Harnessing the Power of ChatGPT with Embeddings and Chat

Here at Columbus WAW, we’ve never claimed to be trendsetters, but we can hop on a bandwagon like a crowd of teenagers chasing a TikTok challenge.1 The challenge? The landscape is evolving so quickly that we wanted a topic with some staying power, and we needed a speaker who could deliver one. Luckily, Pete Gordon fit the bill, and he delivered (and he delivered while sporting a Columbus WAW shirt)!

His slides are available here. And, Pete himself can be found around town in all sorts of forums that he runs or supports, including GDG Columbus, Ohio DevFest (the next one will likely be in Toledo), and Columbus Code & Coffee.

While the talk danced into more technical territory than we generally get to at one of our meetups, it did so in the service of helping attendees think through the actual applications for this brave new world of large language models (LLMs). At this point, even the most non-technical of us have created an OpenAI account and lobbed some questions at ChatGPT. Maybe we’ve even tried out Bard. We’ve read more posts than we care to admit with Thought Leaders explaining 25 ways that YOU can put these tools to AMAZING USE! In short, we’ve lived “in the web interface” or “in the app” when it comes to exploring these platforms.

Pete’s talk, while relevant to this approach, came at the topic from more of a developer perspective: thinking about interacting with these platforms through their APIs. The talk included a glimpse of what that looks like in code but, more importantly, offered a perspective on the give-and-take between an application and a large language model.

And, he framed the presentation around the greatest cartoon series ever created.

Midjourney (Perhaps) Successfully Avoids Copyright Infringement with Its Rendering of “Pinky and the Brain hunched over a computer and writing code. Both creatures have rounded ears, pink tails, and red noses.”

Before diving into Pinky-the-Prompt-Engineer and The-Brain-Doing-Embeddings-and-Vector-Similarity, Pete provided some background and history of natural language processing (NLP), noting that 2012–2013 marked one big jump forward with the emergence of recurrent neural networks (RNNs), and 2018 brought the next leap with Bidirectional Encoder Representations from Transformers (BERT). He recommended attendees watch The State of GPT, the recent talk that Andrej Karpathy (a co-founder of OpenAI) gave at Microsoft’s Build developer conference.

For prompt engineering (the Pinky role), Pete emphasized that there is an important difference between the “base model” underlying one of these platforms and that base model actually being employed as an “assistant.” A base model is not an assistant, but, with effective prompt engineering, it can be made to behave like one! That prompt engineering can certainly be done by a human being (or a dopey mouse) who has read the right articles on the subject and practiced to hone their techniques, but it can also be done by an application designed to iteratively prompt a base model via API calls. The exact same concepts apply either way; a developer just needs to have codified the techniques!
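To make that concrete, here is a minimal sketch of what “codifying the techniques” can look like, assuming the openai Python package roughly as it existed in mid-2023 (the ChatCompletion interface). The model name, system prompt, and question below are illustrative and are not taken from Pete’s demo.

```python
# Minimal sketch: "codifying" the prompt-engineering step so an application,
# rather than a human, shapes the base model into an assistant.
# Assumes the openai Python package circa mid-2023 (ChatCompletion interface)
# and an OPENAI_API_KEY environment variable; the model name and prompt text
# are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_assistant(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            # The system message is the codified "prompt engineering":
            # it tells the base model how to behave as an assistant.
            {"role": "system", "content": "You are a concise analytics assistant."},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep answers relatively deterministic
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_assistant("What is a web analytics conversion rate?"))
```

The system message is where the Pinky-style prompt engineering lives; everything else is just plumbing that an application can repeat and iterate on automatically.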

Pete then shifted to explaining embeddings and vector similarity (the Brain side of things), where at least a few attendees’ minds (including that of the author of this summary) were blown. Unfortunately, much more of this was demoed live in code than captured in his deck, which is why it’s always best to attend in person rather than rely on a mostly-human-written recap after the fact!

In a nutshell2, when you have one of these large language models, you have a “model of unstructured data.” When you have other unstructured data (which could be a prompt, but could also be just a statement or a document: some coherent string of words), you can use it as a query against the model to find out “where” in the model the data you’re passing in fits. That “where” can be represented as a vector of floating point values (think of them as coordinates in an n-dimensional space that will melt your brain if you try to picture it).

“Yeah? So what?” you’re thinking. Well, that’s where things start to get pretty cool. Once you’re working with a vector of numbers, you can start doing “distance” comparisons between pieces of unstructured data, whether that’s other unstructured data you’ve passed into the model or unstructured data that already exists within the model.

The image above shows the resulting vector of floating point values when “Hello World how are you today?” was passed to the model. The bottom part of the screen then shows the sets of unstructured data within the model that are “closest” to that phrase (which Pete indicated were things like the “Hello, World” Wikipedia entry, since Wikipedia is one of the sets of unstructured data used to create the ChatGPT LLM). This part of the session prompted quite a bit of discussion as to potential use cases.
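For anyone who wants to poke at this themselves, here is a minimal sketch of the embed-and-compare idea, again assuming the mid-2023 openai Python package (the Embedding interface). The embedding model name and the toy document list are illustrative, not what Pete used in his demo.

```python
# Minimal sketch of embeddings and vector similarity: turn text into a
# vector of floats, then rank other texts by how "close" they are.
# Assumes the openai Python package circa mid-2023 (Embedding interface);
# the model name and the small document list are illustrative only.
import os
import numpy as np
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def embed(text: str) -> np.ndarray:
    """Return the embedding vector (a long list of floats) for a piece of text."""
    result = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(result["data"][0]["embedding"])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Higher values mean the two vectors (and texts) are 'closer.'"""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = embed("Hello World how are you today?")
documents = [
    '"Hello, World!" program (Wikipedia)',
    "Quarterly web analytics report",
    "Recipe for banana bread",
]

# Rank the documents by similarity to the query; closest first.
ranked = sorted(documents, key=lambda d: cosine_similarity(query, embed(d)), reverse=True)
for doc in ranked:
    print(doc)
```

Cosine similarity is just one common way to measure “distance” between these vectors; the point is that once unstructured text becomes numbers, familiar math takes over.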

It was a broad, deep, and complex topic, but Pete kept it moving, and, as is the norm at these meetups, the audience was engaged!

Next month’s event will be tackling the same world of LLMs, but coming at them from an entirely different angle!

And pictures? Of course!


1 The “like a crowd of teenagers chasing a TikTok challenge” line was provided by ChatGPT. Some of the other suggestions from our future overlord were: “like a kangaroo on caffeine,” “like a clumsy penguin sliding on ice,” “like a squirrel on a sugar rush,” “like a herd of cats chasing a laser pointer,” and “like a herd of wildebeests following the migration.”

2 The post author does not make any guarantees regarding the accuracy of the contents of this nutshell.