How GenAI is Useful and Not a Fad

So it seems like we’re finally exiting the blockchain hype cycle and moving on to a new one based on generative artificial intelligence (GenAI, not to be confused with AGI, which stands for artificial general intelligence). Weren’t the last six years of blockchain-based excitement fun? Blockchains kept getting rebranded and repackaged, and by no means is this list complete:

  • Cryptocurrencies and mining - Bitcoin, mining pools, Dogecoin

  • Crypto securities - Initial Coin Offerings

  • Blockchain platforms - Ethereum, Polygon, Terra

  • Stablecoins - Tether, USDC, PYUSD

  • Distributed Apps (dApps) - Axie Infinity

  • Decentralized Finance (DeFi) - Decentralized Autonomous Organizations (DAO), staking, yield farming, cross-chain bridges

  • Non-Fungible Tokens (NFT) - Bored Ape Yacht Club (BAYC), OpenSea, CryptoKitties

How many did you follow, invest in, or get excited by? Personally, I was interested in Bitcoin because of my techno-libertarian leanings, which can probably be summarized by Cryptonomicon. Bitcoin might be all that’s left after the dust settles, with its primary users being the digital equivalent of preppers and gamblers.

When the blockchain hype cycle started, it was only known as Bitcoin. It took some time before people reused its technological underpinnings, the blockchain, to create solutions that needed problems.

For lack of a better term, I’ll call this new cycle the GenAI hype cycle. Like the relationship between Bitcoin and blockchain, I’m guessing that in a few years we’ll look back at it as a “large model” hype cycle; LLMs are the large models for text.

Is GenAI destined for the same boom and bust as blockchain? Or will it be like the dotcom boom/bust, followed by a lot of incremental progress that changes how society functions? Or will it just be a slow change like electricity?

tl;dr GenAI is a Big Deal and Isn’t like Blockchain

GenAI is not going to go bust, if only from the point of view of tangibility. For the average person, blockchains didn’t solve any problems. They did solve problems for people living in countries with unstable currencies, for money launderers, and for people who like to bet. GenAI, though, does something for everyone. It lets college kids focus on drinking, helps office workers generate fluff, and generates assets for video game developers.

But let’s dig in a bit more to see if we can figure out what kinds of things will come from GenAI.

THING EXPLAINER

Talking to my parents reminds me that it’s getting harder and harder to explain things, since everything keeps getting more complicated. It also gets harder because marketing gets in the way. Breaking things down into simpler pieces makes them easier to understand. Once things are easier to understand, it’s easier to figure out their long-term impact.

GenAI is the word most of the buzz has centered on, but I’m going to focus on large language models (LLMs) because they’re what I’ve played with the most. What’s the simplest way I would explain an LLM?

  • An LLM is a database of words and concepts (or, in industry parlance, “tokens”) that maps out the relationships between all the words in a >1,000-dimensional space

  • An LLM is an autocomplete tool

  • An LLM is a natural language processing (NLP) library that lets me, a developer, write code to work with the semi-structured mess of natural languages.

On a side note, one of my favorite books is Thing Explainer by Randall Munroe. In the book, he limits himself to using 1,000 common words to explain things, from a microwave (food-heating radio box) to cells (tiny bags of water that you’re made of).

So let’s dive into some of the details of how I broke down LLMs.

LLM as a database

Take ChatGPT as an example: it’s an LLM that was fed the internet (the good, the bad, and the ugly). They tried to filter out the bad stuff, but it’s like a swimming pool after someone’s fished the poop out.

  • all words are mapped in n-dimensional space, where n is big (>1,000)

    • the actual dimensions are opaque and determined by the model and its training, but to help with conceptualization, a dimension might be:

      • likely words to follow

      • a topic like astronomy

      • the language they belong in

  • You can query the LLM by giving it some text and it will return an “embedding”

    • it’s a list of numbers (a vector) that serve as its coordinates in the database; each number is a dimension

    • related text will be closer together

So ChatGPT has a mapping of all the things on the internet and how they relate to each other. We can take arbitrary text and find similar or related text, even if it uses different words or a different language! It’s also amazing to have so much of the internet’s information in a ~10 GB (or smaller, if you want) download that you can navigate using natural language prompts.
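To make the “database” framing concrete, here’s a minimal sketch of computing and comparing embeddings. The SDK and model name (OpenAI’s Python client, text-embedding-3-small) are my choices for illustration, not anything from this post:

    import numpy as np
    from openai import OpenAI  # assumes the OpenAI Python SDK is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def embed(text: str) -> np.ndarray:
        # Ask the model for the text's coordinates in its n-dimensional space.
        resp = client.embeddings.create(model="text-embedding-3-small", input=text)
        return np.array(resp.data[0].embedding)

    def similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity: closer to 1.0 means more closely related text.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    v1 = embed("The central bank raised interest rates.")
    v2 = embed("La banque centrale a relevé ses taux.")  # same idea in French
    print(similarity(v1, v2))  # high score despite sharing no words

Related text lands close together even across languages, which is exactly the “find similar or related text” trick described above.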

From the sole function of a database, LLMs provide amazing value.

LLM as autocomplete

With a map of everything on the internet and statistics on which words follow others, generating text on a subject (or a mix of subjects) almost seems like the obvious first test case. GPT generally works like this:

  • Given a prompt, calculate its embeddings.

  • Start with a word, then find a likely next word that moves the generated text “closer” to the prompt’s embeddings. Repeat.

  • Use a parameter called “temperature” to adjust how far you want the autocomplete to stray when selecting the next word.

    • Using the example phrase “the sky is”: with a low temperature you’d end up with “blue” or something similar; with a high temperature you could end up with “electric” or “green.”

    • When GenAI “hallucinates,” people say the temperature is set too high.

      • What people like to call hallucinations, I like to call crap. An LLM doesn’t “see” things. It doesn’t know truth from lie; it simply has no regard for the difference. For those with time, this is an excellent essay on crap: wikipedia summary; book.

Under the umbrella of text generation, the “autocomplete” features of LLMs can be used to summarize, translate, transform, and edit. Just be careful with temperature and how much randomness you want to introduce.
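To make temperature concrete, here’s a toy next-word sampler with made-up scores (an illustration of the sampling math, not how any real model is implemented):

    import numpy as np

    rng = np.random.default_rng()

    # Made-up model scores (logits) for words that might follow "the sky is".
    words = ["blue", "clear", "falling", "electric", "green"]
    logits = np.array([4.0, 3.0, 1.0, 0.5, 0.2])

    def sample_next(temperature: float) -> str:
        # Dividing by temperature before the softmax sharpens (low T) or
        # flattens (high T) the probability distribution over next words.
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        return str(rng.choice(words, p=probs))

    print([sample_next(0.2) for _ in range(5)])  # almost always 'blue'
    print([sample_next(2.0) for _ in range(5)])  # 'electric' and 'green' show up

At a low temperature the sampler almost never strays from “blue”; crank it up and the long tail starts winning.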

Unfortunately, this is a real problem for public LLMs like ChatGPT. https://xkcd.com/2169/

LLM as an NLP library

An LLM is an awesome natural language processing (NLP) library, with the downside that it’s slow. In my opinion, it makes up for that by being easy to use and remarkably general purpose. I’ve used it for entity extraction (classic NLP) and taken advantage of its “database of the internet” capability.

  • As a finance example, you can prompt GPT in plain English: “Given the following paragraph of text, extract all company names”.

  • You can then ask it to return the result as a JSON list.

  • Or even better, as a JSON dictionary where the key is the exchange ticker and the value is the company name.

  • Then take the output and feed it into Python’s eval function (see the sketch just after this list).
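Here’s a minimal sketch of that flow. The model name is an assumption, and I’ve parsed the output with json.loads rather than eval, a swap that yields the same dictionary without executing arbitrary code (it also assumes the model returns bare JSON as asked):

    import json
    from openai import OpenAI  # assumes the OpenAI Python SDK

    client = OpenAI()

    paragraph = "Apple and Microsoft rallied while Tesla slid after earnings."

    prompt = (
        "Given the following paragraph of text, extract all company names. "
        "Return only a JSON dictionary where the key is the exchange ticker "
        "and the value is the company name.\n\n" + paragraph
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any chat model works
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # extraction, not creativity: keep the randomness down
    )

    companies = json.loads(resp.choices[0].message.content)
    print(companies)  # e.g. {"AAPL": "Apple", "MSFT": "Microsoft", "TSLA": "Tesla"}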

It’s also great for fuzzy matching, like figuring out the top five most relevant articles out of a pool related to your portfolio’s names, all through a largely plain-English (or whatever other language) API.
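A sketch of that ranking, reusing the embed() and similarity() helpers from the embeddings sketch above (the portfolio and articles are invented):

    # Assumes embed() and similarity() from the earlier embeddings sketch.
    portfolio = embed("Apple, Microsoft, Tesla")
    articles = [
        "iPhone sales beat expectations in Q3",
        "Local bakery wins annual pie contest",
        "EV maker announces new gigafactory",
    ]

    # Score each article against the portfolio and keep the best five.
    ranked = sorted(articles, key=lambda a: similarity(portfolio, embed(a)), reverse=True)
    print(ranked[:5])  # most portfolio-relevant articles, best first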

LLMs bring NLP to the masses (of people who can write code).

Synergy!

Amazing things happen when you combine all three aspects of LLMs (database, autocomplete, NLP). I had fun with the example below:

My prompt (screenshot, article cutoff):

The response (screenshot):

In the example, I use ChatGPT to take a foreign-language article and do the following (with the kind of work I’d historically have to do in parentheses):

  1. pull out people (entity extraction, database)

  2. translate names to English (database)

  3. provide a description (database and autocomplete)

  4. format into JSON (autocomplete)
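The screenshots don’t reproduce here, but the prompt amounts to a single request along these lines (my reconstruction for illustration, not the original):

    Given the following article, list every person mentioned. Translate each
    name to English, add a one-sentence description of who they are, and
    return the result as JSON: [{"name": ..., "description": ...}, ...]

    <article text>

One prompt, four historically separate NLP chores.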

Kicking the Dead Horse

Let’s bring things back to the blockchain hype cycle to give it a swift kick. What is a blockchain in simpler terms?

  • A distributed and decentralized database, at the cost of speed/efficiency

  • Runnable on untrusted hosts, at the cost of speed/efficiency

Neither aspect is compelling on its own, unlike with LLMs. Both are important, though, if you’re trying to create a central, public ledger like Bitcoin that’s impervious to quantitative easing and isn’t owned or run by anyone. For most other ideas in the blockchain space, an annoyingly boring question is “how is this served better by a blockchain than by a regular database?” It’s amazing the hype lasted as long as it did.

Published in 2020 https://xkcd.com/2267/

All Aboard the (non)Hype Train

The future looks bright for what I hope gets rebranded as “large models,” of which the generative capability is just one aspect. Whether they’re used for text or images, having so much information organized into a relatively compact n-dimensional database can empower so many future applications. The ability to process natural language and to generate output back to people (or machines) is going to find its way into every future project. Unfortunately, you can probably also expect your application installs to get bigger and more taxing on your CPU.

Shameless Plug

During the day at System2, we’re about using code, math, and data to help our clients find insights and make better decisions. At the core, though, we’re about taking advantage of the latest technology to do cool things. With LLMs, we’ve already built tools for clients that let them search and summarize company research, earnings calls, and filings [see prior post], as well as tools to surface news and articles related to your portfolio. We’re all pretty excited by the things we can do (time allowing) using GenAI.

matei zatreanu