Unpacking GenAI, Today: Matt Welsh on the Industry’s Path Forward

The CEO and Cofounder of Fixie discusses integrating AI into everyday tasks, the challenges of academia, and how language models are the next universal translators.

Matt Welsh, CEO and Cofounder of GenAI startup Fixie, has lived many lives across Harvard, Google, Apple, and Xnor.ai. His years building AI systems for low-end devices and navigating the breadth of GPT-3’s applications give him a unique vantage point on a still-emerging but red-hot industry of AI integration.

I sat down with Matt to discuss his path from academia to tech, how his love for engineering systems led to AI, and why his company Fixie is on the path toward solving everyday problems for employees across the globe.

What were some core experiences in your life that led you to what you're doing right now?

I started my career in academia, transitioned to big tech, and then went to startups. I've been joking with my friends that my next step is almost certainly to be a VC. It's about learning and challenging myself in new ways. If I think about my life or career like a video game, it’s as if I get really good at a game and win it a few times. After that, I want to learn a new game instead of just playing the same one over and over and over.

I did my PhD at Berkeley back in the '90s. Back in those days, startups were not a thing. The dot-com boom happened, then the bust happened. Startups were not really considered to be a place where the academically inclined went. It was mostly 23-year-olds who wanted to get rich quick. But that has since changed dramatically.

I remember graduating from college in 2002, and there was zero presence on my radar to “go do something in tech.”

It’s true. The dot-com bust really damaged tech’s reputation. If you were a really ambitious PhD student in the '90s and early '00s, you went to be a professor somewhere. If you were less ambitious, you went to a research lab like Sun Labs or IBM Research. If you were crazy, you went to a startup.

I had a job offer back then from a little startup down in Silicon Valley. They were building a search engine called Google. I could have been, I don't know, employee number 400 or so. I decided not to take it. Search engines were a dime a dozen, and I had very little confidence Google was going to expand my knowledge and challenge me. I was extremely wrong, of course. But at the time, it felt like a place to just write code and not think deeply.

Instead, I ended up spending eight years as a professor at Harvard. After a few years, I learned that professors rarely get to focus on interesting technical and research work. It's about teaching, mentoring students, serving on committees, traveling, and giving talks. For me, these things were enjoyable up to a point. The more senior I became, the less time I had to work on technical ideas, which was why I went into academia in the first place.

So, once I got tenure, I left for Google on sabbatical. The expectation was that I'd spend a year at Google and come back with new ideas that I could research at Harvard.

But after two or three months at Google, I was so much happier being in a place where I could build things all the time that people actually use. I had never tasted that before. It’s tremendously motivating when someone reports a bug. I would be up there first thing in the morning wanting to fix that bug. It gave me such a sense of pride, I think, to be able to solve problems and help other people.

After eight or so years at Google, I was managing multiple teams. Over time it felt like Harvard again: I was managing instead of coding. I left to join a startup called Xnor that developed a new way to run AI models on very low-end devices.

We had to leverage specific instruction sets and transform the code into something efficient. It was fascinating because, back then, I was a systems guy. I wasn’t really an AI person. I had come from embedded systems, performance, networking, and energy efficiency, so translating AI to work on a very resource-scarce hardware platform was right up my alley. It was a wonderful way to transition from being a systems guy to an AI person.

After a year at Xnor, we were acquired by Apple. It was a great outcome for the company, and I was super happy that we could translate our work into products used by billions of people. Apple, of course, needs optimized AI in every one of their products. iPhones, AirPods, and Apple Watches are running AI models all the time.

I left Apple after a couple of months because I wanted to be back in startup land. OctoML was founded by a friend of mine, Luis Ceze, who's a professor at the University of Washington. I think they had just raised their Series A, and when we talked and caught up, I ended up joining OctoML. I led the engineering organization there, scaling the team by a factor of 10 or so during the pandemic.

And then you took the dive into founding your own startup with Fixie.

Yes. Several friends of mine had gone off to start their own companies. And I started talking to them a little bit about their experience, and how they were going about it. It seemed like something I could do, once I had enough background information.

One of my cofounders and I actually applied to Y Combinator kind of on a whim. We had even missed the deadline, but to our surprise, we were accepted. The question became: should we take the YC money or not? I called a bunch of my friends, some of whom were VCs and some were founders. They all said, "You probably don't need YC given the seniority and the experience of the team. It wouldn't benefit you as much." So we turned down YC and instead asked those friends how we could raise capital without it. They didn't invest in us, but they did tell us how to do it.

Which is, if I were to simplify it: Build a deck, outline the problems and solutions, pitch it to tons of VCs, and get a ton of no’s until one or a few invest. It’s a ton of work, and requires a lot of feedback, but the path itself is not complicated.

Exactly. But I hadn't done any work in the area my company wanted to pursue. We didn’t even have a demo or a proof of concept. That was a mistake. I was overconfident when people said to me, "You'll have no problem raising money. This is going to be easy for you. Look at your resume. It's amazing. You'll have term sheets next week." It was much more work than that.

The original idea for Fixie that we pitched was to take language models and apply them to software teams to make them more productive. Now, I want to emphasize this was in the summer of 2022. It was before ChatGPT lit the world on fire. GPT-3 was still a new thing, and there weren't that many people trying to use these models for integrating software systems directly. It's amazing how much things have changed in such a short time.

The idea took inspiration from how Copilot helps developers write their code. I’m religious about Copilot, you know. It has saved me so much time. It's amazing just how much it seems to know about your code and what you're about to do next. And it kind of reads your mind. So my idea was, “Well, why don't we apply the same kind of ideas, but to all the other stuff that takes software teams so much time, like reviewing code, tracking dependencies in a large code base, debugging production outages, and finding stuff in your documentation?”

A software engineer may spend only 20% of their time writing new code. A language model could speed up the other 80%. Our mistake was not building out a demo or proof of concept.

What is Fixie today? And when did you shift Fixie to what you're doing now?

Well, we thought the first step was to collect a huge data set and spend thousands of dollars on cloud GPU cycles to train our model. We started experimenting with the existing GPT-3 model just to see what would happen if we taught it how to call into GitHub, and go from English to a GitHub query. It turned out GPT-3 was so good at this that it completely changed our minds about where to spend our time and money. Instead of building a separate model, we said, “What if GPT-3 worked with Google Calendar, or web search, or other services that people might want to use?”
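
To make that experiment concrete, here is a minimal sketch of the kind of thing Matt describes: few-shot prompting GPT-3 to turn an English request into a GitHub search query. It assumes the GPT-3-era (pre-1.0) openai Python SDK and an OPENAI_API_KEY in the environment; the model name, prompt, and helper function are illustrative, not Fixie's actual code.

```python
# A minimal sketch (not Fixie's actual code): few-shot prompting a GPT-3-era
# completion model to translate an English request into a GitHub search query.
# Assumes the pre-1.0 openai SDK and an OPENAI_API_KEY environment variable.
import openai

FEW_SHOT_PROMPT = """Translate the request into a GitHub search query.

Request: open pull requests that touch the docs in facebook/react
Query: repo:facebook/react is:pr is:open docs

Request: issues about memory leaks in kubernetes closed after January 2022
Query: repo:kubernetes/kubernetes is:issue is:closed memory leak closed:>2022-01-01

Request: {request}
Query:"""


def english_to_github_query(request: str) -> str:
    """Ask the model to complete the 'Query:' line for a new request."""
    response = openai.Completion.create(
        model="text-davinci-003",  # an illustrative GPT-3-era completion model
        prompt=FEW_SHOT_PROMPT.format(request=request),
        max_tokens=64,
        temperature=0,
        stop=["\n"],
    )
    return response.choices[0].text.strip()


print(english_to_github_query("open bugs about flaky tests in numpy"))
```

From there, the generated query could be handed to the GitHub search API, which is the English-to-GitHub bridge described above.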

Remember, this is before ChatGPT and all its plugins came out. At the time, there had only been a couple of research papers on this, and no one was really building this.

So we started building these interfaces between the language model and software tools. We found that it was really easy to do—and the problems you could solve went way beyond making software teams’ lives easier. We pivoted toward pitching ourselves as a platform for building applications that use large language models inside of them. Since we made that shift, the idea caught on with a lot of people. ChatGPT plugins continue to popularize this idea. But then you've got amazing projects like LangChain, LlamaIndex, and other open source things that achieve many of the same goals.
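
For a rough picture of what such an interface looks like, here is a deliberately simplified sketch of the model-plus-tools pattern: the model is shown a set of tools, names the one it wants along with an argument, and the program executes it. The tool names and the dispatch format are hypothetical; production systems such as ChatGPT plugins or LangChain agents use more structured output, retries, and multi-step loops.

```python
# A deliberately simplified sketch of the "language model + tools" pattern.
# Tool names and the 'TOOL_NAME: argument' format are hypothetical.
from typing import Callable, Dict


def web_search(query: str) -> str:
    """Stand-in for a real web search integration."""
    return f"(top results for '{query}')"


def calendar_lookup(query: str) -> str:
    """Stand-in for a real calendar integration."""
    return f"(calendar entries matching '{query}')"


TOOLS: Dict[str, Callable[[str], str]] = {
    "web_search": web_search,
    "calendar": calendar_lookup,
}


def dispatch(model_output: str) -> str:
    """Execute a tool call of the form 'TOOL_NAME: argument', or pass through."""
    tool_name, _, argument = model_output.partition(":")
    tool = TOOLS.get(tool_name.strip())
    if tool is None:
        return model_output  # the model answered directly; no tool was requested
    return tool(argument.strip())


# In a real agent loop, model_output would come from the language model, and the
# tool's result would be fed back into the next prompt so the model can continue.
print(dispatch("calendar: meetings with the design team next week"))
```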

It's been validating for us to see everybody else recognize the opportunity here. It means we don't have to spend time explaining to people why it's a good idea, but now the level of competition and the pace of innovation are high. We had to find a niche to differentiate ourselves rather than build the same thing as 20 other companies, which is much more difficult.

Being a founder in the AI space right now is a little bit like playing a video game on hard mode. Every day, every week, there's a new development in the space that throws a wrench in your plans and completely changes your thinking.

How do you want Fixie to be a part of this movement?

So the focus we're taking at Fixie is to be the operating system that lets companies, and possibly anyone, build applications using language models. We're basically saying, “Let's take all the complexity required to build an app: the models, the prompts, the vector database, all the machinery that's happening inside, and bundle it into a system that runs on the cloud.” It's a SaaS product that allows you to build your own natural language agents and then connect them into your application.

Think of AWS: one platform to see, run, and edit virtual machines, storage, databases, and networking. Fixie is that kind of cloud platform for building with language models. It makes it as easy as possible for anyone to start building their own app while abstracting away a lot of the technical complexity.

To be effective in this space, you also have to spend a lot of time staying abreast of the state of the art. And as I mentioned earlier, innovations come fast. Every day, new research papers, models, techniques, and ideas are published. Companies that want to use language models in their stack aren't going to be interested in tracking all of that innovation themselves, but we can.

Do you have customers right now?

We have a couple of early customers working with us on pilot projects, including a few companies that are building interesting things with Fixie. We're helping them through that process. But most of our time this year has been spent preparing Fixie’s platform for our developer preview, which we just launched a couple of weeks ago.

The developer preview is meant to get feedback. We weren't going to be successful building an AI platform in a vacuum. We need to see what other people use it for, learn about their problems, and hear their complaints, gripes, and bug reports.

Today, anybody can log in and use Fixie directly without any sign-up. It's completely free. A few thousand people are building on the platform, which is great. The next step is to work deeply with the most engaged people in the developer preview community and integrate the platform according to what they need. How could Fixie fit into their day job, and how can we make that happen? Our goal is to learn as much as possible now and worry about revenue and growth later, because the number of use cases is huge and tremendously diverse. It can generate marketing copy, process documents, do enterprise search, answer questions, support customers, and even run operations and sales. There are countless potential applications for this technology.

Language models act as a kind of universal symbolic manipulator and translator.

They can ingest data in any format, let you query that data, process it, and generate new data, with basically no programming. You can often just instruct them in English. So if you think of a language model as a natural language computer that allows you to do almost anything without having to write code, that is a tremendously powerful new thing that didn't exist even a year ago.

It’s a huge game changer. This is not a gimmick. This is radically changing the way we think about information processing, on a massive scale.

You’re working in arguably the most exciting industry right now. And even if you don't believe we're on the precipice of superintelligence, we’re still looking at a future where non-technical people have the agency to create real things with software, without knowing the esoterica of code. How is it going so far? How are you doing?

Well, you're right. I think it's either the best time or the worst time to be building an AI company right now. It is absolutely bananas out there.

One of the good things, of course, is that the area is so hot that attracting employees, customers, and investors has not been terribly challenging. The cynical view that some people have is that many AI companies slap a user interface on top of GPT-4. We aren’t doing that, but it's still going to be difficult to ensure that what you're building has lasting value, can generate real revenue, and can get real customers involved with it.

One of the biggest challenges with AI-based products is knowing how to evaluate whether they're working well. It was really challenging at Google, for example, to answer how people in different parts of the world were using Chrome. I thought, “Gosh, given this massive trove of data, certainly some broad patterns would emerge around how people were using it.” You could break it down and say what fraction of people are spending their time hovering over a button before clicking it, versus their time on social media, video sites, and e-commerce. But even answering those questions is hard.

I still don't know how we're going to do that with AI. One could build AI products, and they could fail to gain traction. Was it because the AI models aren't working that well? Was it something having to do with a crappy UI? AI is a kind of magic black box. You don't have a great deal of understanding of what it's going to do a priori. You can't just say, “We'll just measure these metrics and go optimize for them.”

To me, it comes down to explainability. If the AI explosion that’s taking place today lands us in a world where language models are integrated into everything, we’ll have an entire new breed of products that were built with them at their core. There’s going to be this moment of, “Well, now what? Our entire world is being organized in this way, but we have no idea what's actually leading us to where we’re going.”

I think that's very true. As the models become more sophisticated and complex, the idea that we're going to be able to have that kind of formal degree of certainty about what they do is not likely to be tenable. Even before AI, it was nearly impossible to ask someone, “I'm running this program on my Windows laptop with these drivers installed and these different pieces of hardware and USB devices plugged into my computer. Prove what this thing will do with the following inputs right now. Give me a formal proof of the system's behavior.” It's just more obviously impossible with AI.

Of course, one could inspect the activations of all of the weights in the model as you feed it a certain input, and probably eventually predict what's going to happen. But in reality, that's not going to be a tractable problem. So how do we deal with that as a society?

We may end up treating AI models the same way we treat the employees we hire. If I hire someone to be a UX designer on my team, or a product manager, or a salesperson, I have to assess their ability to perform certain tasks in certain ways. But do I expect them to always do it the same way? No. Do I absolutely expect that they're not going to show up to work one day having had a couple of glasses of wine, and go onto the customer call, and say something inappropriate? I mean, there's no way to guarantee any of these things. So instead, what you do is you build processes around that so that you hopefully mitigate the negative impact of such likelihoods. And I tend to think we're going to need to treat AI models like humans that are also themselves somewhat unreliable.

Fortunately, society has learned to work with those constraints. I think we'll relearn how to work with those constraints once it's the computers doing the same kinds of things.

