Bridging the Gap: vFunction's Approach to Addressing Risk and Leveraging Generative AI to Expedite Engineering

Cofounder and CEO Moti Rafalin discusses how vFunction’s team embraces the complexity of the problem and builds unique solutions to the widening gap between cloud innovation and enterprise reality

Moti Rafalin launched vFunction in 2017 to tackle a major enterprise risk, and he has taken the idea of falling in love with the problem to heart in order to deliver for customers. In just six years, vFunction has developed significant growth-driving partnerships, including with one of the largest banks in the U.S.

I sat down with Moti to discuss vFunction, what he’s learned from his multiple stints as CEO, and the future of leveraging generative AI.

What does vFunction do and how did you get to do what you're doing?

I'll tell you the story of how we started and how we're actually an atypical enterprise software company. vFunction is a platform that uses AI to assess, analyze, decompose, and design microservices from complex monolithic applications. We then automate the decomposition. There are some manual steps in it, but it's an end-to-end platform that takes you from the monolithic application all the way through decomposing the business logic, which is the hardest part of the modernization journey.

This is the third company where I’ve served as the CEO. One of the lessons I’ve learned from previous ventures is that you really need to focus on a big problem and fall in love with the problem, and not with a solution. That's almost a cliche. But we took it to the extreme.

We founded the company almost six years ago. When we founded vFunction, serverless was the hottest thing out there in the market. I went to AWS re:Invent in 2017, where Andy Jassy was giving a talk, and you could see that the number of services Amazon Web Services releases every year grows exponentially.

Every year, there are more and more services and functionality that the cloud offers, and you can think of that as a proxy to cloud innovation. On one hand, that's awesome, but on the other hand, you think about what's happening within enterprises. We saw this widening gap between cloud innovation and enterprise reality.

If you want to take advantage of all the great innovation that's happening in the cloud, and if you write new cloud-native applications, you can take advantage of serverless or Kubernetes. But what about the thousands of monolithic applications that are out there, where, if you only lift and shift them to the cloud, you're not getting the benefits of the cloud? To get the true benefits of the cloud, you need to modernize them.

Of course, modernization is a broad term. What we define as modernization is the decomposition of these monolithic applications into microservices.

That allows you to get accelerated engineering velocity, elasticity, scalability, and all the great things that the cloud has to offer. So that was the point where we decided: "Hey, there's no technology that actually does that. It's a very manual process, it's very risky, it's time-consuming. Let's go build a technology to help accelerate that type of transformation of monolithic applications to cloud native."

Now, what makes us unique in the sense of VC funding is that VCs prefer to invest in market risk, not in technology risk; in our case, the risk was in the technology, because it was pretty clear that there was a big market for it. If you run the numbers, there are over 20 million Java enterprise applications, and 80% of them are not cloud native. Even if you take just 10% of those that you want to modernize, and maybe only 5% of those to actually break into microservices, you're still talking about a multibillion-dollar market.

Could you describe a use case and how you do that?

First, we combine dynamic and static analysis of the application. We think about dynamic analysis like an APM (Application Performance Monitoring) agent. We developed a unique APM agent specifically for the modernization use case, where we collect a lot more information in production: for example, how Java methods relate to database tables. We parse the communication to the database, sample the application threads a hundred times a second, and construct the full call tree, and we focus on Java and .NET. Then we combine that with static analysis. All of that requires a lot of data science and machine learning, where the goal of the algorithms in the system is to identify the domains within the application that make sense to decompose.
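
To make the sampling idea concrete, here is a minimal Java sketch of what periodic thread sampling can look like in principle. It is purely illustrative and is not vFunction's agent: it snapshots all live stack traces on a fixed interval and counts caller-to-callee edges, which is enough to assemble a weighted call tree. The class name and sampling interval are assumptions for illustration.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sampler, not vFunction's agent: every 10 ms (roughly a hundred
// samples per second) it snapshots all live stack traces and counts how often
// each caller -> callee edge appears, enough to build a weighted call tree.
public class CallTreeSampler {

    private final Map<String, Long> edgeCounts = new ConcurrentHashMap<>();
    private final ThreadMXBean threads = ManagementFactory.getThreadMXBean();

    public void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(this::sample, 0, 10, TimeUnit.MILLISECONDS);
    }

    private void sample() {
        for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
            StackTraceElement[] frames = info.getStackTrace();
            // Walk each stack bottom-up and record every caller -> callee edge.
            for (int i = frames.length - 1; i > 0; i--) {
                String edge = frames[i] + " -> " + frames[i - 1];
                edgeCounts.merge(edge, 1L, Long::sum);
            }
        }
    }

    public Map<String, Long> snapshot() {
        return Map.copyOf(edgeCounts);
    }
}
```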

So if you think about an ecommerce application, what would be the possible microservices that you would expect to see? There's probably a payment service, a shipping service, and an inventory service, to name a few. These are domains within the application that you would expect to see as separate services. Our system identifies those domains, or makes what we call a best-effort attempt to identify them, by solving mathematical equations that maximize the exclusivity of these services.

It identifies the entry points that correlate to flows that it sees in production. One hundred percent of our customers eventually deploy our agent in production to learn the application. The system itself is deployed securely on premises or in the customer's secure cloud tenancy. All the information stays behind the firewall, and we don't get access to customers' information.

The system identifies those domains and then presents them to the architect. The architect can further refine the boundaries of the services, split or merge services, and put shared resources in a common library. The architect can automatically refactor classes and remove dead code. There are all kinds of actions you can take on the platform, and you get real-time feedback on how they would impact those services from a dependency and class exclusivity perspective; class exclusivity is a fundamental concept in our platform.

If a service has 80% class exclusivity, it means that 80% of the classes are exclusive to that service while 20% are shared across multiple other services. So the higher the class exclusivity, the easier it is to extract—less code duplication. We calculate that across a large number of objects and parameters in the application. Once that is presented to the architect, then you can make changes and further design those services.
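
As a toy illustration of that metric, the following snippet computes class exclusivity for one candidate service as the share of its classes that no other candidate service also uses. This is my reading of the description above, not vFunction's actual formula, and the service and class names are hypothetical.

```java
import java.util.Map;
import java.util.Set;

// Toy class-exclusivity calculation: for one candidate service, the fraction
// of its classes that no other candidate service also uses.
public class ClassExclusivity {

    static double exclusivity(String service, Map<String, Set<String>> classesByService) {
        Set<String> own = classesByService.get(service);
        long exclusive = own.stream()
                .filter(cls -> classesByService.entrySet().stream()
                        .noneMatch(e -> !e.getKey().equals(service) && e.getValue().contains(cls)))
                .count();
        return (double) exclusive / own.size();
    }

    public static void main(String[] args) {
        // Hypothetical decomposition of an ecommerce monolith.
        Map<String, Set<String>> classesByService = Map.of(
                "payment", Set.of("PaymentController", "CardValidator", "Ledger", "AuditLog", "Money"),
                "shipping", Set.of("ShippingController", "RateCalculator", "Money"),
                "inventory", Set.of("InventoryController", "StockLevel", "AuditLog"));

        System.out.printf("payment exclusivity: %.0f%%%n",
                100 * exclusivity("payment", classesByService));
    }
}
```

In this made-up example, "payment" shares AuditLog and Money with other services, so three of its five classes are exclusive and the printed exclusivity is 60%.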

The last part is automatic extraction. The system takes all that input and scans the original source code; that is the first time we need access to the source code. We copy the different artifacts and create new projects with APIs for each of those services. We don't change the original monolith, we just copy code from it, in a very efficient way. We get rid of dependencies, we minimize configuration files, and you get the actual code of those services as the output of the platform, which you can then compile, containerize, and deploy onto any cloud environment.
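
For a sense of what the "copy, don't modify" step could look like in its simplest possible form, here is a sketch that copies the source files assigned to one service out of the monolith into a fresh project layout. The paths, service name, and Maven-style layout are assumptions for illustration; the real extraction described above (API generation, dependency pruning, configuration minimization) is far more involved.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Set;

// Highly simplified sketch, not vFunction's extractor: copy the classes
// assigned to one service into a new Maven-style project, leaving the
// original monolith untouched.
public class ServiceExtractor {

    static void extract(Path monolithSrc, Path outputRoot, String serviceName,
                        Set<String> classFiles) throws IOException {
        Path serviceSrc = outputRoot.resolve(serviceName).resolve("src/main/java");
        for (String relativePath : classFiles) {
            Path source = monolithSrc.resolve(relativePath);
            Path target = serviceSrc.resolve(relativePath);
            Files.createDirectories(target.getParent());
            Files.copy(source, target); // read-only with respect to the monolith
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical paths and class assignment for a "payment" service.
        extract(Path.of("monolith/src/main/java"),
                Path.of("extracted-services"),
                "payment",
                Set.of("shop/payment/PaymentController.java",
                       "shop/payment/CardValidator.java"));
    }
}
```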

Was the idea of decomposition and essentially creating microservices out of the app realized at the very beginning?

Initially, we just thought about decomposing a monolith at runtime into services, so you could actually deploy it on serverless infrastructure. But what we found out is that the real motivation for microservices is really not about performance or cost. It's about engineering velocity. So the biggest motivation is the fact that you can actually have different teams deploy these services separately and have separate release trains.

We had to go deep and really decompose the code; it wasn't something you could just break apart at runtime, and the code-level decomposition is where the biggest benefit is. So there was some evolution. It was the same direction: break the monolith. But we went down to the code level to provide developers with a mechanism to deal with this spaghetti code. And we're talking about applications of 10, 15, sometimes 20 million lines of code. It's impossible to do that without automation.

We then conducted a survey and found that about 79% of these modernization projects fail. And what is failure? They take much longer. They cost three to five times more. You don't get the results that you're expecting. So these types of projects are really fraught with risk if you don't use technology like ours.

You've been playing this game for quite some time right now. How are you thinking about everything that's happening in terms of generative AI coding in your business?

I think it has a massive impact on developer productivity. No question about that. Developers will be able to generate code much faster, and possibly better code. Will that solve the problem of technical debt? I'm not sure.

At the end of the day, you don't get technical debt from day one—it accumulates over time. When you keep making changes and editing stuff, that's where you're cutting corners and that's where it starts to develop. I think it is something that develops over time if you don't monitor it. Generative AI can't decompose monolithic applications. It's not the type of problem that large language models can actually solve. What we do is analyze applications and identify the optimal way to decompose them.

Where generative AI will definitely be helpful is in doing all the mundane work, such as refactoring code and upgrading frameworks after you've decomposed the business logic using a platform like ours. You can use generative AI to get to a newer version of Java or .NET faster. It will be an enhancement to what we do.

So, you've been a CEO multiple times. How is it different this time for you?

Everything is different. I think it starts with the people. My core team is made up of people I've known for almost 20 years, from my previous ventures.

That's one thing that makes life a lot easier. Also, working with the right investors. When you've done this before, you're much smarter about what to look for when recruiting or when dealing with investors. The type of investors you have directly correlates with the quality of a CEO's life. That goes a long way.

Tell me about your growth engine. How are you growing right now?

Currently, we're doing two things. We're relying heavily on partners, and they provide enormous tailwinds. If you think about hyperscalers like Microsoft, AWS, and Google, they have massive engagements with the largest companies in the world, where they're migrating them to the cloud. Customers are becoming much savvier about not wanting to just lift and shift.

They know that lift and shift is okay maybe for a subset of the applications, but if they really want to get the benefits of the cloud, they need to truly transform them into cloud-native and modernize them. And so there is an alignment of incentives here between our company and the cloud providers because we bring this type of technology that helps solve that problem.

Also, with the global system integrators, it's a win-win-win because the technology allows them to modernize more applications faster and with fewer people. This leaves customers happy, and their margins are higher.

As a startup, if you think about what we're doing, this is a blue ocean, because no one else is focusing on this transformation. If we were to go solely direct, that would require a massive investment in marketing, whereas relying on these partners is much more efficient.

In the version of the future where vFunction wins big, what is going on with the business at that time?

First, monoliths are not going away. Let me give you an example. We talked to a unicorn, maybe a decacorn; I think they're now worth $10 billion. They started eight years ago and they built a monolith. They wanted to build something quickly that works. You don't start by building microservices. If you think about the development, coordination, and deployment effort, and the overhead of developing them from the beginning, microservices are not worth it. They built a monolith eight years ago, and now they need to break it because they can't scale anymore.

Now guess what? They went into another line of business two years ago, repeated the same thing, built a monolith, and now they need to break it again. So, monoliths are the evolution of how software is developed. You first develop the monolith, and over time you break it into microservices. So we'll be around to help you with that.

Second, even if that weren't the case, there are still mainframes out there…so these 20+ million Java apps are not going away so quickly. And then the last part of the answer to that question is that we also launched a new product about two months ago, which we call Continuous Modernization Manager.

Once you decompose a monolith into microservices, if you don't continue to monitor those microservices and remove technical debt, they become a monolith that you need to refactor again. We've seen that with customers that went into microservices five years ago; those microservices need refactoring today. So with the same technology that we developed, we now offer what we call the Continuous Modernization Manager, which monitors your CI/CD pipeline and your production environment and alerts you on architectural drift, the accumulation of technical debt, complexity, the exclusivity of services, and so forth. So that's using the technology for a slightly different use case: architectural observability.
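
To make architectural observability concrete, here is a minimal sketch of the kind of CI/CD gate that idea suggests. It is my illustration, not the actual Continuous Modernization Manager: it compares each service's current class exclusivity against a stored baseline and fails the build when it drops by more than a threshold, with made-up service names and numbers.

```java
import java.util.Map;

// Illustrative drift gate, not vFunction's product: flag any service whose
// class exclusivity has dropped more than MAX_DROP below its recorded baseline.
public class DriftGate {

    private static final double MAX_DROP = 0.05; // alert on a 5-point drop

    static boolean hasDrifted(Map<String, Double> baseline, Map<String, Double> current) {
        return baseline.entrySet().stream()
                .anyMatch(e -> current.getOrDefault(e.getKey(), 0.0) < e.getValue() - MAX_DROP);
    }

    public static void main(String[] args) {
        // Hypothetical exclusivity scores recorded at extraction time vs. today.
        Map<String, Double> baseline = Map.of("payment", 0.80, "shipping", 0.90);
        Map<String, Double> current  = Map.of("payment", 0.72, "shipping", 0.91);

        if (hasDrifted(baseline, current)) {
            System.err.println("Architectural drift detected: class exclusivity fell below baseline");
            System.exit(1); // fail the pipeline so the regression gets reviewed
        }
    }
}
```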

I see. So it's really about how to continuously move from monolithic architectures to microservices, which continue to pop up even in a cloud-native context.

Absolutely. And then how you maintain those microservices to keep them healthy, efficient, without technical debt, and with low complexity. It is a fascinating market. What we love is that our engineering team is amazing and they just love the complexity of the problem.
