CEO of Liquid AI Ramin Hasani Says a Worm Is Changing the Future of AI
Today on Digital Disruption, we're joined by Ramin Hasani, co-founder and CEO of Liquid AI and a machine learning scientist at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). Previously, he was a Principal AI & ML Scientist at Vanguard and a Research Affiliate at MIT. His work focuses on robust deep learning and decision-making in complex dynamical systems. He earned his Ph.D. in Computer Science from Vienna University of Technology, where his research on Liquid Neural Networks received global recognition, including a TÜV Austria Dissertation Award nomination in 2020 and an HPC Innovation Excellence Award in 2022. Ramin is also a frequent TEDx speaker.
Ramin sits down with Geoff to share his new and unique approach to AI, inspired by biological and physical systems that could redefine how businesses use machine learning.
Ramin shares insights from biology, specifically the neural structures of simple organisms that have led to the development of Liquid Neural Networks (LNNs). Unlike traditional neural networks, LNNs use differential equations to model dynamic decision-making processes, making them more adaptable for real-world applications. They discuss the advantages of these approaches and the scaling of these models. They also talk about the future of AI and how it may not just be about bigger models but smarter, biologically inspired intelligence capable of transforming how we interact with technology today and into the future.
00;00;01;00 - 00;00;06;08
Geoff Nielson
I'm super excited to talk to him today. I think that, you know, as the CEO of Liquid AI,
00;00;06;08 - 00;00;12;14
Geoff Nielson
he's got these fully new ways of thinking about, you know, AI, this underlying technology of his,
00;00;12;14 - 00;00;13;20
Geoff Nielson
learning models.
00;00;13;22 - 00;00;16;07
Geoff Nielson
I think it's going to revolutionize the way that,
00;00;16;07 - 00;00;19;23
Geoff Nielson
frankly, all businesses could be using AI, you know, a few years from now.
00;00;20;15 - 00;00;29;16
Geoff Nielson
Hey, everyone. This is Digital Disruption. I'm Geoff Nielson, and joining us today is Ramin Hasani, who is co-founder and CEO of Liquid AI.
00;00;29;18 - 00;00;53;09
Geoff Nielson
Ramin, super excited to have you here. Just jumping into it: I mean, whoever came up with the names Liquid AI and liquid neural networks, first of all, from a marketing perspective, I love the name. Can you maybe just walk us through, though: what is the technology? What makes it different? Hopefully not at the PhD level.
00;00;53;25 - 00;01;19;09
Ramin Hasani
Yeah, definitely. So the short story is that we started looking into how we can bring insights from biology and physics into machine learning, because we wanted to see what mathematical operators we can find in biology that don't exist, or that we are not using right now, in this space of neural networks.
00;01;19;10 - 00;01;41;29
Ramin Hasani
Okay. When we were designing these things, when we started this project, it was 2015, and everything started in Vienna, at the Vienna University of Technology, where I started my PhD. And the professor I was working with, together with my current CTO at the time, we started looking into the brain of a little worm.
00;01;42;01 - 00;01;52;24
Ramin Hasani
It is a very, very tiny little worm, but it is a very popular worm. It has won, so far, four Nobel Prizes for us.
00;01;53;28 - 00;01;55;14
Geoff Nielson
Wow.
00;01;55;14 - 00;02;05;12
Ramin Hasani
similarity in genome with humans. So that's why it's really useful for us, you know, to understand how its nervous system, or cells in general, behave.
00;02;05;20 - 00;02;14;28
Geoff Nielson
So that's what you mean by popular. An award-winning worm? Yeah. Yeah.
00;02;14;28 - 00;02;32;15
Ramin Hasani
And then, so, the body of the worm is transparent. Everything that happens inside the worm you can see under the microscope, and that makes it a great model organism. And then, what we have done: we started looking into the data of the nervous system of the worm.
00;02;32;18 - 00;02;46;09
Ramin Hasani
The reason being, if you understand it at the worm level... And by the way, in the theory of evolution, around 600 million years ago we got split; humans got split from this worm. Okay? So if you think about it, somehow our
00;02;48;09 - 00;02;49;12
Geoff Nielson
Yeah.
00;02;49;12 - 00;02;53;06
Ramin Hasani
we wanted to see, we wanted to understand, at the core, what the operations are.
00;02;53;06 - 00;03;23;22
Ramin Hasani
If you understand how the nervous system works at the level of the worm, maybe we can take that and then, you know, scale it into better and more sophisticated learning systems. That was the motivation of what we wanted to do. We wanted to understand how this thing works. We started on C. elegans, and from the mathematical operators that we started learning, this new type of neural network that we designed, I called it liquid neural networks.
00;03;23;24 - 00;03;29;21
Ramin Hasani
And my professor at the time, he wanted to call them regulatory
00;03;32;02 - 00;03;33;08
Geoff Nielson
Yeah.
00;03;33;08 - 00;03;45;26
Ramin Hasani
the time, because these are input-dependent systems that adapt their dynamics, or regulate their dynamics, as they go forward based on the inputs they receive.
00;03;45;29 - 00;04;06;20
Ramin Hasani
And that was the operator that was, you know, inspired by how neurons exchange information with each other. I called it liquid. So that was 2017, when we discovered this thing, and we showed that with 12 neurons, like worm-inspired neurons, you can drive a car, you can drive a robot, basically.
00;04;07;22 - 00;04;08;23
Geoff Nielson
Wow.
00;04;09;03 - 00;04;38;29
Ramin Hasani
And then we showed that with 19 neurons, with a convolutional neural network on top of it, you could do camera-based autonomous driving, full blown. And that was what we've done at MIT. So in 2017 I joined MIT, together with my current CTO, Mathias Lechner, in Daniela Rus's lab. She is the director of MIT CSAIL and another co-founder of ours, along with Alexander Amini, who was a scientist at the time at MIT. Four co-founders.
00;04;39;01 - 00;05;07;28
Ramin Hasani
We continued our journey on scaling this type of liquid neural network in real-world applications. It started with robotics, and then soon we scaled it into robotics and autonomy, and then into modeling time series. So, systems that can actually work really well on sequential data. Now, the data could be coming from sensors mounted on a robot.
00;05;08;01 - 00;05;31;17
Ramin Hasani
It could be video, audio, text, anything. And then we started seeing promise in this type of technology, applying it in different domains. And that was the beginning of how liquid neural networks became a thing that, right now, some students of ours at MIT are doing their PhD in, basically, which is kind of a new
00;05;33;05 - 00;05;38;00
Geoff Nielson
Sorry, they're doing their PhD in this? Working with this specific model?
00;05;38;01 - 00;05;39;20
Ramin Hasani
On these, yeah, exactly. Like
00;05;39;20 - 00;05;40;15
Geoff Nielson
Wow.
00;05;40;15 - 00;05;44;20
Ramin Hasani
of how these things work and, you know, how can we extend
00;05;45;01 - 00;05;45;26
Geoff Nielson
Right.
00;05;45;26 - 00;05;58;12
Ramin Hasani
you know, to build better and better AI systems, specifically for real-world applications. And it's a robotics lab, so the applications were always centered around the real world
00;05;58;12 - 00;06;03;22
Ramin Hasani
and the impact that we could have on the real world. And that's where things are today.
00;06;04;04 - 00;06;28;06
Geoff Nielson
That's so amazing. And, I mean, my mind is still blown just from, as you said, this transparent worm that's informed this entirely different way of doing things. How does it compare to a more traditional neural network? What's different about this one, aside from the fact that it's, you know, worm-influenced? And it sounds like, with what, 12 or 19 neurons, in some ways it's a lot simpler of a model.
00;06;28;09 - 00;06;33;09
Geoff Nielson
What are the implications of that and what makes it, you know, a more attractive model to use?
00;06;33;15 - 00;06;42;18
Ramin Hasani
Yeah. So you see, when we were focusing on robots, we are talking about resource-constrained environments,
00;06;44;04 - 00;06;51;20
Geoff Nielson
Right. Yeah.
00;06;51;20 - 00;07;05;04
Ramin Hasani
so what you try to do is build an expressive system that fits into that small footprint. How can we maximize the expressivity of a system while putting it under these resource constraints?
00;07;05;04 - 00;07;27;13
Ramin Hasani
So you have two objective functions: one is the quality of the model, plus the efficiency of the model. Right? And that's where biology is very useful. It can give you that expressivity. I say expressivity, but I mean the models can match input-output data better than other kinds of systems.
00;07;27;13 - 00;07;47;09
Ramin Hasani
You know, a system that can model data better means higher quality on the benchmarks that you measure, or a lower rate of error on the benchmarks that you measure. You train on a training set, and then you test it on a test set, and the performance on the test set, which comes from the same distribution as the training set,
00;07;47;11 - 00;08;12;06
Ramin Hasani
basically shows you how well your model is generalizing, how well your model is performing, and therefore whether the model is more expressive or not. Okay, that's just a top-level view of what is going on. Now, the kind of machine learning models that these map onto was recurrent neural networks, okay?
00;08;12;12 - 00;08;36;18
Ramin Hasani
Models that have feedback mechanisms in there. That's one of the fundamental properties of our models. So obviously there is a category of feedback systems in machine learning, which we call recurrent neural networks, that are not just computing forward, input to output. They receive an input, they have an internal state, they think, and then they basically generate an output.
00;08;36;22 - 00;09;00;24
Ramin Hasani
Right. So that becomes a recurrent neural network, okay? And this has been there for decades in the realm of artificial intelligence. But liquid neural networks are an instantiation, or a type, of recurrent neural network that comes from the mathematics that we use for describing physical processes.
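The recurrent structure being described (an input arrives, an internal state updates through a feedback loop, an output is produced) can be sketched in a few lines of Python. This is a generic toy illustration of a recurrent cell, not Liquid AI's actual code; all names and sizes here are invented:

```python
import numpy as np

# A minimal recurrent cell: the network keeps an internal state h that is
# updated from the current input x and the previous state, then emits an output.
class TinyRNN:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (n_hidden, n_in))      # input -> state
        self.W_rec = rng.normal(0, 0.1, (n_hidden, n_hidden))  # state -> state (the feedback loop)
        self.W_out = rng.normal(0, 0.1, (n_out, n_hidden))     # state -> output
        self.h = np.zeros(n_hidden)                            # internal state ("memory")

    def step(self, x):
        # The new state depends on both the input and the previous state,
        # which is what distinguishes this from a pure feed-forward network.
        self.h = np.tanh(self.W_in @ x + self.W_rec @ self.h)
        return self.W_out @ self.h

# 19 hidden neurons, echoing the tiny networks mentioned in the interview.
rnn = TinyRNN(n_in=3, n_hidden=19, n_out=2)
inputs = np.random.default_rng(1).normal(size=(5, 3))
outputs = [rnn.step(x) for x in inputs]
```

The point of the sketch is only the loop through `self.h`: each output depends on the whole history of inputs, not just the current one.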
00;09;00;24 - 00;09;26;17
Ramin Hasani
Okay. The mathematics that we use for describing physical processes, like nervous system dynamics, for example, which is a physical process. The mathematics we use is basically differential equations, as a tool to describe those things. So that became the novelty: bringing in those continuous-time systems that predict the next step of a system with a delta
00;09;28;09 - 00;09;36;15
Geoff Nielson
Yeah.
00;09;36;18 - 00;09;39;12
Geoff Nielson
Yeah.
00;09;39;12 - 00;09;56;26
Ramin Hasani
this is how we describe how a physical system makes progress in time, you know. That complexity of how you go from one time step to the next when you want to describe the behavior of a physical system, that kind of process is modeled by a differential equation.
00;09;56;26 - 00;10;22;19
Ramin Hasani
And we use that to describe the behavior of neurons and synapses, and that became the basis of this new type of AI system. As a result, some new operators got added to the class of recurrent neural networks, and those are the liquid operators. Now, in terms of applicability, when we talk about machine learning today, we
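The continuous-time idea, where a neuron's state evolves under a differential equation whose effective time constant depends on the input, can be sketched as a simple Euler integration. This is a simplified, assumed form of a liquid-time-constant-style update, dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A, written as a toy illustration rather than the exact published or production equations:

```python
import numpy as np

def f(x, u, w=0.5, b=0.5):
    # Bounded positive nonlinearity coupling the state x and the input u.
    return 1.0 / (1.0 + np.exp(-(w * x + b * u)))

def ltc_step(x, u, tau=1.0, A=1.0, dt=0.01):
    # Euler-discretized ODE:  dx/dt = -(1/tau + f(x,u)) * x + f(x,u) * A
    # The input-dependent term f(x,u) effectively changes the neuron's
    # time constant on the fly; that adaptivity is the "liquid" behavior.
    fx = f(x, u)
    dxdt = -(1.0 / tau + fx) * x + fx * A
    return x + dt * dxdt

# Integrate one neuron over a slowly varying input signal.
x = 0.0
for t in range(500):
    u = np.sin(0.02 * t)   # some time-varying input
    x = ltc_step(x, u)
```

Because f(x, u) sits inside the decay term, the same neuron responds faster or slower depending on what it is being fed, rather than having a fixed time constant.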
00;10;25;16 - 00;10;26;20
Geoff Nielson
Yeah.
00;10;26;20 - 00;10;46;01
Ramin Hasani
have evolved into, like, generative AI. Why? Because the larger you make these models and the more you scale them, the better they get, right? At the scale I was talking to you about, like 19 neurons just being able to navigate a car or something, that's a very simple kind of application, you know.
00;10;46;04 - 00;11;05;05
Ramin Hasani
But if you want to do a more sophisticated task, like generative AI and language modeling, or you might have a multimodal language model, and when I say language, it could be video, audio and text as input, for example. If you want to get there, you cannot really do that at a worm-level scale.
00;11;05;06 - 00;11;23;13
Ramin Hasani
So you need to scale this thing to much larger instances. One of the hurdles we had in scaling this technology was the fact that, in the academic domain, we had optimized this thing for small neural networks. And if you want to scale this mathematics, you're going to have a lot of trouble.
00;11;23;14 - 00;11;46;16
Ramin Hasani
The reason why? For example, at CERN, you know, in Switzerland, we had the first versions of the supercomputer. Why do physicists use supercomputers? For modeling the physical processes that happen, like for atoms and everything. So when you want to model physical processes at scale, you need supercomputers to actually do that kind of analysis.
00;11;46;22 - 00;12;16;29
Ramin Hasani
Why? Because differential-equation-based mathematics is very hard to run at scale. The larger you make these models, the more compute it takes; forget about running them very efficiently. So we had to make another breakthrough: taking these operators we had built and building an efficient version of them, so that you can actually take them and really scale them into billions or maybe trillions of parameters. And that was a Nature paper.
00;12;16;29 - 00;12;59;21
Ramin Hasani
That would be the Nature Machine Intelligence paper we published in 2022. And this was actually the start of the whole process of why we thought about building this company. So in 2022, after we published this paper in November, somebody wrote about it. This was us solving a fundamental problem: going from a differential equation that didn't have a closed-form solution into the solution space of that differential equation, which allowed us to bypass the computational complexity of that set of mathematics into something we could actually run very efficiently.
00;12;59;21 - 00;13;23;12
Ramin Hasani
And we could potentially scale these things for the first time. And then, you know, Quanta Magazine wrote something in January of 2023, and then my inbox was full. Everybody was saying, oh my God, this is a new type of neural network, they're so powerful. We showed on benchmarks that they were really doing everything well at this small scale.
00;13;23;14 - 00;13;40;15
Ramin Hasani
And now we have the potential to scale them. And everybody was going crazy: okay, so now we should take this technology. Why? What would happen, if 19 neurons can drive a car, what would happen if you put billions of these neurons next to each other? And that became the thesis of the company.
00;13;40;17 - 00;13;45;23
Ramin Hasani
March of 2023, four co-founders. We started from Daniela Rus's lab.
00;13;47;23 - 00;14;08;17
Geoff Nielson
It's an amazing story. And I'm curious, because there are so many different directions, as you said, you can take this thing. And I'm glad to hear you've got, you know, PhD candidates and PhDs working on it. If I'm hearing you correctly, there's everything you can do with 19 neurons, where it sounds like there's still quite a lot to unpack.
00;14;08;23 - 00;14;18;06
Geoff Nielson
And then there's the scaling question. So, I mean, where to from here? Is it both? Are you focusing on applications at both scales, or is there an area you're focused more on?
00;14;18;13 - 00;14;46;23
Ramin Hasani
Yeah, definitely. That's a great question. So, the mission of our company right now is to build very powerful AI systems, with the two objectives that I mentioned. Nature had the same objectives: you build the most intelligent system given the resources, given the scale, available to you. That means right now we're building foundation models, models that are, you know, general purpose.
00;14;46;25 - 00;15;05;28
Ramin Hasani
They can do general tasks; they can communicate with humans in natural language in the form of text, audio and vision. These are the kinds of systems we're building. Even if they can tackle signals, we want the human element associated with them, so it is important to have that kind of functionality in there.
00;15;05;28 - 00;15;40;02
Ramin Hasani
So we identify ourselves as a foundation model company. The models we're designing, we try to maximize their performance while also being mindful of the energy consumption they would have, both at training time and at test time, after you obtain the models. And this is possible because of the breakthroughs we have made on the architecture side, the learning-algorithms side, the data-curation side. There's a lot of research that goes into it. I cannot tell you that you can just swap Transformers for liquid neural networks
00;15;40;02 - 00;16;00;26
Ramin Hasani
and you're just going to have a crazy better AI system or something. There is a whole game we have to play, and the game is basically an infrastructure game. For example, Transformers: since Google invented them, they became the mainstream.
00;16;00;28 - 00;16;24;06
Ramin Hasani
There are tens of thousands of repositories that contributed to building the infrastructure for scaling that kind of technology. But for liquid foundation models, which are the products of this company right now, everything, from the infrastructure up, we had to build from scratch ourselves. Right? Because this was a new technology and we didn't have that much of a developer effect.
00;16;24;11 - 00;16;42;18
Ramin Hasani
We had some impact, but it wasn't as much as, you know, Transformers, because we hadn't talked about this technology for a while. So, as I said, the two objective functions mean that we are going to build at every scale. What does that mean, at every scale?
00;16;42;18 - 00;17;05;06
Ramin Hasani
That means wherever it's possible to host a foundation model, we want to have the best-quality foundation model. Today, let's call it on the edge. When I say edge: edge could be a laptop, it could be a mobile phone, it could be a humanoid robot, it could be an autonomous car, and it could be a satellite.
00;17;05;13 - 00;17;39;12
Ramin Hasani
It could be a network point inside an IoT device. Right? So in all these hardware-constrained places, you would be able to put a foundation model, an intelligent system, in there. We want to have the best-quality model in those kinds of places. What we managed to enable in the one year and ten months of the company's life is that, at this scale of edge, we are very confident we have the best-quality models at every scale. You can put liquid foundation models on a Raspberry Pi.
00;17;39;24 - 00;17;40;25
Geoff Nielson
Yeah.
00;17;40;25 - 00;18;12;05
Ramin Hasani
You can put them on a mobile phone and have an offline interaction with these models. Because of having efficiency in mind and being able to host something directly on the edge, it allows you to have everything done privately on the user's side. Right? And that's already a super value that you can provide to clients. The idea of sovereignty of AI comes across: you want to own your own intelligence.
00;18;12;10 - 00;18;32;12
Ramin Hasani
This is where Liquid AI would help you actually develop the best kind of models, in the cheapest possible form. That is the impact you would have on the client side. And then there's the impact for an enterprise that purchases our license, access to our technology and models.
00;18;32;19 - 00;18;45;08
Ramin Hasani
The impact is that they can host foundation models for free. Why is it for free? Because you're not calling an API anymore. You're not hosting them in a data center. You're running
00;18;45;27 - 00;18;47;15
Geoff Nielson
It's all on the edge.
00;18;47;15 - 00;18;54;02
Ramin Hasani
And if it's running on the edge, that means the only cost you're bearing is the battery of that device, you know, and that's
00;18;54;10 - 00;18;55;08
Geoff Nielson
Wow.
00;18;55;08 - 00;19;04;12
Ramin Hasani
So now, imagine if you can provide intelligence at the highest level on the device. This would be the form of serving this
00;19;04;12 - 00;19;16;00
Ramin Hasani
model, for the first time solving some of the business challenges around generative AI as well, which is that the hosting cost of foundation models as a whole is very, very high.
00;19;16;00 - 00;19;35;17
Ramin Hasani
Right. And what you want to do is reduce that, and in the ideal-case scenario it can be reduced to zero, which would be hosting it directly on a device. The thing is, again, the limit is the point up to which you can actually enhance the intelligence of a system under a certain hardware constraint.
00;19;35;22 - 00;19;48;11
Ramin Hasani
But we believe that what liquid neural networks and liquid foundation models enable us to do is put the maximum amount of intelligence on a given device, with high confidence.
00;19;48;15 - 00;20;11;28
Geoff Nielson
Right. And we're kind of at an abstract layer, Ramin, of how we use this. And to me it's so interesting, and I'm sure there are 10,000 use cases for this, but from that applicability perspective, where are you seeing the most compelling use cases? Where are you seeing people knocking on your door asking, would this be applicable here?
00;20;11;28 - 00;20;18;16
Geoff Nielson
What sorts of sectors? And, you know, if you're able to tell us, what use cases specifically are the most compelling?
00;20;18;16 - 00;20;24;22
Ramin Hasani
Yeah, absolutely. Like on a phone, for example. If I think about Apple Intelligence, you know,
00;20;25;06 - 00;20;30;17
Ramin Hasani
the idea of Apple Intelligence in the perfect world doesn't have a server model, because
00;20;30;21 - 00;20;31;04
Geoff Nielson
Right.
00;20;31;04 - 00;20;50;09
Ramin Hasani
intelligence has two components: one is the on-device computation, like one small model, and one large model that is on the cloud. Now, the more you can run the applications you would care about for generative AI on the phone, and this could be text summarization, this could be image understanding,
00;20;50;09 - 00;21;08;19
Ramin Hasani
you know, this could be document understanding, this could be translation, composition, a lot of applications you can do with generative AI on the edge. If you can lift all this complexity to the edge model, you're winning, because
00;21;09;19 - 00;21;12;11
Ramin Hasani
the cost is just reduced, you know; you can just host it there.
00;21;12;19 - 00;21;29;29
Ramin Hasani
But you need to have reliable models, and today small models are not that reliable for those kinds of jobs. Right? This is just from the generative AI point of view. The next wave of this thing is agentic behavior. A lot of discussion is going into agent flows and agentic behavior.
00;21;30;02 - 00;21;53;18
Ramin Hasani
What do you want a model to be able to do? You want your model to be able to push a button, you know, in a reliable way. There are a bunch of actions you want it to do; you want to give it access to the buttons it can select. For example: book my travel, put something on my calendar, ask it to do things. Eventually, you think about Jarvis,
00;21;54;29 - 00;21;56;03
Geoff Nielson
Yeah.
00;21;56;03 - 00;22;12;17
Ramin Hasani
the eventual kind of form of intelligence that you're thinking about. Like, you can you can do that. Like somehow with the cloud models, but can we do that in a private way? And how much of that kind of weight we can lift it to the edge, you see. And that's like that's kind of the balance of you're thinking about putting in place.
00;22;12;24 - 00;22;36;17
Ramin Hasani
And then you think about agentic behavior. You want systems that can take action. One quality you want to improve, and one thing we are good at today, is instruction following and being able to do tasks, you know, reasoning tasks. And when you want agentic behavior, you always want a constant loop of information coming back to your system.
00;22;36;17 - 00;22;56;02
Ramin Hasani
So the system interacts with a user or environment, and then information comes back, and based on that you might take an action. The action might not be optimal; you want to optimize that action toward the goal. And this becomes the idea of test-time compute and all those kinds of matters that, you know, the o1 series of models can enable.
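That feedback loop (act, observe the result, refine toward the goal) is just a control flow. Here is a toy, self-contained sketch in Python; the number-guessing environment and every class name here are invented stand-ins, not a real agent framework:

```python
# Schematic agent loop: observe -> act -> feedback -> adjust toward the goal.

class Environment:
    """Toy environment: the goal is to find a hidden number; feedback says higher/lower."""
    def __init__(self, target):
        self.target = target

    def apply(self, action):
        if action == self.target:
            return "done"
        return "higher" if action < self.target else "lower"

class Agent:
    """Toy agent: binary search, driven entirely by the feedback loop."""
    def __init__(self, low, high):
        self.low, self.high = low, high

    def choose_action(self):
        return (self.low + self.high) // 2

    def update(self, action, feedback):
        # Information coming back from the environment conditions the next action.
        if feedback == "higher":
            self.low = action + 1
        elif feedback == "lower":
            self.high = action - 1

def run_agent(agent, env, max_steps=20):
    for _ in range(max_steps):
        action = agent.choose_action()   # "push a button"
        feedback = env.apply(action)     # information flows back
        if feedback == "done":
            return action                # goal reached
        agent.update(action, feedback)   # refine; this retry-with-feedback step
                                         # is where test-time compute is spent
    return None

result = run_agent(Agent(0, 100), Environment(42))
```

The structure, not the guessing game, is the point: the agent's next action is always conditioned on what came back from the last one.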
00;22;56;09 - 00;23;29;01
Ramin Hasani
Those are the kinds of things you can also do with small-scale models; you don't necessarily need larger models, because not everybody, on a daily basis, is solving the hardest mathematical problems in the world. There are use cases you can just solve with these smaller foundation models. So far, anything that wants to be lifted in a private way, where you want to use agent workflows and increase your profit margin for the use cases of generative AI and agentic AI,
00;23;29;01 - 00;24;01;16
Ramin Hasani
that has been the discussion of our company. Sectors have been consumer electronics, naturally robotics, financial services, and biotech, because our technology is also really good at something that in machine learning we call credit assignment: having a long sequence of data and working out what the relation of the elements of this sequence is
00;24;01;18 - 00;24;22;22
Ramin Hasani
with respect to each other. Like DNA data, for example. Right? DNA is a sequence of information, and you can process these things with our type of technology. DNA is one instance. The other thing is data modality; it's something we don't have an issue with. As long as you have sequential data, that could be time-series data.
00;24;22;22 - 00;24;56;08
Ramin Hasani
So for finance, we can combine time-series data plus language data, and then you can build more complicated predictors, more complicated systems that can provide you financial advice, financial portfolio optimization. There's a lot you can do with this. And you can also do fraud detection. With some of our clients in the financial sector, we were working on foundation models built with these liquid foundation models, or LFMs, as opposed to GPTs.
00;24;56;13 - 00;25;17;20
Ramin Hasani
So, you know, we are building the best transaction foundation model that you can build. It can process transactional data, whatever customers have done as a sequence of events coming in, mapped against news that is coming in, and then you can try to find the anomalies that happen.
00;25;17;20 - 00;25;48;06
Ramin Hasani
So you can do fraud detection based on these traces; the use case would be fraud detection, right? The use cases vary across industries. For robotics, as you know, you can do control, you can do data generation for generative AI. They are really good at synthetic scenario generation. You can build simulators where you generate synthetic scenarios to improve the quality of a robotic system that is taking an action or doing a sophisticated, unstructured task.
00;25;48;13 - 00;26;14;19
Ramin Hasani
This could be helping your patients, this could be performing surgery, all sorts of human-plus-machine-in-the-loop interactions. This is where generative AI could be really, really helpful in the robotics space. And consumer electronics as a whole is just the question of: can we bring the intelligence onto the edge? Those are the places where we have been active.
00;26;14;19 - 00;26;41;28
Geoff Nielson
I feel like I could talk to you for, like, an hour about any one of those specific use cases. It's amazing to hear about the breadth and, you know, the value created from each of them. I mean, what are the coolest use cases that you've come across so far, or maybe the ones that surprised you the most, where you thought, wow, when we first came up with this technology, I never thought it could do something like this?
00;26;42;06 - 00;26;47;09
Ramin Hasani
I think the most surprising one so far is more on the exploration side of things.
00;26;47;27 - 00;26;54;18
Ramin Hasani
I mean, with some of our bio partners. You know, our company's headquarters is in Boston, and Boston is one of the biotech hubs, so,
00;26;54;18 - 00;26;55;08
Geoff Nielson
Sure.
00;26;55;08 - 00;27;06;28
Ramin Hasani
access to a lot of kind of biotech companies, we started talking to the biotech companies when we built the first version of our DNA foundation models building on top of our technology, and we did a one on one.
00;27;06;28 - 00;27;33;06
Ramin Hasani
Okay, so this DNA foundation model, just to tell you what they can do: they can process a sequence of data, and they can take an instruction about that sequence, like DNA sequences. And then they can generate for you a new sequence, based on the information you provide or the design you would want, for a DNA sequence that would turn into a protein.
00;27;33;09 - 00;27;43;08
Ramin Hasani
You can then fold this DNA sequence into a protein, and that protein could be a drug. It's basically a drug discovery process; you can use this thing for drug
00;27;43;19 - 00;27;44;08
Geoff Nielson
Well.
00;27;44;08 - 00;28;10;25
Ramin Hasani
discovery, early-stage work, which is one of the coolest applications of these things in health care. Right. What I observed: even with a small model, below 1 billion parameters, we matched the error rates in a one-to-one comparison with a GPT used for generation of high-confidence proteins, proteins that for sure would fold into something biologically meaningful.
00;28;10;28 - 00;28;51;00
Ramin Hasani
We saw that, for the first time, we can design new structures that match biological proteins existing in the real world, but with a different DNA sequence. And that was opening a new opportunity for drug discovery. The protein structures coming out of a liquid foundation model, as opposed to a GPT, were novel to the extent that you could practically take action on them: you can take these things and test whether they are going to be a drug candidate, a candidate for the next generation of drugs.
00;28;51;00 - 00;28;58;03
Ramin Hasani
You know, I think it's one of the most fascinating things I have seen from generative AI of that small size. We're not talking
00;28;58;15 - 00;28;59;08
Geoff Nielson
Yeah.
00;28;59;08 - 00;29;13;18
Ramin Hasani
about the GPT-4s of the world, which are much larger, trillions of parameters. You're talking about a sub-billion-parameter model being able to generate proteins of high confidence that might actually turn into an actual drug.
00;29;13;21 - 00;29;19;20
Ramin Hasani
That was super fascinating for me. That's something that I haven't seen in the past.
00;29;19;20 - 00;29;30;10
Geoff Nielson
And if I'm understanding that correctly, we're not just talking about doing it faster. It sounds like it can help with the hypothesizing, the exploration itself. Is that fair?
00;29;30;10 - 00;29;39;03
Ramin Hasani
Yes, it is, because it's just a better learning system. As I told you, there are two objective functions: one is efficiency, and the other
00;29;39;03 - 00;29;39;14
Geoff Nielson
Yeah.
00;29;39;14 - 00;30;01;03
Ramin Hasani
is expressivity. You want to have a very expressive model. They basically consistently outperform transformers. So in some sense, what we are thinking, what we're actually seeing in action, is that there's a potential for a new wave of AI systems building on top of this new foundation model, which is not a transformer system.
00;30;01;03 - 00;30;18;12
Ramin Hasani
And it's a liquid foundation model, LFMs, basically. And that's super exciting. We are exploring the horizontal game; as you see, there are so many places you can deploy this technology in. But at the same time, we're a tiny startup. I mean, we are at 50 people,
00;30;19;29 - 00;30;21;02
Geoff Nielson
Yeah.
00;30;21;02 - 00;30;39;22
Ramin Hasani
so we have to also have some sort of focus at the moment. We're working with some of the people in consumer electronics and on the edge, as I mentioned. But we are not stopping scaling these models, because we just recently closed a financing round led by AMD.
00;30;39;25 - 00;30;57;21
Ramin Hasani
And that round is going to allow us to really scale this technology into regimes that were not possible before. There are a lot of uncertainties, a lot of questions that you have to answer. So far we have scaled these models up to less than 100 billion parameters.
00;30;57;23 - 00;31;15;00
Ramin Hasani
And now we want to see what happens if you scale them further than that. Do they scale? So far it looks promising, it looks like they're going in the right direction, but we have to see how much we can push the boundaries, because the larger you make them, the more capacity these models have for really encapsulating knowledge.
00;31;15;03 - 00;31;17;09
Ramin Hasani
And that's the place where we really...
00;31;17;14 - 00;31;43;18
Geoff Nielson
It's so cool. And from an efficiency perspective, as you get bigger and bigger... I don't know the scale exactly, but if it's ten times or 100 times more efficient than the transformer model, whatever the number is. I think many of us know that compute is a huge issue now, and just the sheer power and electricity required to execute this stuff at scale.
00;31;43;19 - 00;31;55;07
Geoff Nielson
It sounds like there's potentially some really cool gains to be made that in some ways unbend the curve needed to make this whole technology work.
00;31;55;07 - 00;32;08;02
Ramin Hasani
On the development side of these models: when you want to build a liquid foundation model from scratch, as opposed to a GPT, it's going to be ten x more efficient to build something like this. So
00;32;08;02 - 00;32;08;17
Geoff Nielson
Yeah.
00;32;08;17 - 00;32;22;22
Ramin Hasani
the reason being that the computation happens at a linear scale: computation scales linearly with the amount of data the system processes, as opposed to a GPT, where the computation happens quadratically.
00;32;22;28 - 00;32;24;15
Ramin Hasani
So they grow quadratically
00;32;25;12 - 00;32;47;13
Ramin Hasani
That's why they're really hard to scale. There is something called context length, or working memory, for a foundation model. The context length of a foundation model, its memory, is such an important aspect of learning. Learning and memory are very intertwined with each other. Whenever you want to do something beyond humans, you want to extend the working memory of a human, you know?
00;32;47;13 - 00;33;03;04
Ramin Hasani
And that's where things become interesting, you know? And that means millions of tokens. Can we really scale? When you start scaling transformers into those regimes, their computation scales
00;33;03;04 - 00;33;03;23
Geoff Nielson
Yeah.
00;33;03;23 - 00;33;07;20
Ramin Hasani
quadratically. Our computation scales linearly.
00;33;07;27 - 00;33;08;16
Geoff Nielson
Wow.
00;33;08;16 - 00;33;16;12
Ramin Hasani
The longer the information they process, the bigger the gap becomes: the ten x could become a thousand x.
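The gap Ramin describes here can be sketched with a toy cost model (the function names and numbers below are our own illustration, not Liquid AI's actual accounting): self-attention forms an n-by-n score matrix, so its compute grows quadratically in sequence length, while a recurrent-style operator touches each token once.

```python
# Toy cost model: attention compute grows quadratically with sequence
# length, while a linear (recurrent/state-space style) operator grows linearly.

def attention_cost(n_tokens: int, d_model: int) -> int:
    # Self-attention builds an n x n score matrix: O(n^2 * d) operations.
    return n_tokens * n_tokens * d_model

def linear_recurrent_cost(n_tokens: int, d_model: int) -> int:
    # A recurrent-style operator touches each token once: O(n * d).
    return n_tokens * d_model

d = 1024
for n in (1_000, 10_000, 1_000_000):
    ratio = attention_cost(n, d) / linear_recurrent_cost(n, d)
    print(f"{n:>9} tokens -> attention costs {ratio:,.0f}x the linear operator")
```

In this toy model the ratio is simply the sequence length itself, which is why a modest advantage at short contexts can become a thousand-fold one at very long contexts.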
00;33;16;12 - 00;33;37;04
Ramin Hasani
You know, depending on the contexts in which you want to use them. Now, that becomes a fascinating fact, and the reason that's possible is because the form of computation is different. I told you at the very beginning that we discovered new operators that operate in an efficient way, that work with hardware constraints and so on.
00;33;37;06 - 00;33;47;02
Ramin Hasani
And that's where the magic comes in, you know. We have
00;33;47;05 - 00;33;47;16
Ramin Hasani
mathematical operators
00;33;49;06 - 00;34;26;26
Ramin Hasani
that are extremely good at doing efficient memory computation, and they scale really nicely with the sequence length and the amount of data they need to process. That's on the development side. On the deployment side, again, you can control how much memory you want to have at test time. When you test the model, as the computation becomes larger, the amount of memory and the amount of computation that happens on a GPT circuit, for memory, scales linearly.
00;34;26;26 - 00;34;46;28
Ramin Hasani
You know, the more information a GPT processes, the more it has to accumulate that memory, and that restricts its usability for long periods of time on constrained hardware. And this is something that we overcome with the liquid foundation model, which is much lower on this kind of thing.
00;34;46;28 - 00;34;54;19
Ramin Hasani
And that's one of the nice things about them. And as I told you, the scale would be between 10 and 1,000 times,
00;34;54;29 - 00;34;56;07
Geoff Nielson
Holy wow.
00;34;56;07 - 00;34;58;13
Ramin Hasani
on the memory side.
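A back-of-envelope sketch of the memory side Ramin describes (the parameter values here are our own illustrative assumptions, not measurements of any specific model): a transformer's KV cache grows with every token it has seen, while a fixed-size recurrent state does not.

```python
# Illustrative comparison: a transformer's key/value cache grows with
# context length, while a recurrent/state-space model carries a
# fixed-size state no matter how many tokens it has processed.

def kv_cache_bytes(n_tokens, n_layers=32, n_heads=32, head_dim=128,
                   bytes_per_val=2):
    # keys + values, per layer, per head, per token
    return 2 * n_tokens * n_layers * n_heads * head_dim * bytes_per_val

def fixed_state_bytes(state_dim=8192, n_layers=32, bytes_per_val=2):
    # one state vector per layer, independent of tokens seen
    return state_dim * n_layers * bytes_per_val

for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9} tokens: KV cache {kv_cache_bytes(n)/1e9:8.2f} GB, "
          f"fixed state {fixed_state_bytes()/1e6:.2f} MB")
```

With these assumed sizes, the cache crosses hundreds of gigabytes at million-token contexts while the fixed state stays under a megabyte, which is the kind of gap that matters on constrained edge hardware.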
00;34;58;13 - 00;35;22;09
Geoff Nielson
And if I'm hearing you correctly, the bigger the scale, the bigger the delta: the more impact there is in using this, the more efficient your model becomes, right? Which is so crazy, and makes me very, very excited about what you're developing, and speaks to the value of being able to scale this thing out.
00;35;22;11 - 00;35;41;11
Geoff Nielson
With the organizations you're working with, Ramin, is this kind of in production at this point? Is it explorative? Sorry, where are we? Is this ready for the market? Is it a little ways out? What should organizations do when they hear about this and say, wow, I'm dreaming up use cases for this?
00;35;41;14 - 00;35;43;02
Geoff Nielson
How far away is this?
00;35;43;27 - 00;36;09;10
Ramin Hasani
So early versions of these things are ready. We're testing them with a bunch of enterprises, early adopters of the technology, where POCs are now getting completed. So we are getting into the phase where the technology is going to be productionized. And as I told you, one of the challenges we had to overcome, and still one of those places we're constantly improving, is infrastructure, right?
00;36;09;15 - 00;36;28;12
Ramin Hasani
The quality of the models that we develop is phenomenal. Now you need to have a serving stack and a customization stack, so that clients take and buy this software and start using it in many different applications. So far it has been good.
00;36;28;12 - 00;37;07;08
Ramin Hasani
We have early versions of the products getting tested with the early clients that we have in the different sectors I mentioned: consumer electronics, a little bit of e-commerce as well, financial services, and biotech basically. So they're early adopters of the technology, and partnerships are getting built to take this technology to the next stage. We decided to go to market with a focus on enterprises and not user-facing, because we were not ready for the rapid feedback from consumers.
00;37;07;08 - 00;37;26;19
Ramin Hasani
We wanted to really have control over how we are building this thing and make sure that it actually can generate value. Once the value is proven with enterprises, we would also have a consumer-facing game. At the moment, we've just put our technology out there. You can test these models on our own playground at Liquid AI.
00;37;26;22 - 00;37;47;06
Ramin Hasani
You can test the technology on Perplexity Labs; you can actually test one of our models there, we partnered with them to host demos. And you can use our API on Lambda Labs, for example. These are just places where we want to give people early exposure to the technology. We're not monetizing any of this.
00;37;47;06 - 00;37;55;08
Ramin Hasani
This is all freely available to everybody, to just test the technology. And on the business side, what we are doing there is 100% focused on enterprises right now.
00;38;01;02 - 00;38;23;01
Geoff Nielson
I love it, and it's such a cool model, such an interesting approach to say: you go play with it, let's figure out at scale what we can do with this broadly. There's one piece of the value we haven't talked about yet here today that I've heard you talk about in the past, which is the white box, right?
00;38;23;05 - 00;38;33;20
Geoff Nielson
The explainability piece here, which I thought was super cool. Can you tell me a little bit about the philosophy behind that? Why is that important, and what does it look like in practice with your model?
00;38;33;25 - 00;39;05;21
Ramin Hasani
Absolutely, absolutely. So there is a field in electrical engineering called control theory. Okay. Control theory, as the name implies, is the theory of how we control things, right? The way we designed cars, engines, airplanes, and everything around us: every machine that we built stemmed from this. It's the most fascinating field for building machines and engineering.
00;39;05;21 - 00;39;29;06
Ramin Hasani
It stems from this control theory. So control theory has a certain form of mathematics that allows you to design controllers for systems. Let's say the autopilot of an airplane: it is designed to control that process, to fly that airplane, right?
00;39;29;11 - 00;39;53;24
Ramin Hasani
Autonomously. Now, you have 200 years of knowledge of how this mathematics works, from control theory. Obviously, you want to design safety-critical systems using mathematics, systems built the way humans design engines. We want to have full transparency into the design of systems.
00;39;53;24 - 00;40;27;26
Ramin Hasani
You know, the mathematics of liquid foundation models is informed by control-theory mathematics. That allows us to use those operators, and 200 years of knowledge from control theory, to understand how foundation models come up with decisions. That means that today, instead of designing full-blown black boxes, which transformers basically are, matrix multiplication systems, if you want to really understand them,
00;40;27;26 - 00;40;54;15
Ramin Hasani
you open them up, you look at the behavior of a certain neuron, and you hope that you start understanding a little bit of the behavior in there. Because the process of really understanding how these things work, specifically at scale, is very ad hoc and very complicated. Anthropic is actually doing a lot of work on the interpretability of GPT-based models, with their Claude models.
00;40;54;15 - 00;41;15;01
Ramin Hasani
They're saying, Dario was saying, 25% of their organization is focused on interpretability, figuring out what happens inside that black box so they can open the black box. What we thought was: okay, let's take first-principles thinking. Inherently, the mathematics of the neural networks we designed is rooted in control theory.
00;41;15;01 - 00;41;38;13
Ramin Hasani
Therefore we have the tools to understand these systems, and therefore this kind of technology is really good to be applied in safety-critical applications. That's why I called it white-box intelligence: on the development side, I can pause my learning process when I'm actually training this model. At every instance of the training process, I can pause the system.
00;41;38;15 - 00;41;58;24
Ramin Hasani
I can take an instance of my model and look at it. Every layer is doing the job of a controller in a control system. I can look at it through the mathematics of control systems and really understand how my system works. Now, from this instance, I can direct the behavior of the system.
00;41;58;24 - 00;42;17;05
Ramin Hasani
So I have a lot more control over the design of the system. That's on the development side. On the testing side, when you want to understand how a network came up with a decision, or if the network hallucinated or made some mistakes, how can we go and do root cause analysis?
00;42;17;05 - 00;42;22;27
Ramin Hasani
And that's something you can enable with a purely first-principles way of thinking from control theory.
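As a rough illustration of what inspecting a layer with control-theory tools can mean (a toy sketch of our own, not Liquid AI's actual tooling): if a layer implements a linear state update dh/dt = A·h + B·x, classical stability analysis applies to it directly, e.g. the eigenvalues of A say whether the dynamics are stable.

```python
import numpy as np

# Pretend this 2x2 matrix is a trained layer's state matrix A in
# the linear dynamics dh/dt = A h + B x.
A = np.array([[-0.5, 0.2],
              [0.1, -0.8]])

# Continuous-time stability criterion from control theory: every
# eigenvalue of A must have a negative real part.
eigvals = np.linalg.eigvals(A)
stable = bool(np.all(eigvals.real < 0))
print("eigenvalues:", eigvals)
print("stable dynamics:", stable)
```

The point is that the check is a closed-form property of the layer itself, not an after-the-fact probe of individual neurons.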
00;42;23;18 - 00;42;40;08
Geoff Nielson
You know what's so amazing to me about that, as you said, is that it's inherent in this model, right? You don't have to go back in and retrofit it, and reverse-engineer it, and say, okay, how can we have all the smartest minds in the world break apart what we already did? It's right there to begin with.
00;42;40;08 - 00;42;58;18
Geoff Nielson
And I don't know if this is a coincidence or not, but the image I saw is that transparent worm, right? Just like the worm, you've got the model where you can see all the things that it's doing. So it's super, super cool. I did want to ask you, you talk about all the amazing work you're doing scaling this out.
00;42;58;21 - 00;43;14;28
Geoff Nielson
You know, it gets more valuable at scale. In the world according to Ramin, if things go your way and it grows the way you want it to, where do you see this technology in 3 to 5 years, in terms of applicability, in terms of what it can do? Do you have some sort of long-term vision?
00;43;14;28 - 00;43;20;03
Geoff Nielson
I hesitate to call it a master plan, but some sort of view about where all of this is going?
00;43;20;10 - 00;43;37;22
Ramin Hasani
Well, absolutely. Great question. So I've been thinking about this a lot. In an ideal world, you want to have the right type of intent. One of the things that we started with, as a slogan of our company, is "machine learning done right,"
00;43;38;15 - 00;43;51;28
Ramin Hasani
thinking about both, you know. The base of AI systems should be something that gives you the power that the ChatGPTs of the world are giving you, but at the same time does not consume the entire power of a country, you
00;43;52;02 - 00;43;52;23
Geoff Nielson
Right.
00;43;52;23 - 00;43;53;29
Ramin Hasani
know. So you want to
00;43;53;29 - 00;44;12;29
Ramin Hasani
do that in a sustainable way. So I feel like this approach is so fundamental, and it comes from nature. Nature gifted us 13 billion years of evolution, and there's a lot to be discovered from nature.
00;44;12;29 - 00;44;15;01
Ramin Hasani
You know, this is just the tip of the iceberg. We captured
00;44;15;01 - 00;44;15;18
Geoff Nielson
Yeah.
00;44;15;18 - 00;44;27;05
Ramin Hasani
some element of this thing, and it enabled us to design better learning systems, you know. Now, when I think about future AI systems, I can see this becoming the platform, to
00;44;27;09 - 00;44;27;25
Geoff Nielson
Yeah.
00;44;27;25 - 00;44;40;11
Ramin Hasani
be the base for AI systems of the future, because you want more reliability as we're moving towards agentic systems, machines that we give control to take action in the real world.
00;44;40;13 - 00;45;01;14
Ramin Hasani
You want the technology to be trusted as well. You want the technology to be understandable. You want the technology to be 100% controlled by us. The only way you can do that is if the base of the intelligence is not a black box anymore; it's a white box, a system you have a lot of control over, and at the same time you're not spending that much energy to develop it.
00;45;01;17 - 00;45;08;08
Ramin Hasani
So in the near future, I can see liquid foundation models embedded in
00;45;11;03 - 00;45;18;27
Ramin Hasani
anything that we have, and even porting them onto satellites as an edge device. I would consider them there.
00;45;18;29 - 00;45;40;13
Ramin Hasani
And that would be the short term. In the long run, we would have larger instances of these models. They could be hosted, but they would not consume that much energy, and at the same time they would give us, comfortably, a really good experience working with foundation models that come from nature, for us, basically.
00;45;40;13 - 00;45;41;26
Ramin Hasani
So something like that.
00;45;42;06 - 00;46;05;09
Geoff Nielson
So when I look at your technology, you've got effectiveness, you've got efficiency, you've got explainability. Is there anything left for these more traditional models to compete on? Is there a world, in your dream, where this just becomes the new platform for AI, a world where this is working well at scale?
00;46;05;12 - 00;46;11;21
Geoff Nielson
Do we even need all the other stuff that exists now, or could it be that this is the better way and it replaces all of that?
00;46;12;15 - 00;46;14;07
Ramin Hasani
I mean, there is that potential,
00;46;15;26 - 00;46;18;22
Ramin Hasani
at scale. We need to unlock that scale to see that
00;46;19;06 - 00;46;20;07
Geoff Nielson
Right.
00;46;20;07 - 00;46;28;15
Ramin Hasani
scale. But I would say we can cohabit with the other models as well, because a lot of energy has already gone into building these large models.
00;46;28;15 - 00;46;28;28
Geoff Nielson
Yeah.
00;46;28;28 - 00;46;33;02
Ramin Hasani
For example, Claude is really good at coding. Okay,
00;46;33;20 - 00;46;57;26
Ramin Hasani
ChatGPT, GPT, is really good for the general-purpose questions that you would ask the model. The o1 series are good models for reasoning. You know, the base is something, and there are also technologies like RAG, retrieval-augmented generation, and so on. So what I want to say is that there is always a space for specialized models like that.
00;46;57;29 - 00;47;11;16
Ramin Hasani
There is no one model ruling them all; there's always space for everything. But then again, today, in the places where you do have resource constraints, if you want to make money off of generative AI, this is the technology
00;47;13;01 - 00;47;18;07
Ramin Hasani
that you want to use, because you really have to work on the efficiency angle, by the way.
00;47;18;07 - 00;47;33;13
Ramin Hasani
So that being said, a lot of groups have also started looking into new architectures, new algorithms, really massively going after transformers: how can we make transformers more efficient? There is also this belief
00;47;33;17 - 00;47;34;05
Geoff Nielson
Yeah.
00;47;34;05 - 00;47;40;20
Ramin Hasani
that transformers are brilliant. You know why? Because they are doing unconstrained computation.
00;47;40;20 - 00;47;49;28
Ramin Hasani
That means unbiased computation. What does that mean? It means you don't put any biases into the architecture. You say matrix multiplication; that's as much as you do.
00;47;50;18 - 00;48;00;05
Ramin Hasani
Matrix multiplication: what can be more elegant than that? It's a really nice, very simplistic thing that you can scale, which is very nice, but it has some shortcomings.
00;48;00;05 - 00;48;23;04
Ramin Hasani
And maybe nature has gifted us these operators that we are using, and those operators are absolutely necessary for building the next generation of controllable and reliable AI systems. That's kind of the place where we want to go. The more reliability you expect from an AI system, the more you want to use a liquid foundation model as opposed to a GPT, basically.
00;48;23;04 - 00;48;39;12
Geoff Nielson
Well, and that's exactly why this is so compelling to me. As you said, if you're looking to get ROI, if you're looking to make money off of AI, to me it's a no-brainer. I quickly wanted to ask you about quantum.
00;48;39;14 - 00;48;50;00
Geoff Nielson
We talked very briefly about it. As we scale, you mentioned supercomputers. Is there a quantum play here? Is that something you're exploring? How does that come into the future of this scale?
00;48;50;00 - 00;48;53;04
Ramin Hasani
We are definitely looking at it as a field. You know, like,
00;48;53;29 - 00;49;11;26
Ramin Hasani
about our technology, but I can tell you that form of computation is absolutely incredible. So I'm looking forward to when it is practically ready for us. There are quantum inspirations that we can take, and you can do quantum-inspired machine learning efforts.
00;49;11;26 - 00;49;36;17
Ramin Hasani
But I would say, the most important thing, I'll tell you a story from 2016. There was a conference, an AI conference, in Barcelona; it's the largest AI conference in the world, called NeurIPS, Neural Information Processing Systems. And there, IBM was showcasing their quantum computer, you know, the machine
00;49;39;27 - 00;49;40;25
Ramin Hasani
that is hanging.
00;49;40;28 - 00;49;54;16
Ramin Hasani
And I asked the guy: so, what do you think? When can we have this commercially available to us? He told me 2035.
00;49;54;20 - 00;49;56;19
Geoff Nielson
Oh my gosh! Yeah.
00;49;56;19 - 00;50;09;16
Ramin Hasani
And I feel like this is kind of true. I feel like this would be the time when the true value of quantum would come out, unless you have massive breakthroughs coming along.
00;50;09;16 - 00;50;25;02
Ramin Hasani
But I would say, like my timeline for being, having, like, a reliable quantum computers that we can use a commercially this would be like the timeline that I would, I would think about, but it is absolutely promising. And this is one of those places where scaling computation is not just a software game, it's a
00;50;25;10 - 00;50;25;25
Geoff Nielson
Yeah.
00;50;25;25 - 00;50;29;10
Ramin Hasani
it's the medium and the form factor as well.
00;50;29;10 - 00;50;35;21
Geoff Nielson
But you're not waiting for quantum. You know, it's ten years out? Great, maybe then. But you've got work to do in the meantime.
00;50;35;21 - 00;50;54;04
Ramin Hasani
Yes, yes. So we can work with the computers that are here. On the hardware side, though, there are also ways to build specialized hardware for the type of software that we are designing. So far we have been adapting our technology to the existing hardware, but there are ways to also co-design new hardware systems.
00;50;54;04 - 00;50;58;16
Ramin Hasani
And that's another exciting area that I think we are going to explore in the near future.
00;50;58;22 - 00;51;14;14
Geoff Nielson
So this is clearly an area of passion for you. And you talk about the path you've taken, over close to ten years, to get here. Is this something you had a passion for since you were a kid? Where did this come from?
00;51;14;14 - 00;51;20;29
Geoff Nielson
And could you ever, in your wildest dreams, have foreseen that this is where it was going to go?
00;51;21;03 - 00;51;39;17
Ramin Hasani
Not this way. I think one of my superpowers is that I can focus so much. I've been a scientist all my life, and my question was always about intelligence: I got interested in intelligence itself, in the form of understanding how intelligence works.
00;51;39;19 - 00;51;54;28
Ramin Hasani
That was the year 2014. Okay. And I thought: okay, I'm on the verge of becoming a scientist in this field, where I would just go and crack some aspect of intelligence and win a Nobel
00;51;57;11 - 00;51;59;29
Geoff Nielson
No big deal. Yeah.
00;52;00;02 - 00;52;15;24
Ramin Hasani
And then 20, 22, when we solve that equation, like we solve this equation, I told you like it is nature machine intelligence paper that got us to the point where we could scale these models was an equation that I solved that didn't have, the mathematical form, didn't have a solution since 1907,
00;52;16;04 - 00;52;17;07
Geoff Nielson
Wow.
00;52;17;07 - 00;52;30;00
Ramin Hasani
an equation describing the behavior of neurons, which two English scientists, Hodgkin and Huxley, actually cracked; they described the behavior of neurons very, very closely.
00;52;30;03 - 00;52;41;06
Ramin Hasani
And in 1953, they published a paper; in 1963, they won a Nobel Prize. Okay. My thing was: this equation was known not to have a closed-form solution yet. Then in
00;52;42;07 - 00;52;50;18
Ramin Hasani
2022, I cracked it. Basically, we did this together with our teams at MIT. And I thought: okay, this is the path that I'm going to go down.
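For readers curious what a closed-form solution buys in practice, here is a heavily simplified sketch in the spirit of the published closed-form continuous-time (CfC) idea; the weight shapes and the tanh branches are our own illustrative choices, not the paper's exact formulation. Instead of numerically integrating a neuron ODE step by step, the state at time t is written directly as a time-gated blend of learned branches.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfc_state(x, t, Wf, Wg, Wh):
    # State at time t in closed form: no ODE solver in the loop.
    f = np.tanh(Wf @ x)      # learned head that sets the time constant
    g = np.tanh(Wg @ x)      # one learned target state
    h = np.tanh(Wh @ x)      # another learned target state
    gate = sigmoid(-f * t)   # how far the dynamics have evolved by time t
    return gate * g + (1.0 - gate) * h

rng = np.random.default_rng(1)
x = rng.standard_normal(4)
Wf, Wg, Wh = (rng.standard_normal((4, 4)) for _ in range(3))
print(cfc_state(x, t=0.5, Wf=Wf, Wg=Wg, Wh=Wh))
```

Because evaluating the state at any t is a single expression, training and inference avoid the cost and instability of a numerical ODE solver, which is part of what made these models practical to scale.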
00;52;50;18 - 00;52;57;17
Ramin Hasani
Right. But then, you know, the shifting experience was the exposure to venture capital, and also seeing how things
00;52;57;21 - 00;52;58;13
Geoff Nielson
Yeah.
00;52;58;13 - 00;53;19;20
Ramin Hasani
are like in the real world. And then we thought that, okay, so maybe it goes beyond just thinking about just the scientific aspect of it. And we can actually take this technology and bring, bring value by, by scaling the technology properly, you know, and we deeply thought about this with the four co-founders of mind and one of them being like one of the greatest mentors of all time, like one of the robot, one of the greatest roboticist scientists
00;53;20;04 - 00;53;20;18
Geoff Nielson
Yeah.
00;53;20;18 - 00;53;23;01
Ramin Hasani
Daniela Rus, Professor Daniela Rus. She's a
00;53;23;01 - 00;53;41;29
Ramin Hasani
phenomenal scientist, I think one of the top ten most powerful scientists in the world. And I have been privileged, and it has been a humbling experience, really working with her and working with my colleagues. You know, I usually like to refer to my CTO as the smartest man on the planet, because, I mean, I feel dumb next to him. I can
00;53;44;12 - 00;53;55;12
Ramin Hasani
tell you, Matthias is absolutely incredible. I have the same feeling about Alexander, who is my other co-founder, and the rest of the people that we have at Liquid AI. We have been fortunate to be
00;53;55;12 - 00;54;15;13
Ramin Hasani
able to attract a group of talented people into one space, and this group of people is just so phenomenal. As I said, the joy of interacting every day with people who are smarter than yourself is something you cannot replace with anything else. Even if nothing happens and we fail as a venture,
00;54;15;13 - 00;54;20;27
Ramin Hasani
Just the experience of working with these people has been the most important pleasure that we had.
00;54;21;01 - 00;54;35;14
Geoff Nielson
So in your kind of world of hopes and dreams, is, you know, scaling this in enterprises the definition of success and the Nobel Prize the cherry on top? Or is the Nobel Prize what you're going for, and everything else is the cherry on top?
00;54;35;18 - 00;54;42;14
Ramin Hasani
I don't know. Like, I felt personally, internally, that I did enough on the scientific side
00;54;43;06 - 00;54;46;03
Ramin Hasani
when I cracked that kind of equation. And
00;54;46;07 - 00;54;46;24
Geoff Nielson
Yeah.
00;54;46;24 - 00;54;52;08
Ramin Hasani
became kind of the base of this company as we go forward. I thought that, okay, this is good enough, you know, I
00;54;53;12 - 00;54;54;07
Geoff Nielson
Yeah.
00;54;54;07 - 00;54;57;14
Ramin Hasani
the future, it's good if it happens, like the Nobel Prizes.
00;54;57;17 - 00;55;01;22
Ramin Hasani
But that's not something that I'm actually focused on.
00;55;02;06 - 00;55;02;27
Geoff Nielson
Yeah.
00;55;02;27 - 00;55;14;12
Ramin Hasani
My focus is that it's not just an enterprise play as well. I want it in the hands of every single person in the world. We want to bring value. We want to bring intelligence into the hands of people in every form that is possible.
00;55;14;12 - 00;55;35;09
Ramin Hasani
You know, in the future, if we're not just having mobile phones and laptops in front of us, this could be glasses. This could be other kinds of devices that we would wear, and maybe even internal chips and stuff. You never know; the future is very interesting. But again, the intelligence is always tied to the substrate you're hosting it on, you know.
00;55;36;02 - 00;55;40;12
Ramin Hasani
That's something that interests me, you know, from the business point of view as well.
00;55;40;12 - 00;55;57;23
Ramin Hasani
You know, when we're thinking about the future of intelligence, I'm not just thinking about the largest form of intelligence that you can possibly put into a data center, but the one that actually is very, very intelligent but that you can put in the hands of people, you know.
00;55;58;00 - 00;55;58;22
Geoff Nielson
Yeah.
00;55;58;25 - 00;56;01;04
Ramin Hasani
That's like much more fascinating for me.
00;56;01;08 - 00;56;15;14
Geoff Nielson
Well, that's another kind of echo of the animal kingdom too, right? There's the brain and the body, and you're looking at how to get the right brain on the right body. It's so cool. And I could talk for a long time about this, but I know we're just about at time.
00;56;15;16 - 00;56;18;26
Geoff Nielson
Is there anything else you wanted to talk about today?
00;56;18;26 - 00;56;25;07
Ramin Hasani
No, no, I think we covered so much. Thank you so much for having me. This was a pleasure chatting with you.
00;56;25;07 - 00;56;30;13
Geoff Nielson
Hey, the pleasure is all mine, Ramin. Thanks so much for joining today. This has been such an enlightening conversation.
00;56;36;01 - 00;56;40;17
Geoff Nielson
That guy's going to win a Nobel Prize. I think he's going to win a Nobel Prize. That's my prediction.