
Sponsored by the 4-Step Guide to Delivering Extraordinary Software Demos that Win Deals. Click here — and because we had such a good response, we have opened it up and made the eBook, audiobook, and online course more accessible by offering it all for only $5.


Sponsored by our friends at Veeam Software! Make sure to click here and get the latest and greatest data protection platform for everything from containers to your cloud!


Want to ensure your privacy is protected? I sure do. Privacy is a human right and the folks at ExpressVPN make sure of that. Head over to ExpressVPN and sign up today to protect your safety and privacy across any device, anywhere.


Luis Ceze is a computer architect and the co-founder and CEO of OctoML. He does research at the intersection of computer systems architecture, programming languages, machine learning, and biology.

OctoML is doing some very cool things around democratizing ML and transforming how ML models are optimized and made secure for deployment. Luis shares a lot of great info on the foundations of ML, the ethics of data, and how he builds a team.

Check out OctoML online at https://octoml.ai



TRANSCRIPT

Oh, yeah. Welcome, everybody, to the DiscoPosse podcast. My name is Eric Wright and I'll be your host. And this is a really fun episode — if you're digging machine learning, then look no further.

You’re in for a great conversation. Before we get started, though, I want to make sure I give a huge shout out to all the great supporters and fans and friends of the show. This episode is brought to you by our favorite and good friends over at Veeam software.

This is everything you need for your data protection needs. I trust this company with my data, my identity. My goodness — whether it's in the cloud, whether it's on premises, whether it's cloud native and the new stuff they're doing with their recent purchase of a company called Kasten. Really cool stuff.

Whether you want to automate and orchestrate the entire kit from end to end for full business continuity and disaster recovery with Veeam Availability Orchestrator, you name it — Veeam's got all sorts of goodness for you. If you want to check it out, you can easily go to vee.am/discoposse, and also let us know that you came from ol' DiscoPosse's podcast.

It's kind of cool — the Veeam family, it's hard to say, the Veeam family, but they are extremely cool in that they've been great supporters. I love the platform. I love the team. And in fact, if you go back in our archives, you can hear Danny Allan, who's a fellow Canadian, a fellow cyclist, and also a really fantastic human, who's the CTO over at Veeam. I was really lucky to have Danny on. But at any rate, go check it out.

Please do.

I definitely believe in their platform and their product. Go to vee.am/discoposse.

This is also brought to you by the four step guide to delivering extraordinary software demos that win deals.

This is something that I decided to build myself, because what I found is that I've continuously been involved in sales processes, and in listening to folks that are struggling with being able to connect with people — whether it's in product marketing, product management, sales, or technical sales. So what I did was take all the lessons that I've captured myself and from my peers and compress them into a very easy to consume, concise book. It's called The Four Step Guide to Delivering Extraordinary Software Demos.

It teaches you how to demo, how to listen, how to connect, how to engage, and ultimately how to get to problem-solving in the way you show your platform. Super cool.

Plus there's an audiobook, a course, and I do regular AMAs for folks that buy the package. So go to velocityclosing.com and you can download the whole kit right out of the gate today.

With that, we're going to jump right into the episode. This is Luis Ceze, a fantastic person who I was so happy to have on. He's the CEO and co-founder of OctoML.

Not only have they got the really cool thing they call the Octomizer, which is a fantastic name for a product, but they're doing some really neat stuff around democratizing ML and making highly performant and secure machine learning models.

Really, really cool. So check it out. Plus, Luis talks a lot about building the business, the educational impact of where technology is going, and so much cool stuff.

Anyways, I hope you enjoy the show as much as I did. Hi, this is Luis. I am a co-founder and CEO at OctoML and a professor of computer science at the University of Washington, and you're listening to the DiscoPosse podcast.

So this is fantastic. I do want to very quickly introduce you, as your company is doing some really neat stuff.

And of course, I say this as a precursor to what you're going to tell us, for the people that are listening. We hear ML/AI and it becomes this wash, where, you know, as they always say, no one believes what's actually going on.

I've dug in and I'm excited about what you and the team are doing. So I wanted to lay this out.

You really are solving a very genuine and interesting challenge. And I can't wait to figure out how you got to solve these problems.

So anyways, take it away.

Let's introduce you to the audience and talk about where you're from and how you got to begin the OctoML story.

That sounds good. Yeah. So I have a technical background. Most of my, you know, I guess intellectually active life has been in computer architecture, programming languages, and compilers. I did my PhD at the University of Illinois, and I spent time at IBM Research before then, working on large-scale supercomputers like Blue Gene, primarily applied to life sciences problems. And then the University of Washington, where I've been for almost 14 years now — it's kind of crazy to think about my research career.

There, the focus has been on what we call the intersection of new applications, new kinds of hardware, and everything in between — you know, compilers, programming languages, and so on. About five or six years ago, we started looking at the problem — well, the opportunity, I would say — based on the observation that machine learning was getting popular super fast. Right? Because, you know, machine learning allows us to solve interesting problems for things that we don't know how to write direct code for.

Like, for example, if you think about how you'd write an algorithm to find cats in a photograph — it's really hard to write the direct code for that. But machine learning allows us to infer a program, to learn a model, from data and examples. Right? This approach has proven to be really powerful, and machine learning is permeating every single application we use today. So, anyway, six years or so ago, we started thinking: well, there's a variety of machine learning models that people care about — for computer vision, natural language processing, time series predictions, and so on — and a variety of hardware targets that you want to run these models on.

This includes CPUs, GPUs, and then accelerators and FPGAs and DSPs, and all sorts of compute engines that have been growing really fast. So you have this interesting cross product.

If you have lots of models and lots of hardware, how do you actually get them to run well where you need them to run? That includes the cloud, the edge, you know, implantable devices, smart cameras, all of these things. Right. And one thing that's interesting to note in this context, about machine learning models as computer programs, is that they're very sensitive to performance, and they're very compute hungry, memory hungry, bandwidth hungry.

So they need lots of data, they need lots of compute; therefore, making them perform the way you want them to perform — to run fast enough and/or use a reasonable amount of energy when being executed — requires quite a bit of performance tuning. Right. That means that if you look at how machine learning models are deployed today, they're highly dependent on hardware-vendor-specific software stacks — like Nvidia, with their GPUs, has cuDNN and the CUDA stack.

You know, ARM has the Compute Library, Intel has theirs, and all of the hardware vendors have their own software stacks in general. This is also not ideal, because it means that somebody who wants to deploy machine learning models needs to understand ahead of time where they're going to deploy and how, and use custom tools that typically aren't super easy to use. And there might not even be a software stack for the hardware that you care about

that works well. Right. So, long story short, the research vision that we started with six years ago was to try and create a common layer that maps from the high-level frameworks that people — think data scientists — use, like TensorFlow, PyTorch, and so on, or NumPy, and bridges that to hardware targets in an automatic way. So you don't have to worry about how you're going to deploy: create this clean, open, uniform layer that automates the process of getting models from data scientists to production.
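To make that concrete, here is a minimal sketch, in Python, of what that common layer looks like in practice with Apache TVM: import a model once, compile it for a chosen hardware target. The file name, input shape, and target string are placeholder assumptions, not anything Luis specifies here.

```python
# Sketch: compile one model for one hardware target via TVM's uniform layer.
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# A trained model from any framework, exported to ONNX (hypothetical file).
onnx_model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 3, 224, 224)})

# The target string is the only thing that changes across hardware:
# "llvm" for CPU here; it could be "cuda", an ARM triple, etc.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)  # hardware-specific executable

dev = tvm.cpu(0)
runtime = graph_executor.GraphModule(lib["default"](dev))  # ready to feed inputs and run
```

The same few lines work whether the target is an x86 server, an ARM board, or a GPU; only the target string changes — that is the uniform layer being described.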

Well, this seems like a good idea, and people would agree — but there are a lot of challenges there, right? Because of the way machine learning models are deployed today: they rely on hand-tuned, low-level optimizations of code. That really means understanding the model, understanding the hardware, and tuning the low-level code to make sure you make the most out of that hardware. Right. So that takes a tremendous amount of work.

It's not sustainable. So the research question that we started exploring was: can we use machine learning to optimize that process? Essentially, use machine learning to make machine learning faster on your chosen hardware. And that was how the TVM — Tensor Virtual Machine — project was born. We started this project five, six years ago, and fast forward to today: it's a top-level Apache Software Foundation project called Apache TVM, and it has been adopted by all of the major players in ML, including Amazon, Microsoft, Facebook, and so on.

It's supported by all of the major hardware vendors; it is actually the de facto open standard for deploying models on a bunch of hardware targets. And it is open source, right? So AMD, for example, adopted TVM as their official software stack — AMD is building, with OctoML, you know, support for AMD CPUs and GPUs on Apache TVM — and then other companies like Xilinx, which makes FPGAs, and a bunch of other nascent hardware companies, are using Apache TVM as their preferred software stack.

And just one final sentence — I know this has been going on, but, you know, there's no rapid way through this.

This is a super important understanding of how we got to even the start line, which is even before where we are today.

Right, right. Yeah. So, anyways, TVM has been adopted both by end users and hardware vendors. And the way to think about TVM in one sentence, essentially: it's this compiler and runtime system that forms a common layer across all sorts of hardware. Think of it as a 21st-century operating system for machine learning models that runs on all different hardware. Right. So that's Apache TVM — it has almost 500 contributors from all over the world, and it has been adopted, as I said, by all the major players in the industry.

And we formed OctoML about a year and a half ago to continue investing in TVM. All of the core people around Apache TVM are co-founders of the company — three of them are PhDs from the University of Washington — and another co-founder, Jason Knight, was head of software products at Intel and left Intel at the time to join the company. So OctoML today is about 40 people. Our mission is to build this machine learning acceleration platform to enable anyone, in a very automatic way, to get their models deployed on the hardware that they want, without having to fiddle with different software stacks or tune low-level code to deploy a model.

Really, we are about automation and democratizing access to efficient machine learning, because the tools today require quite a bit of specialized skill. — So, and I think that's where we really want to begin: abstractions are generally built because they allow for, obviously, diversity of platforms above and below the line.

Wherever that abstraction layer is, the appropriate abstraction is a fantastic place where a platform begins.

Then even further up is how you organize a commercial entity that can create additional value.

Even beyond that, it's really amazing, because, you know, especially in a niche area like this, you look at the folks that are contributing to TVM, who are obviously well down the road — you know, people are thinking that ML is coming, when, like, it's already here anyway.

But beyond the abstraction, now there's that optimization — you know, we'll talk about the OctoML approach to it. Maybe give a sense of what a non-optimized machine learning model does relative to an optimized one, because I think it's hard for people that don't get it to understand this.

Yeah, great question.

I love that question, Eric. So, the un-optimized version typically means that you get a machine learning model and you run it through, say, TensorFlow's default deployment path, or PyTorch, and you choose a CPU or GPU. And most of the time, what you get is not deployment-ready, because it's not fast enough, or it uses too much memory, or it doesn't make the most of the hardware, and so on — or you don't get the throughput that you want.

Or, if you're deploying in the cloud, it's too expensive because it uses a lot of compute. Now, if you run that through TVM, what it will do is take that model and generate an executable that's tuned for the specific hardware target that you're going to deploy it onto. It essentially generates custom code — using its machine learning magic, which we can get into if you want — to find the best way of compiling your model onto your hardware target, to make the most out of your hardware resources.

And the performance gains can be anywhere from 2 or 3x all the way to 30, 40x. Right. If you look at our conference, for example — we've held one each December for the past three years — there are cases of folks showing up to 85x, even 100x, better performance. And anything above 10x isn't a nice-to-have, it's an enabler: if you make something 5, 10x better, you enable something that wasn't possible before, because it was just too slow or too costly.

And that's the level of performance gain that we're talking about here. This can translate into enabling models that before were too slow to deploy — now you can deploy them. It reduces costs in the cloud, because 10x faster means 10x cheaper to run in the cloud, and so on.

This also helps to answer a myth — the belief that there is one hardware-specific machine learning unit that fits everything.

Well, there are obviously hardware specific iterations.

Each model, each data set — based on scale, size, use — you know, there are a lot of factors, such that even the most perfectly designed physical unit with a broad set of uses, whatever the right combination of things is, may not be appropriate for every model.

Right.

So this goes beyond — this is not like there's a really good gaming laptop and a really good, you know, whatever. With machine learning, it doesn't take long, once you get to scale, before even a dedicated machine learning node is not optimized for your particular model.

Absolutely. Another way of saying that, too: even if you have fantastic hardware, you know, and numerous resources, if you don't have good software to make use of it, it's just no good for you. The thing is, it takes quite a bit of work to massage your model to make the most out of a hardware target. Right. And it doesn't mean that all hardware targets will be appropriate for all models; but by and large, it depends on fairly low-level, sophisticated engineering to get there.

So that's what we're all about: automating that, at OctoML.

You have me curious, and I'm going to ask you to go down the rabbit hole right away. How do you possibly, at a code level, through software, tune models on the fly based on the hardware?

I'm lighting up at the idea of, like, getting technical here, because I would love for folks to really get a sense of —

Yeah — where those challenges are being solved.

Great question. Absolutely. So let me just start with "once upon a time" — no, I'm not going to be that long. But, I mean, fundamentally, machine learning models, by and large, are a sequence of linear algebra operations. Think of it as multiplying multidimensional data structures — think matrix-vector multiplication, matrix-matrix multiplication — but sometimes with more than two dimensions: imagine a three-dimensional matrix, called a tensor.

Right. So in general — a tensor is a generalization of that — it's a lot of linear algebra operations. Right. Now, these are very performance sensitive, because they depend on how you lay out the data structures in memory, which affects your memory and cache behavior, and on which instructions you're going to use in your processor. Because different processors differ: they have different instructions, some more appropriate than others. Like, instead of doing a scalar multiply, where you multiply one number by another single number, you could use a vector instruction, which applies to whole vectors at a time.

And there are so many ways — there are literally millions, potentially billions, of ways of compiling the same program onto the same hardware. But among the billions of possibilities, some are vastly faster than others. So what you have to do is search, right? Given a program — that's your ML model — and given the hardware target, and the billions of ways in which you can compile it, how do you pick the fastest one? OK, so now, to answer your question directly: how do we use ML for that search?
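As a toy illustration of that search space, here are two of the billions of possible compilations of one matrix multiply, written with TVM's tensor-expression API. The shapes and tile sizes are arbitrary choices for the sketch:

```python
# Sketch: one computation, two very different schedules (compilations).
import tvm
from tvm import te

N = 1024
A = te.placeholder((N, N), name="A")
B = te.placeholder((N, N), name="B")
k = te.reduce_axis((0, N), name="k")
C = te.compute((N, N), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")

# Schedule 1: naive scalar loops, exactly as written above.
s1 = te.create_schedule(C.op)

# Schedule 2: same math, different code -- tiled for cache reuse, then
# vectorized on the innermost axis.
s2 = te.create_schedule(C.op)
io, jo, ii, ji = s2[C].tile(C.op.axis[0], C.op.axis[1], x_factor=32, y_factor=32)
s2[C].vectorize(ji)

# Both lower to valid executables for the same CPU target; their speeds
# can differ wildly.
f1 = tvm.build(s1, [A, B, C], target="llvm")
f2 = tvm.build(s2, [A, B, C], target="llvm")
```

Both functions compute the same result on the same hardware; the tiled, vectorized one can be dramatically faster. Picking factors like 32 well — for every operator, on every device — is exactly the search problem he's describing.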

Well, the brute-force way — I'd say the less smart way of doing this — would be to try all of the billions of possibilities. But the problem there is that you don't have time. Imagine making a variant of the code, compiling it, running it — even if that takes just a second each, you're talking about centuries of compute, centuries' worth of time, to find the best program. Where ML comes into play is that, as part of how TVM operates, when you bring up a new hardware target, it runs a bunch of little experiments that build a machine learning model of how the hardware itself behaves.

This machine learning model is then used to do a very fast search among all the possibilities in which you could compile your model onto the target: among all of those possibilities, which one is likely to be the fastest? And that can be vastly faster — think a hundred million times faster — than trying each one of them. So now you have this ability to navigate the space of configurations, optimize the model, and choose the best one.

OK, and a machine learning model is a combination of these operations. So we apply this successively to every layer of a model, then we see how the layers compose and run that through the prediction. And then, again, we validate — are we doing a good job? — by doing the full computation, running performance tests, and comparing: are we doing better? Yes. And we keep the search going. Does that give you a general idea of how we do it?
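Here is a schematic of that ML-guided loop — the idea behind AutoTVM-style tuning, not OctoML's actual code. The `cost_model` and `measure_on_hardware` objects are hypothetical stand-ins for the learned hardware model and the real benchmark harness:

```python
# Sketch: cost-model-guided schedule search instead of brute force.
import random

def tune(candidates, measure_on_hardware, cost_model, rounds=10, batch=8):
    history = []                                    # (candidate, measured_time) pairs
    for _ in range(rounds):
        # Ask the learned model to rank a large sample cheaply (no real runs)...
        sample = random.sample(candidates, min(1000, len(candidates)))
        ranked = sorted(sample, key=cost_model.predict)
        # ...then pay the real cost of measurement only for the top few.
        for cand in ranked[:batch]:
            history.append((cand, measure_on_hardware(cand)))
        cost_model.fit(history)                     # the model improves as data accrues
    return min(history, key=lambda pair: pair[1])   # best schedule found so far
```

The point is the asymmetry: predictions cost microseconds, real measurements cost seconds, so a decent cost model lets you explore a billion-point space while only ever running a few hundred candidates.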

It does.

And this is the interesting challenge that we have with any long-running process. Just think of traditional batch computing, where folks would run a massive, long batch job.

You know, folks remember the days of the overnight jobs, right? They'd have some four-hour batch that would run, and you're five hours in and something's wrong. And there's the difficulty of assessing:

if I stop now, correct and optimize the code, and then rerun, is it more worthwhile than just letting it run out, even though it's going to take twice as long as I expected? That's a trade-off I think a lot of folks would remember, even on a five-minute script — if it takes five minutes and it should take 30 seconds, you know what I mean.

But at the scale you're talking about — number one, the initial problem, where we're going to run a model against a massive data set — it's going to take...

Potentially hours, days, whatever — it's going to be significant. But then to run that scenario repeatedly, before triggering it, to effectively find the most optimal way to host it — that's the hard part, right?

Yes — you could think of it as running parallel simulation modeling.

Anybody would think, oh, of course you're going to use machine learning — but, well, you've now got an inception problem, right? In effect, you have to do something that's incredibly complex to solve an even more complex problem. It seems untenable for people to imagine that this could be done. — And yet, yeah, it is how we do it: we use machine learning to make machine learning faster.

So now, actually, let me go ahead and ask the question for you: what do we offer as a company? What is the commercial story here? Right. So TVM is open source, right? Anyone can just go to the Apache TVM GitHub repo, download the code, and run it. But TVM takes, you know, some effort to set up, because you have to set up hardware targets, and then you have to collect these machine learning models that predict how the hardware behaves.

And, you know, it is a sophisticated tool that works really well, but it does require quite a bit of lifting to get going in the context of an end user. Right. Well, what we did at OctoML is build a platform called the Octomizer, which is a fully hosted software-as-a-service offering on top of TVM that automates the whole thing, and it has a really nice graphical user interface.

You can upload models, choose your hardware targets, click the magic button — Octomize — and then a few hours, maybe a day later, you get an executable deeply optimized for your hardware target of choice. The way this is different from the experience of using TVM directly, as I said: it's much easier. You don't have to install anything — there's no code required. You literally upload the model, choose the target, and download the result, or you can use an API.

And also, the Octomizer comes with a preset set of hardware targets, and the machine learning cost models it's built on are ready to go, so you don't have to collect them yourself — you can be productive from day one using the Octomizer.
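As a sketch of the workflow Luis describes — upload, pick a target, wait, download or call an API — here is what it could look like in code. Every name below is invented for illustration; the real Octomizer SDK may differ:

```python
# Hypothetical sketch of the hosted upload/optimize/download flow.
from octomizer_sketch import Client   # invented client library, for illustration only

client = Client(api_key="...")                      # placeholder credentials
model = client.upload_model("resnet50.onnx")        # 1. upload the trained model
job = model.octomize(target="aws/c5.4xlarge")       # 2. pick a pre-benchmarked target
job.wait()                                          #    tuning runs server-side
job.download_package("resnet50_optimized.tar.gz")   # 3. fetch the deployable executable
```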

This is what I think is incredible. I spoke with somebody very recently, and we're just — I'm enthralled with this idea of where we are today. And, you know, now in 2021, as we record this, it's like:

the accessibility of both models and training data. Like, if you wanted to try to get into the business of machine learning, even just to dabble with it — to get the hardware, to have some data — the 101 layer of machine learning was very low level, very simplified, and there was no access to go beyond it and really test things. Now, with what you've got with the Octomizer — like I said, you're shipping stuff that's there and ready to go — you don't even need to sweat those first steps, which used to be incredibly challenging.

And this is what I want to impress upon people: there's effectively no reason why you wouldn't just get started, because it's been done for you and it's accessible to you now. It's a wondrous time where we can do these things. Because for all the things people worry about — you know, one: "I don't understand complex mathematics, so how can I do machine learning?" Well, it's not necessarily about that.

Not exactly — it's about abstracting those away. Right. And secondarily: how do I learn to trust what machine learning does? The only way to do it is to get in and see it. It's weird, because machine learning has this really odd thing.

Even when we talk about AI, sometimes — I describe it as like the scene from The Matrix, when the Oracle says to Neo, "Don't worry about the vase." And he says, "What vase?" and turns around and knocks a vase off the table. And she says, "What's really going to bake your noodle is: would you actually have done that if I hadn't told you about it?"

And when we explain machine learning, and what you get — like you said, how do you find a picture of a cat? How do you tell the difference between a blueberry muffin and a Pomeranian? There are all of these things where people don't trust the outcome because they saw a meme about it one day. But you can dive in. You can test it out. You can put data through it. You can see outputs.

It's there today because of what you and the team, and what the community, are doing around this stuff, which is pretty amazing, right?

Yeah. And I want to pull on that thread for just a minute — how we trust machine learning models. You know, there's a whole subfield of machine learning about explainable AI, or explainable machine learning models, to get people to trust them more. But I would even start by asking: how do we trust software? Let's forget about machine learning; let's just think about software. The way we trust it is by saying: we've put this much time into testing it, and you have some confidence that it's likely to work in the scenarios your users care about.

We do not do formal verification of all software today — you don't formally verify Excel, or Oracle, or Microsoft's products. Basically, you test it extensively, and then you have confidence that it behaves the way you expect it to behave, and then you put a checkmark and you ship it. Machine learning models are that way, too.

You have a training set, you have a test set; you train it, you test it, and there are all sorts of ways of getting more serious about the testing. But, you know, it's going to work within the set of inputs that it was, you know, certified for, tested for. It works well, right? So, yes — then you could go into this huge, fun discussion that we could have at some point —

probably not now — on how you explain to humans what it is in a machine learning model that would make them trust it more.
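His testing analogy maps directly onto standard practice. Here is a miniature version, using scikit-learn for brevity — the dataset and model choices are just for illustration:

```python
# Sketch: confidence via held-out evaluation, not formal proof.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# The held-out score is the "checkmark": evidence the model behaves as
# expected on inputs like those it was tested on -- not a guarantee beyond them.
print(model.score(X_test, y_test))
```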

Right. And, yeah, that might involve compromising performance. You might want to choose a model that's not quite as fast, but at least when you look at it internally, you can explain to humans how it works. That might be useful for, say, medical diagnostics, where you want a doctor to see, like, "this generally looks right — here's the decision tree." Right. So we can help with those cases, too, by integrating the Octomizer: if you choose a model that's not as fast just because it's potentially more trustworthy, we can help you recoup performance by giving you a highly optimized version of it.

And this is where, you know, for the people that realize the difficulty they're facing — like, how do we get better at machine learning? — you brought up the most perfect point. We just broadly trust software, as if it's linear in its ability to scale. We were like, oh, it can almost run as fast as the machine. We kind of grew up with it.

So we don't necessarily trust it, but we don't distrust it. With machine learning, and quantum, and the idea of being able to scale far beyond human capability, there's this really odd case where the distrust is greater than the trust. And, I mean, effectively a lot of this comes down to the core fundamentals of behavioral psychology — the way we place bets, the way we think about outcomes versus efforts.

It is really funny — or peculiar, I should say — to see how people behave. But yet, when they see the outcomes, like you said, they'll be like: oh, OK, now that's fine.

It makes sense. But when you go one step further — especially with the folks that are going to be, you know, customers, the folks that you're talking to — they're further along: they know the risks, and the benefits outweigh them. — Yes, the benefits outweigh the risks. Right, exactly. And also, I mean, trust is the kind of property, the kind of feeling, that takes a while to build but is very easy to lose.

Right. So it takes a lot of work to build trust. It means investing, learning — you live with it for a while and it works really well. But then you make a small change — because models evolve fast — and that one breaks, and it makes you lose some trust in it. But, you know, that's just part of how it is.

And I feel like, given the strides made in machine learning research in getting models to be more trustworthy and more explainable — together with all of the machine learning systems work, which is what we focus on, making these models perform and run well in the real world — very, very quickly we're going to trust them just as much as we trust software. And for things that are really transformational to our lives — self-driving cars, automated diagnostics, using AI to design drugs and therapies — the progress, and the impact it has on human life, is so far beyond the risks it can cause, in my opinion.

You know, this may be philosophical, but I do think that in this case, the benefits far outweigh the risks.

So I'd be curious — especially because you're obviously very close to it. You're doing this in academia as well as in business, so you're really tackling it on two streams, which is always amazing.

And I think that's where a lot of this stuff comes from. In fact, a lot of amazing technology startups have been founded out of academia and made their way into commercial business — and then those folks maybe get into venture capital. It's neat to see this progression.

But, you know, there are very few people that most people know.

And I'll waver on the descriptors of "most" or "many" — but who could they look to, to get that first understanding of the impact and importance of machine learning on society? Obviously, one that I know off the top of my head, of course, is Cassie Kozyrkov.

She's with Google, and a fantastic person who truly does a lot to share the human side of the value of machine learning — and, you know, it's neat to see those stories. So I'm curious, Luis: you and your peer group, yourself included — how do you get people involved and interested in the potential that we have as a society because of machine learning?

Yeah, great, great question. The way I think we get people interested and excited is just by continuing to show the kinds of problems that we can solve, the kinds of new applications that we can build with machine learning. Right. Let me take a recent example: seeing all of the progress going on with these large language models — like GPT-3, for example. I mean, the ability to summarize text is fantastic, and generating new text is great to help you draft.

These technologies just seem like magic — they work really, really well. And I think that has the potential to amplify our ability to understand large bodies of text. Right.

So, for example, some of my colleagues and friends at AI2 here in Seattle have been working on tools that help one understand whole bodies of knowledge in a specific field. They've done this for COVID recently, for example. I think these are just really amazing applications that can capture the imagination and have a direct impact right now, and that really gets people more excited about it. I'm not sure if that's what you're asking. — No, I think it's all about showing. Great.

So that's one of them. The other one: I know that we're still far away from fully autonomous vehicles, but just seeing the kinds of things in ever more accessible electric vehicles — from big players like, for example, Tesla. A Model 3 can do real-time computer vision and build a 3D model of the world around it: you see the cars and the people crossing the streets, and this is happening all the time.

It's like: oh, this is the model of the world the car actually drives with. As people get exposed to this, more and more they realize how exciting it is, and think about the applications it enables.

And then a final one — it's more academic, but it's becoming more top of mind today, and I find it particularly exciting; it happens to be related to one of my personal intellectual passions, you know, molecular biology and life sciences. I think that nature is a boundless source of two things: first, mechanisms and molecules that we can go and use to do useful things; and second, all sorts of interesting problems where you can use AI and ML to understand, you know, how nature works.

And it has tremendous impact on understanding life, on understanding disease, understanding new therapies, and so on. And I think it's fair to say that the strides we've made in understanding, you know, gene regulatory networks and a lot of other life sciences processes would not have been possible without machine learning.

Right. Yeah, and this has an incredible effect today — like, you know, how we can design a vaccine super fast, how we can actually test it super fast, how we can do DNA sequencing of different people and understand how it correlates with the things that you observe. I mean, this all boils down to being enabled by computational processes, largely based on machine learning.

And that's one of the most — you know, I don't have the numbers handy, but I know it's a good example to use. As far as the economies of time and scale that we've achieved: look at sequencing DNA — the physical effort required to do so, the hardware, the time, and the cost, 20 years ago, or even 10 — it doesn't take long to go back and see.

It was thousands of dollars, and, you know, an enormous amount of time required to do so — versus now, it's pennies on the dollar, in effect, relative to what the cost was not too many years ago.

Absolutely, yeah. And I should mention, one of the research areas in which I'm still active is essentially using DNA for data storage — which involves writing DNA and reading DNA, sequencing — and this relies on the progress of DNA technology, so I watch these trends very closely. And just to put numbers on it: the first human genome sequence — a huge landmark a couple of decades ago — actually cost over a billion dollars.

And today you can do a full genome sequencing for, you know, under a thousand dollars. You're talking about literally a million-fold decrease — a million-x decrease — in cost. And this is all, by the way, enabled not only by better understanding — of course, there's the genius idea of next-generation sequencing — but from there to today, a lot of it is really advances in computing infrastructure, because it's very compute intensive, and advances in imaging technologies and optics.

Right. And advances in machine learning — decoding very faint signals to read the letters that are in the DNA sequences. Yeah — it all rides on the back of Moore's Law plus, you know, computing. That's right.

Well, it's interesting to see, as we come through — there's a beautiful sort of readiness that's arrived across all of these criteria. Right.

Like you said: computational power, the underlying scientific understanding — all of these things move, effectively, like horses in a race beside each other.

And when one crosses the line, the rest cross very shortly after, because one effectively carries the others. And there is this merger of things that has to occur to get, from there, an exponential increase in capabilities.

And we've seen so much recently — and we as humans have far overused the phrase "exponential."

Right? People like us do. And — literally — I talked with Jo Bhakdi, the founder of a company called Quantgene.

And we talked a lot about that. That's their whole thing: they're using quantum computing and genome sequencing to find better ways to detect every kind of cancer. He says that 10, 20 years ago, you would have a team of scientists and an entire research area focused solely on mapping one type of cancer. And now, because of the ability in quantum computing — the ability we have in hardware, software, people, and understanding — they can seek every possible type of cancer collectively through the research they're doing.

And this is really first principles — this is exponential growth in what we can do as an outcome, because of the technology that we've enabled.

So what you've done — what you and the industry and your peer group and all of us here are doing — is using first principles to set the stage for an unlimited amount of new first-principles thinking that's going to do fantastic things. — Yeah, it's a great point. And to tie this conversation back to what OctoML does: there are a lot of problems — opportunities — today, specifically in life sciences, for example. If you're doing deep learning over genomic data, you know, without significant optimization it would be beyond the reach of most — we're talking about problems that could literally take millions of dollars' worth of compute cycles in cloud services.

If you make that 50x faster, a problem that costs millions of dollars comes down to tens of thousands of dollars, which is something that now becomes feasible. And this is also something that we're very excited about: not only do we make the applications of today more accessible, faster, and more responsive, but the degree of optimization that we offer could enable things that would be beyond the reach of many today, in application areas that are more custom — like life sciences, which I think is one great example.

So, yeah — and I think the fantastic opportunity that you've got now, for your current and future customers, is that it's no longer about baseline achievement; we can immediately begin to think about optimization. Versus before, that just wasn't accessible — it was just a matter of "can we do it?" And now it's "can we do this, and are we doing it in the most effective and optimized manner?"

Right. Yeah — which is often necessary to actually make it work. Let me give you an example, without disclosing anything sensitive. We've been working with customers that deploy ML at scale, on both the edge and the cloud. On the edge side, think of a machine learning model that helps you understand the scene so you can replace objects in real time — say, for video chat, for example.

And then you have that app running on all sorts of devices — you know, different types of laptops, PCs, tablets, phones, and so on. Once you have a model like that, what you have to do today to deploy it is, for every single device, go and optimize and make sure it runs fast enough on this one, and on this one, and on that one. That is, you know, just really insurmountable without automating all of it — which is what we do with the Octomizer — and that automation is something that enables the evolution of these applications.

And on the cloud side, if you're doing things like computer vision over large collections of images or video at large scale, this could cost an incredible amount of money if you don't optimize. Right? It means that until you hit a certain cost target, you can't do it — even for companies that have deep pockets. That's how significant what we're talking about here is.

And it becomes this interesting conundrum: in order to test whether your model is effective, how long it's going to take to run, and what the optimization opportunities may be, you run it against your data set. But if you run it repeatedly against the same data set, it actually goes counter to the value — you're not going to get the expected results, and it may skew some results, if you send exactly the same data through exactly the same model over and over again.

Yeah, yeah, yeah.

So that's why, effectively, people were probably going to throw up their hands and say: hey, at least we know it works; we don't know that it could run faster. There was sort of an unfortunate acceptance — up until, you know, what you're bringing to the market — that this was just the cost of doing business in ML. Right? And that doesn't need to be the case anymore, does it?

Exactly — it doesn't need to be the case. And we want it not to be the case for as many users as we possibly can. That's why we strive to be really easy to use, and to raise the level of abstraction much higher. So instead of having to pair a super talented software engineer with a data scientist to go and do these things, you're going to be able to have the data scientists themselves just go and use a tool that subsumes the need to work closely with an engineering team to deploy.

Right. So, yeah.

Yeah. Well, yes — and this is the thing: we can now actually get positive business and societal outcomes instead of just technological outcomes.

One of my favorite things — I remember Peter Thiel saying we were trying to get Star Trek, but all we got was the Star Trek computer.

We didn't get the tricorder. We didn't get the transporter. We didn't get the other things. All we got was the computer — and, in fact, that's a dangerous place to stop. You know, we need to do things with these things. And this is why we are now at the point where we can really do amazing things.

Absolutely. And especially if you are a scientist. Right.

So I'm actually curious, Luis: what is a data scientist? Because I've started to get different pictures of what that person is today. If I'm an organization looking to hire a data scientist, what does that person's profile look like?

I’m curious in your experiences, given that you’re obviously very close to the field.

Yeah, no, that's a great question. And there are just so many possibilities here. I'd say it really depends on what kind of problem you're trying to solve. Data scientists tend to specialize in different kinds of data, and in different kinds of models. I would say you should approach it by seeing what kind of data you have and what problem you're trying to solve, and go after that.

You don't want a data scientist with zero domain experience, because if you have some domain experience, you tend to get, you know, better, more predictive models and a lot better analysis out of the data that you have. What I think you should actually focus on is people that understand the problem domain and understand, you know, the core tools in machine learning and data analytics and statistics, right, to go and work with your data. Now, to go full circle —

what I think is harder is trying to find a data scientist that can do that and also do all of the complicated, ugly software tricks they'd have to do to actually get the model — get the results — to be usable as an end product. It's almost impossible to find somebody like that. This is why, early on in the life of the company, we did some interviews to see what it is that we should be going after.

The number one pain point that we heard from folks that were running these things is: well, you know, we have great data scientists — and we've been doing better, because the tools for data science are getting better, and there are more of them — but now we have to go pair them with very rare software engineering skills. And that's what breaks the whole magic chain, because you have the data and the data scientists, but they just don't have the rest of the resources to go and make their output useful.

That's where we started. Like: let's zero in on automating the process between what comes out of the hands of data scientists and what should be the deployable module — take that gap and cover it with, you know, very sophisticated automation that uses machine learning. That's really what the Octomizer does. Right.

So, first of all: my favorite name on Earth for a platform — the Octomizer.

Sounds cool. I'm glad you like it — we love it. Yeah, the Octomizer — every time I say it, it makes me smile. I've been saying it for over a year now, and I still love it. Thank you for that, Eric. So, I hope I answered your question. But yeah — on hiring a data scientist: I'm glad the tools are getting better, but it's just so dependent on what kind of problem you want to solve.

Yeah, it's really about people who understand the problem domain.

So it'll be interesting to see, because I think what we face right now — as a society, as businesses, as governments — is the sense that you've got to wait for the next —

we have to wait for the next batch of students to come up through the education system with access to the tools, so you have an eight-to-ten-year cycle before people are actually able to do this. But in that amount of time, things have so fundamentally changed that now we don't have to wait for that. We can train people in place. We can up-level people where they're at — through software, through technology, through capabilities.

Yeah, it's an interesting point — I'm not sure if that's where you were going, and this is a complete tangent, but I think it's fascinating to think about the role of AI and machine learning in actually educating humans. Right. There are ways of using ML to generate problem sets for kids to learn from, ways of evaluating the kids — and using that to actually train engineers, too. Right.

So the potential for this stuff is just — it is wondrous.

You know, there's obviously — and I've talked with a few folks about this — some of the challenges around the ethics and the biases.

And definitely, I think it's super important — extremely important — and a tough one.

I'll ask you this — let me lean on your academia side, because, with your professor hat on, that's probably the area where it gets dealt with, or questioned, the most, isn't it? In academia, we're studying, you know, the potential; in business, it's more like: how do you broadly get this out into the world?

But we are finding — through, you know, think tanks, and through universities and academia — that we are now at the study phase, and will continue to be, for a long time, in the study phase of:

how do we make sure that we are using these tools and this data as best as possible? You know, it's a real conundrum: if it's a representation of society, how much do we steer it in order to get what we hope to get out of it? Versus: if a machine learning model gives you an output, there's a reason it came up with that output — we may trust it, or understand it, or maybe not like it — but it's more about looking at how it got there than standing at the output phase and trying to steer it toward a belief or an opinion.

Yeah, well, this is a great question — super deep. And again, it could be a topic for a long conversation. But, no, no —

I'm happy to offer some thoughts here, because I do have colleagues and friends who think about this for a good chunk of their waking hours. First of all, absolutely, we have to be mindful of biases in machine learning, especially because machine learning is dependent on training data. We need to make sure that the data is representative of a broad set of uses — that it's actually equitable across all of the stakeholders in how the model is deployed — and there are aspects of the model architecture and training that should be developed with that in the loop.

And I think that comes fundamentally from having a diverse team. Right? If you have a diverse team of people working on this — a diverse engineering team, or a diverse team of data scientists — that team will naturally point out deficiencies in the training data and in the architecture of the models. That's the people aspect here: if we talk about machines doing more and more things, you have people designing those machines and engines, and those people themselves need to be diverse.

This is why I'm a firm believer in extremely diverse teams. I've done that in the academic teams that I've built, and, you know, I pay a lot of attention to that at OctoML as well. That's one thing.

And then the second thing is education. Right. We have to keep bringing up these aspects of bias and make sure that what we build works for all the stakeholders — not just in machine learning, by the way, but in any engineering discipline. There's a friend of mine who once gave a talk — I ought to put his name here — about bias in machine learning, and he started with a great example. You might have heard this story before, about one of the very first photographic films, from Kodak. Essentially, we're talking about chemical engineering: you design the chemistry, the way they designed the photosensitive material.

They realized that the way they were judging whether it was good or not was by, you know, checking it against a certain set of people with one specific skin color. That meant that if you actually used it with other skin colors, it would just not work at all — it would not look right. And that was the case. So they were biased. It's a great example that bias in how we evaluate whether something is ready or not for all the stakeholders is not unique to machine learning.

It applies to any engineering discipline. This case, I thought, was a really great one, because it's about something on the order of a century old. Right. The tone of the film wasn't good for all skin colors, and it shows, you know, one historical aspect of this. And the same is true of, say, how you design — how this affects building architecture.

A lot of things that humans use should have this thinking behind them, not just machine learning. Right. It's just that machine learning gets that extra attention, because of how its applications are changing our lives super fast today, and also because it's so sensitive to data. Its iteration is so fast that, you know, it can lead to a lot of misfortunes — and, let's say, missed opportunities to make it better early on.

There's so much positive, but, unfortunately, what will happen is the one negative story becomes the focus — quite often, like with anything. It was interesting: I was at an event a couple of years ago — it almost feels strange that it's been that long since we've been at in-person events.

And it was a Canadian insurance company that had created their own call center with AI, machine learning, all the stuff. They basically fed it every single customer service call they had ever taken, and trained it on that. And then, finally, there was the moment where they set it up to answer the next call. It took the call and dealt with the person — and obviously they're listening and monitoring, like, let's see how it behaves — and it gets all the way to the end and solves the person's problem in a perfect, human-sounding voice.

And at the closing of the call, the machine says: "Is there anything else that I can help you with today?"

And they stopped and looked at each other, like —

that's never been in a training manual. There's nothing that tells it to do that. But through all of the different calls, it ascertained that this was the best way. And they said what was even funnier was the response — the person says: "No, thank you. But I just want to thank you, especially because it's so nice to talk to a human for a change."

I love that. Yeah, that's it. But this is it: there's going to be a beautiful, augmented world, where we can leverage machine learning capabilities — natural language processing and all these different things.

There are companies using it to detect, you know, emotional changes in people's voices — using affect to detect changes in behavior that, you know, could flag people at risk of suicide. There are so many incredibly positive things.

And this is why — like I said, we have a friend in common, Amber Roland, who, you know, helps you with your PR and is just a fantastic human.

And she's done a ton of stuff — you know, introduced me to great people as well over time. Every time I talk to her, it's just like: oh yeah, here's the human side — and she introduces me to people that are doing big things. So when she said, "I want you to talk to these folks," I raced to reply and say: I'm so glad that you did.

Yeah.

Now, I know, of course — like you mentioned, it's tough. This is the tough part.

It's hard to have hero customer stories, because with a lot of the customers you have, obviously, there's going to be sensitivity — and you're early, you know, in the birth of the company.

But, you know, what is maybe another quick example of a real human outcome that you've been able to see come to life? — Well, yeah, great. So we have several of them, right? Let me just get into, like, what kind of customers we work with today. We have two categories of customers. One is the ML end users: these are companies that have products that use machine learning, both on the edge and in the cloud — without getting into specifics.

I think of it as enabling much more natural user interfaces. I'd say that this has, you know, a human outcome, because if you enable a new way of using voice-based interfaces on very cheap, low-end devices, you can bring them into more user scenarios, and therefore both add convenience for people that are able and also add that ability for people that, you know, are potentially disabled.

Right. So let's say that is a really nice outcome of just enabling more intelligence at the edge, something that we have enabled customers to do. So those are the machine learning end users, and then we're also enabling hardware vendors that did not have a solid software stack to make their hardware useful for machine learning. But I'd say that in general, the impact on human life of what we do is, again, one, enabling applications that weren't possible before in terms of intelligence at the edge, and also enabling these large-scale compute problems, which could be related to, say, life sciences, you know, that would not be accessible without the level of optimization that we provide.

So we're really proud of what we do, and, you know, the impact on human life: enabling applications and things that wouldn't even have been possible before. So.

Well, the thing that I try to remind people, too, is, you know, when we look at phases of adoption in real life, if we look at the hype lifecycle of so many things... we've talked about edge computing for a long time, and people still sort of struggle with what it means. But the phone you hold in your hand, while it is a computer that's more powerful than the computer that sent the first humans to the moon, is, in effect, an edge device.

Edge devices aren't just Raspberry Pis that are glued to the side of a cell phone tower. They're going to be computing that's distributed, with different physical capabilities, different memory, different storage, different network, different CPUs. And this is where the ability to use decentralization has, again, an exponential effect: rather than collecting the data and streaming it back to central storage for processing, we can process it in place, because the amount of bandwidth needed to stream everything back...

It's untenable. Right. And this is why being able to do processing and machine learning at the edge is an amazing leap in what we need to do.
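To put rough numbers on why streaming everything back is untenable, here's a quick back-of-the-envelope sketch in Python. The fleet size and bitrate below are illustrative assumptions, not figures from the conversation:

# Why shipping raw edge data to a central cloud doesn't scale.
# All figures are illustrative assumptions.
NUM_DEVICES = 10_000        # hypothetical fleet of edge cameras
MBPS_PER_STREAM = 4         # assumed bitrate of one 1080p video stream
SECONDS_PER_DAY = 86_400

total_mbps = NUM_DEVICES * MBPS_PER_STREAM
daily_tb = total_mbps / 8 * SECONDS_PER_DAY / 1e6   # MB/s -> TB/day

print(f"Aggregate uplink needed: {total_mbps / 1000:.0f} Gbps sustained")
print(f"Raw data per day: {daily_tb:.0f} TB")
# ~40 Gbps and ~432 TB/day -- versus a few bytes per event if the
# model runs at the edge and only results are sent back.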

And this is what hammers home the value of what you're doing, because there is no way that the model you're going to run centrally is going to run the same way at the edge, when the hardware is different.

Everything is different. Yeah. I love it. You said it exactly right. And I'll just add one more potentially overly dramatic point here, which is that the speed of light is limited. Light is fast, but you cannot make it faster. You know, the speed of light is a limitation in wireless, in any communication, really. Right.

So that means that some things fundamentally have to be done at a very short physical distance to actually enable low latency, without having to rely on long-range infrastructure and all of the hops that it has to jump through. So being able to compute at the edge has this fundamental enabling property, backed by, you know, hard laws of physics: you must run this locally for latency-sensitive applications. Right.
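As a rough illustration of that hard physical limit, here's a minimal sketch of the speed-of-light floor on round-trip latency, assuming light in fiber travels at roughly two-thirds of c (the distances are arbitrary):

# Physics-only lower bound on round-trip time; real networks add
# routing and queuing delay on top of this.
SPEED_IN_FIBER_KM_S = 200_000   # assumed ~2/3 of c

def min_round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

for km in (1, 100, 2_000, 10_000):
    print(f"{km:>6} km away -> at least {min_round_trip_ms(km):.2f} ms round trip")
# A data center 2,000 km away can never answer in under ~20 ms no
# matter how fast its servers are, which is why latency-sensitive
# inference has to run close to the device.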

So, yeah, it also just enables low power, right? Yeah.

This is the reason why people hate Bitcoin: not just because most of the people that got in early got rich, but because of the physical impact its compute requirements have.

And so there's always this comparison of, like, oh, you know, for every bitcoin you mine, you could basically power a city for a year, or whatever it's going to be.

But that's sort of a mythical, historical thing. Beyond Bitcoin, when we look at using blockchain, using machine learning, all of these things, being able to do them on lower-power, diverse hardware platforms... Yeah.

This is the Gutenberg revolution of machine learning.

Wow, thank you. All right, that was beautiful, I'll take that. Agreed. Yeah.

And also to free people from having to even think about how they can deploy models, because, of course, even as you develop a model, how do you know how it's going to be used? I mean, just think about mobile phones: there are literally 200 different Android phones. So how are you going to tune for every single one of them right now? And that's just one very small example. When I think about it, a model that comes out could run on the phone, could run on a smart camera, on a smart device, on a smartwatch, all of these things. Just not having to worry about where it runs could enable a whole wave of innovation.

Right.
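To make the tune-for-every-device problem concrete, here's a minimal sketch using Apache TVM, an open-source model compiler in this space. The model file and target strings are hypothetical, and this illustrates the general technique rather than OctoML's actual pipeline:

# Compile one model for several different hardware targets so no one
# has to hand-tune per device. Model file and targets are assumptions.
import onnx
import tvm
from tvm import relay

model = onnx.load("model.onnx")               # hypothetical trained model
mod, params = relay.frontend.from_onnx(model)

targets = {
    "server_x86": "llvm -mcpu=skylake-avx512",
    "phone_arm":  "llvm -mtriple=aarch64-linux-android",  # linking needs the NDK toolchain
    "nvidia_gpu": "cuda",
}

for name, target in targets.items():
    with tvm.transform.PassContext(opt_level=3):     # full optimization pipeline
        lib = relay.build(mod, target=target, params=params)
    lib.export_library(f"model_{name}.so")           # one deployable artifact per device

The same logical model goes in once, and a device-specific compiled artifact comes out for each target; that per-device drudgery is exactly the kind of thing a platform can automate away.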

So, yeah, you must be excited to be both, you know, in academia watching this world evolve, and now able to very literally create the future through what you're enabling at OctoML. How good does it feel, now that you've begun this journey?

It’s got to be challenging.

And I say this knowing, obviously, there's no easy path to entrepreneurship.

Yeah, well, thank you for that question, because I always tell folks how lucky I feel to have the team that we have. And I think one of the reasons we have such a fantastic team is our connection to academia, and the fact that we are a company that has a bottom line to it: you know, we have investors, we have customers, we have employees. And luckily, we are in a very good position, and that means that we're not a research group.

Right. But we are really pushing the state of the art, because we are a deep technology company. Right. So we are enabled not just by the fact that we have people who build the product, but by the fact that we have people who think on the frontiers of what's possible with machine learning, like using machine learning to make machine learning better. And the connection to academia, I think, is really important and extremely synergistic, and I would say essential to us, because we are connected to the latest and greatest machine learning models and the latest and greatest understanding of where even the hardware industry is going and what's possible there, but also because it's a source of talent.

Right. So our company has incredible talent. We have more than a dozen PhDs on a team of 40. Not that, you know, it's just about that; everyone is great. But I'm just saying that shows the level we are operating at in terms of pushing the state of the art: we have a lot of people that, you know, operate like software engineers, making a product, but they all have a research mentality and research background, and always think about, how can I do something better than was done before?

Because that's how a lot of folks who have done research tend to think. Right.

So, that's very fortunate. Yeah. Yeah.

It's always a tough metric to talk about. And I believe everyone should be proud to say, like, you know, we have a number of PhDs. At my own company, we have the same thing; we talk about it sometimes.

And it feels odd sometimes to say, depending on the context. But the truth is, what you just said is that there's a group of people who chose to go above and beyond in order to advance something that had been done before and could be done better. And when you bring in a specialty like machine learning, of all the technologies and things that we're doing in the world right now, it needs those thinkers for sure, as a group, as a collective. And it's also important that you don't have just one PhD, because then you have multiple.

Thinkers like that, people who've lived that life, have the ability to use critical thinking as a group, to aim for the best outcome. Not the right answer, the best outcome. And yet as humans, especially as entrepreneurs, we often get stuck with:

I've got the right answer and I've just got to teach the world that, versus, let's as a group work with our customers and the community and the world and academia and come up with the best outcome, because it will be surpassed in the future.

Absolutely. Yeah, no, I love that comment. And one thing I wanted to add there is that, you know, the path to impact, the time to impact of machine learning models, of any progress in machine learning in general, is extremely short in the grand scheme of things. You're talking about something that was in the academic world, that people write papers about in, you know, January of a year, and by the end of that same year it could be in production with people using it. That's just kind of unheard of in scientific disciplines: writing academic papers about something and having it impact people's lives in new products within months.

We're not talking about years or decades, which is the typical thing in a lot of disciplines. Think about advances in life sciences: by the time something has an impact on diagnostics, it's just a long time into the future. Same thing in physics and chemistry. In machine learning, a paper published in January can be in production by March, right? So having this tight loop between what the researchers do and what people get to see is really important.

And I think it's a beautiful opportunity. The dangerous thing is that if it only lives in academia and never makes it out, if the same people that, you know, take the concepts to the next level don't get a chance to actually be a part of the implementation of them, how do we learn, other than waiting for the next academic to come along and evaluate and analyze?

And like you said, in the past it would be a decade before you would see the results. Now you can literally, in academia, work towards a goal, execute your plan, evaluate, form the hypothesis, and then actually enact that hypothesis.

And as a commercial business, I think this is really, really cool.

Yeah, thank you. I completely agree. I couldn’t agree more.

So, you know, before we close up, Luis, I'd love to hear your thoughts. Eighteen-year-old Luis Ceze decided he was going to school, right?

Did you imagine you were going to go to school as long as you did? When did you build your plan, and when did today become part of that plan?

Well, that question gives me goosebumps here. So, just a quick personal story: I grew up in Brazil. I went to engineering school in Brazil when I was 18; I was an electrical engineering student at the University of Sao Paulo. You know, at that time I really liked research, and I was involved in some research, but honestly I never thought at first that I'd become a professor, that's for sure. And even though I would say I had thought about starting companies at that time, I never ended up doing it, because I got into the academic world and research and, you know, left Brazil to go to IBM Research to work on a machine that was aimed at life sciences.

And after that I went on from there, so it was very much, you know, taking the next opportunity, and the next. So where did the plan come together? I don't think there was ever a point where the whole plan came together. I just followed, you know, the flow.

But I always had the North Star that what gets me up in the morning is intellectual excitement and working with people that I can learn from and admire. And, you know, academia is great for that. And OctoML now has been great for that, too, because, you know, it's been a dream to have the kind of team we were able to build here.

So I hope that we find more Luises in this world.

You're too kind. Thank you. Well, thank you for the conversation. It's been a lot of fun, and I hope to chat with you again.

So, yeah, absolutely. I'll be excited to watch the growth of the team, the organization, your customer base, and hear some of the stories. We'll get caught up again in the future.

Obviously, all of the links are down in the show notes for folks that want to find you. If folks want to contact you directly, Luis, obviously they can go to Octoml.ai, but if they want to reach out to you directly, what's the best way to do so?

Yes, you can just go ahead and email luis at octoml.ai. Right, luis@octoml.ai. Write to me and I'll come back to you. Looking forward to hearing from your audience.

I also want to congratulate you and thank you for being an amazing intellectual who doesn’t use their university address when they run a company.

I know there's a beautiful pride in the stanford.edu or University of Washington address, but it's always amazed me to see someone who's been, like, three years the CEO of a company and still uses their university email as their contact.

And like you, you should be proud. The octoml.ai email is the thing to be proud of.

You've got a lot to be proud of. But thank you.

Yes, I'm very, very proud of our octoml.ai, for sure. Yeah. This email address will be the one from now on; it'll be there for a long time. So this email address will be valid for a very, very long time. I'm very proud of it.

So judging by you and your team, I very firmly believe it will be. Thank you very much for the time today, Luis.

Thank you. Thank you again, Eric.

Wow, that was a lot of fun.
