Sponsored by the 4-Step Guide to Delivering Extraordinary Software Demos that Win Deals - Click here! Because we had such a good response, we have opened it up to make the eBook, Audiobook, and online course more accessible by offering it all for only $5.


Sponsored by our friends at Veeam Software! Make sure to click here and get the latest and greatest data protection platform for everything from containers to your cloud!


Want to ensure your privacy is protected? I sure do. Privacy is a human right and the folks at ExpressVPN make sure of that. Head over to ExpressVPN and sign up today to protect your safety and privacy across any device, anywhere.


Sponsored by Diabolical Coffee. Devilishly good coffee and diabolically awesome clothing


Rob Telson is VP of Worldwide Sales and Marketing at BrainChip and brings over 20 years of sales expertise in licensing intellectual property and selling EDA technology. Rob has had success developing sales and support organizations at small, midsize, and large companies. 

We discuss the incredible capabilities of AI at the edge, the real definition of edge computing, and how that context creates interesting challenges for organizations and people.  The conversation also covers a ton of great insights into building effective sales and marketing teams, empowering people, plus where to find the best burgers in the world.

Check out BrainChip at their website here: https://brainchipinc.com/

Rob also hosts a great podcast here: https://anchor.fm/brainchipinc/episodes/Conversation-with-Vice-President-of-Worldwide-Sales-Rob-Telson-and-Alex-Divinsky-eua3lv 

Big thanks to Rob for a very fun and informative discussion.

Transcript powered by Happy Scribe

Rob, thank you very much. First of all, I get excited when I see a name show up that is, as a human, essentially, very interesting to chat with. And I’ve been lucky as I’ve gone over some of your own content. You’re a producer of content, you’ve got a really solid voice and a great way of really leading a discussion, which is cool.

And on top of that, I looked at BrainChip and was pretty darned excited about the potential of what’s there. So before I jump in and start the discussion on the good stuff, I did want to, for folks that are new to you, Rob, have you give a quick introduction. Then we’ll talk about yourself, we’ll talk about BrainChip, the Akida platform, which is really cool, the technology, and kind of what’s being done with it.

And we’ll kind of run from there, if that’s all right.

Yeah. Well, Eric, thank you very much, first of all, for having me on the podcast. And likewise, I’ve had the opportunity to listen to some of your podcasts and you’ve had some pretty talented people on. So I’m actually excited and honored to be here, and I feel like I have some big shoes to fill in the discussion. Right. But just real quickly, for the listeners out there: BrainChip is a semiconductor manufacturer, and we’ve developed an artificial intelligence processor.

And it’s, you know, architected to function similar to a brain. It’s based on continually learning as we go on the device that we’re implemented in, and it processes without any dependency on the cloud. That will become a lot of our discussion today. And last of all, it’s architected in a way that it consumes very little power or energy. By doing that, it gives you a lot of flexibility and functionality to do a lot of things in the world of compute that are becoming very common to all of us.

But it’s going to evolve over time. Before I talk more about BrainChip and our processor, which is called Akida (for those listening, Akida means spike, and we’ll get to that in more detail), I really want to talk about the problem that we see happening and why Akida is going to play a major role in solving a lot of potential challenges in technology moving forward. So what I want to bring up is three words that in my world mean the same thing, and there are a lot of technologies out there that will give you ifs, ands, buts and so on.

But the reality is we’re talking about the same thing. So let’s think of IoT based devices, let’s think of edge based devices, and let’s think of endpoint devices. They all mean basically the same thing, and that is we’re really, really far away from the cloud. We’re doing a lot of compute, and in most cases, especially as we evolve, that compute goes to the cloud and then it goes from the cloud back to the device. OK, and I’m going to reference edge based devices, and instead of naming AI processors, deep learning accelerators, GPUs, NPUs,

I’m going to use the word engine: the engine that’s processing this information. So if we look at the evolution of edge computing and we look at the evolution of these devices over the next five years, we’re looking at hundreds of billions of devices, and these hundreds of billions of devices are going to be demanding easy access of information to the cloud and back. To put that into perspective, I think it’s forecasted that there are going to be 90 zettabytes of data going from edge based devices to the cloud and back.
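To put rough numbers on that forecast, here is a quick back-of-the-envelope calculation, taking the 90 zettabytes per year and "hundreds of billions of devices" figures at face value and assuming 100 billion devices for the arithmetic:

```python
# Back-of-envelope arithmetic on the forecast numbers above. Both inputs
# (90 zettabytes/year, 100 billion devices) are the episode's ballpark
# forecasts, not measured data.
ZETTABYTE = 10**21                       # bytes
total_per_year = 90 * ZETTABYTE          # forecast edge<->cloud traffic per year
devices = 100 * 10**9                    # "hundreds of billions" -> assume 100 billion

per_device_per_year = total_per_year / devices   # bytes per device per year
per_device_per_day = per_device_per_year / 365   # bytes per device per day

print(f"{per_device_per_year / 10**9:.0f} GB per device per year")
print(f"{per_device_per_day / 10**9:.2f} GB per device per day")
```

Even spread across that many devices, it works out to gigabytes per device per day, which is the traffic jam Rob describes next.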

So we all need to take a step to the side, put ourselves in the driver’s seat of our car, and realize we’ve just hit a traffic jam. And what that means to anyone who has a smartphone right now, or has a voice personal assistant at home like an Alexa, or, let’s say, is using Siri on an Apple based device: I don’t know about you, but there are times when I’ve said, Siri, directions to this address, and Siri says, I cannot help you right now. Or: play

Fleetwood Mac. Can’t help you right now. OK, so what’s going on there is that the device is trying to communicate to the cloud but doesn’t have access to it. And we’ve all just become accustomed to dealing with that, those of us that have experienced it: you just wait a couple of seconds or whatever and you move on. But let’s fast forward to this evolution of edge based devices, and let’s think about vehicles. Let’s think about unmanned vehicles.

Let’s think about flying vehicles. Let’s think about drones. Let’s move into medical device applications. Let’s move into industrial applications. We’ve got an issue: for doctors in a remote location trying to save someone’s life, needing access to the Internet and not having bandwidth to the cloud, the device needs to be able to do some processing on the device, and that’s where we see the key to making some massive impacts. The other thing that Akida can do, which gets me really excited, is it has the ability to learn on the device. We call that one shot training or one shot learning.

So you’re adding new images, you’re adding new functionality to the device as you go. This is the growth and evolution of artificial intelligence. And we’re able to do that because we are architected in an advanced way, using what’s called a neuromorphic computing architecture, and neuromorphic computing is architected to function much like a brain. And what I mean by that is, for our listeners and for you, Eric: right now your brain is moving and it’s consuming energy, and it’s listening to everything I’m saying that is important to you.

But being a fan of yours, I would hope you have a cup of Diabolical coffee sitting by your side. Right. And you can smell that fresh coffee, but your brain is not spending a lot of time on that. And it’s not spending a lot of time recognizing that your hands are resting and you’re touching something, and your feet are touching something, or the scene around you. Exactly. So, going back to what I said earlier, Akida also references spike.

We focus on spiking, or events, and that’s what makes us extremely unique and extremely advantageous as we move into this next generation of AI: look, we can function and focus on all these different things that are going on, but we’re consuming all of our energy on the event that’s taking place right now. The traditional engines, as I referenced before (and that could mean a lot of different things), have to process and compute all of this information.

They are burning, they’re consuming so much energy and power, and they’re moving faster and faster and faster to solve the same problem that we’re going to solve using microwatts to milliwatts, using a lot less energy, which gives us a lot more flexibility. So I know I said a lot there, but that’s giving you a little bit about BrainChip.
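For anyone who wants a concrete feel for the event-based idea Rob describes, here is a toy Python sketch: a frame-based pipeline recomputes every input on every frame, while an event-driven one only touches inputs that changed. This is a simplified analogy for intuition, not BrainChip’s actual Akida architecture:

```python
# Toy comparison of frame-based (dense) vs. event-based processing.
# Dense: every pixel is processed on every frame.
# Event-driven: only pixels that changed generate "events" to process.

def dense_work(frame):
    # Cost proxy: one operation per pixel, every frame.
    return len(frame)

def event_work(prev, frame):
    # Cost proxy: one operation per changed pixel only.
    return sum(1 for a, b in zip(prev, frame) if a != b)

frame0 = [0] * 10_000        # a 10,000-pixel "image"
frame1 = frame0.copy()
frame1[42] = 1               # a single pixel changed between frames

print("dense ops:", dense_work(frame1))          # processes all 10,000 pixels
print("event ops:", event_work(frame0, frame1))  # processes only the 1 change
```

The gap between those two numbers is, loosely, where the microwatts-to-milliwatts power budget comes from: energy is spent only on what changed.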

And I’d say that’s actually the perfect segue to something that’s very important about this, the challenge that we’re trying to solve. Right. It’s one thing for me to yell into my phone, like, Siri, find the closest Tim Hortons, or, sorry, I guess I should say Dunkin Donuts, find the closest Dunkin Donuts, and hear: yeah, I’m sorry, I can’t help you right now. And, like, darn it, Siri. That’s fine, I’ll open Google, and I’ll say it’s got a slow connection, and I can get through it.

It’s very different with system to system, which is where, like you talk about, you’ve got rolling devices that are using LiDAR and ultimately need to communicate, and they need to do a lot of processing locally, because they have to have certain amounts of asynchronous communications and they cannot rely on every transaction being decision driving. So they have to be able to make decisions locally. So we do all this, and it sounds fantastic. Irresistible, easy: why don’t you just move processing close to the edge, closer to the workload, whatever.

Great. Sounds great, right? It sounds as great as the numbers that we hear when we think about the scale that we’re tackling. But then the first thing you think is: how do I drive this? Look at the power it takes to do this stuff in the datacenter today, or in the cloud; a fantastic amount of power is required to run GPUs and TPUs and all these things. When you move to the edge, now, this is the thing: you’re talking about milliwatt representations of power usage, a fundamental shift in the ability of technology to act in the way that Akida is doing, yet

not burn the planet down like Bitcoin miners doing this crazy blockchain processing.

Yeah, it’s amazing. I’m going to bring up an example again of a scenario. And I don’t want to get overdramatic, but I think we always have to visualize these types of scenarios; they are closer to us than we realize. You know, the proliferation of electric vehicles, the proliferation of vehicles in general, and the amount of compute that’s taking place in the vehicles of today and tomorrow: the cars are designed to do a lot of great things.

And I’m a geek, so I get really excited about functionality. Useless functionality usually gets me to buy something. So what happens, though, is that if we don’t have the ability to have some of this compute and decision making on the device, we’re going to find a scenario where you’re driving a car and that car sees a human, and it knows what it needs to do is swerve hard right and get out of the way.

Now, that hard right to get out of the way could actually end up causing another impact. And if it’s dependent upon communicating with an external device, whether it be the cloud or something else, it’s going to make that hard right whether it communicates or not. There is a human there, and then you find out it’s a plastic bag. We didn’t need to make the hard right. So we need to advance, and that’s what we’re looking at.

The second thing that you brought up there had my head spinning a mile a minute: OK, how do I want to respond here? But you start to take a look at devices that do consume a lot of energy, and one device that everybody has in their home is the refrigerator. It consumes a lot of energy, and we look at these consumer white goods appliances as ones in which Akida is going to help reduce the amount of energy that’s being consumed.

We look at it and we say, in the future, you’re going to want the ability to recognize by odor whether your food is stale or has reached the end of its lifespan. How do we do that? Akida has that capability. One thing I didn’t mention that I’ll dovetail on is that we focus on five sensor modalities, and two are very common in the world of processing today. One is image or object detection, very similar to the plastic bag and what’s out there and what’s going on.

And the other one is voice or keywords, again, something very common in a lot of the functionality that we have today. But there are three other modalities that will become impactful in our everyday lives. One is the ability to smell, or olfactory. One is the ability to touch or sense vibration, which is somatosensory. And the other one is taste, gustatory. And between all of those five sensory modalities, that’s where BrainChip has kind of hung their hat and said, OK, well, if we can process vibration,

then we can help in industrial applications and make things more efficient, we can save the infrastructure of the roads and bridges, and we can actually help with prostheses. So people that have prosthetics can now feel by having vibration detected somewhere else in their body. And we’ve now seen, with the olfactory side of things, and they always say you hit these inflection points by random chance. You design it all you want, but these random events happen.

You have what are called VOCs, or volatile organic compounds, which are breath markers. And by these VOCs you can detect different diseases, illnesses, viruses, and even different types of cancers. So with the right systems and the right engine, you can do a lot of good. You can start to make a lot of impact in areas that we only dreamed of five years ago.

And I think that’s the important thing, too, when we think of any solution, and what I love is enabling solutions, which is what you and the team are tackling by bringing Akida to the market. You know the old show Halt and Catch Fire? One of my favorites. There’s a famous sort of early episode where he says, you know, computers aren’t the thing. They’re the thing that gets you to the thing.

And so we look at what this does: it’s an incredible enabling function. Now we can move to enabling prosthetics to be, you know, touch detecting. This is the stuff, like the reason why we went to the moon. And people say, oh, well, there’s no one living on the moon, why did we go to the moon? And, you know, we said, well, I’m not sure if you’re familiar, but most of the things in your house were engineered and discovered as a result of the research that we did in that effort. We know why we call it a moonshot.

Yeah. And so now, by giving this capability to the research community, to the industrial community, as we go from Industry 4.0, which I didn’t realize we were already at, to 5.0 or whatever it is, these are the huge, huge leaps that can be made by those communities and those creators and those researchers that are going to tackle, as they said, big problems. This is like what quantum is as a concept. Right, and it’s so far out of the realm of most people’s mathematical understanding

that it becomes like, oh, yeah, one day. But when you show this and you bring this and you show people what can be done now with this capability, this is a lot closer to the use cases that people understand, I think, than those sort of massive concepts like Elon landing us on Mars. And: yeah, you know, it’s been tremendously satisfying on a daily basis. We’re constantly on calls with current customers and future customers and dreamers and visionaries.

I’m going to take this conversation twofold. Number one is, I think it’s really, really important for people to understand that the world of artificial intelligence is very complex, and a lot of people are addressing these challenges and problems and solving them in a variety of different ways. So what I’m trying to do is communicate that there’s an ecosystem out there, and within that ecosystem you need to have sensors that can detect what’s going on, you need to have a data set

in order to understand what you’re trying to detect, and you need to have an engine that can take all that information and actually accurately predict and accurately provide feedback that you can depend on. So, again, we are the engine. But one of the things I brought up before when I talked about VOCs, and it’s something that we’ve had some success with, is, for example, working with a couple of partners, but one in specific over in Israel that was able to develop a type of breathalyzer that you can breathe into, and it can capture information.

And the goal was to recognize different types of viruses or disease. And it just so happened that as they were building this product, COVID hit. So: now let’s apply it to COVID. We get data sets, we get all this information; what engine is low power, operates quickly, and provides us the accuracy? Right. And so we were hitting 93, 94 percent accuracy on these data sets in determining COVID positive, COVID negative.

And that’s one example of what we’re doing.

It’s amazing to think, and like you talked about one shot: for anybody that wants to, you know, pause the show and go understand what it means to be able to do zero shot and one shot learning, and the impact of doing it this way versus the massive data set, continuous learning way, it’s a big, big change. And now to be able to bring one shot learning close to where it needs to be processed, because the other problem we’ve got is that the data sets needed to do continuous training are potentially massive.
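For listeners who do pause the show, the one-shot idea can be sketched in plain Python: a fixed embedding function plus one stored prototype per class, where a new class is "learned" from a single example and classification picks the nearest prototype. This is an illustrative toy (the `embed` function is a stand-in for a trained feature extractor), not how Akida actually implements it:

```python
import math

def embed(x):
    # Stand-in for a pretrained feature extractor: here, just normalize
    # the input vector to unit length. A real system would use a trained
    # network's embedding.
    n = math.sqrt(sum(v * v for v in x))
    return [v / n for v in x]

prototypes = {}  # class label -> single enrolled embedding

def enroll(label, example):
    # "One shot": learn a new class from a single example.
    prototypes[label] = embed(example)

def classify(x):
    # Return the label whose prototype is most similar (cosine) to x.
    e = embed(x)
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    return max(prototypes, key=lambda lbl: dot(e, prototypes[lbl]))

# Enroll two classes from one example each, then classify a noisy sample.
enroll("mug", [1.0, 0.0, 0.0])
enroll("bag", [0.0, 1.0, 0.0])
print(classify([0.9, 0.1, 0.05]))  # nearest prototype is "mug"
```

Note what is absent: no gradient descent, no giant labeled data set, no round trip to the cloud, which is the contrast with continuous training being drawn here.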

And while we’ve got 5G coming, and everyone says, oh, 5G is going to be here, that’s going to save the world. Well, no. You know what’s going to happen with 5G? Instagram. That’s what’s going to happen. YouTube, Hulu. The reason why 5G is going to be full by the time it fully gets here is because we are always pacing well ahead of the capabilities with the content. And now, if we look at it, we’re effectively going right alongside your latest, you know, Hulu release, the Game of Thrones or whatever craziness people are streaming.

And you’re trying to do legitimate real time neural processing. This is why being able to bring that processing in and do it there is a significant need. Yeah, it’s a huge boost, like I said, because right now the devices aren’t capable of doing it without a significant amount of power, or without these large processing units that need to be local. So doing this at a small scale with low power is game changing, to pardon the overused phrase.

Yeah, it’s exciting. And we’re at a point where we have customers, and, you know, our business model is made up of the fact that we sell silicon, and we’re in volume production right now, and we also license our IP. So they’re getting an encrypted IP that they can design into their system on a chip, and that functionality is on that device moving forward. And I think at a time like this, this is huge. I mean, one of my colleagues says, Rob, you’re in this Jerry Maguire moment, where I’m saying we have these semiconductor challenges, manufacturing and shortages.

And there are a lot of great technologies in the sandbox, all trying to solve specific problems and broad problems. There are only a few of us that are licensing the technology as IP. So for the others, they’re all scrambling to get wafer capacity and to get in line. And my Jerry Maguire moment is just made up of the fact that: how great is it that some of these companies can incorporate our IP into their system on a chip, rely less on the supply chains and the challenges that will exist for a period of time, and be able to make a move as a market mover and have products with a lot of functionality on them.

So we see that. And I’ll stop talking about my Jerry Maguire moment. But I do like to emphasize that we have two business models that we’re leveraging, and we’re seeing success in both of them. And the other thing is, I look at it and we have two paths. Right. One is, as you mentioned, there are a lot of really exciting AI applications, and some of them are more scientific than commercially focused.

And we’re at a unique crossroads where you want to address a lot of these scientific solutions because they’re really cool and, you know, you’re going to do something really good and beneficial. And we use that word, beneficial, and I think that’s important. But also we have a business to run, and we are in the process of making sure that we have a commercialized product that all of our customers and future customers will have a lot of success with.

And I’ve got to call something out, which is: thank you for using the best phrase on Earth. Because, you know, the way you describe things here, you can tell that you’re very people focused, even in understanding the technology. We talked about how deep we can go on the tech, and first of all, you go far deeper than you would even give yourself credit for. I’m sure we could go further, but

you call them future customers. That may seem like a throwaway to some people, but I’ve worked with a lot of sales organizations, and every time I hear the word prospect, I cringe a little. It’s a difference in the way that you think about it. And by the way, it’s not even a thing you fight to make sure you don’t say; it’s natural. Like you just said, you know, the people we’re working with today: future customers.

I’ve got huge respect for that. So, thank you. There’s something in you: you’re definitely solution and people focused. What brought you to that? We’re going to dig into the Rob portion of the show right now, because I want to unpack this. It’s actually very rare for a senior sales leader to not kind of slip up and say, we were talking to a couple of prospects today. It seems like a small thing, and I have massive respect. First of all, I should qualify this with: look, I’ve never, as they say, carried a bag for an organization.

I’ve got huge respect; everyone’s in sales to some degree. I’ve been lucky that I don’t carry a quota for it. But I couldn’t live without my sales teams doing what they do, and I furnish them with the tools to do what they do. Right. But you split this line where you have this and this; there’s something different. I want to figure out where that comes from.

All right, let’s dig into it. What do you want to know?

So what was your background? I noticed that through your career, you’ve actually been pretty consistently in this account executive, sales track; you’ve always sort of come in there. So what drew you to that? And then what drew you to the way in which you approach it?

Well, you know, I’m going to go back far. It sounds like it’s a short path, but I just had my birthday yesterday; it’s been a while. Well, thank you. So I went to school to become a lawyer, and I came home from my final interview to go to law school, and I sat down with my parents and explained to them that, OK, I start in three months, and dad, you’re going to owe X amount of dollars when all is said and done for me to go to law school. A very matter of fact statement. And my father looked at me and he said, oh, hold on a second.

And he said, I never said I’d pay for law school. You’re on your own. My recommendation is you go get a job, go do something, and then figure out whether you want to go. So the next day I went out and started interviewing for jobs, and I ended up getting a job selling mailing and shipping systems: learning about warehouses, learning about packaging devices, and just walking door to door in a suit and tie, talking to guys in warehouses and selling products. The only thing I realized was I was really good at it, and I was living at home and making a lot of money.

And I’m like, wow, who needs law school, right? And about that time, we were transitioning to Windows based operating systems. And here’s one thing I really look back at, about when parents make investments in their children: my father was really into buying computers before computers were something that everyone could get their hands on. And he had chosen Apple. So we had Macs, and I had learned how to work on the Mac OS, so by the time Windows was introduced, you know, it had a lot of the same feel.

So the company I was working for had introduced a Windows based shipping and mailing system, and no one knew how to use it, no one knew how to operate it. I sat down and started tinkering with it, and before I knew it, I was part of a Fortune 500 company, sitting with the CEO of a company somewhere in Puerto Rico, teaching him how to use the system. And my sales career kind of took off, and it immediately took me into the most random decision I’ve ever made in my life, which was to move from this company into EDA, electronic design automation.

I got a job with a very large EDA firm that brought two guys in that had no engineering background and said, one of you, as a sales guy, will learn how to navigate all the way to the upper echelons of senior executives, and one of you won’t. And within six months, most likely, one of you won’t be here. That wasn’t me; I was there, and was able to have a lot of success, and then that led to startups.

That’s when things in technology were starting to take off. The Internet was becoming something of a thing. And this was back in the day when, you know, you went to startups and you tried to make some money. And I was very fortunate. I use the phrase, I’d rather be lucky than good. I was really lucky to be at the right place at the right time, which led to intellectual property, and again, another startup, which led to a corporate career of sales organizations evolving into large sales organizations, evolving into sales leadership, and being a part of a phenomenal run with a great group of people at a company called ARM, doing that for a long time, running their sales for the Americas as well as their sales for manufacturing, most of which was done over in Asia, and then transitioning to the second phase.

I call it the second phase of my life: I wanted to do another startup. I’ve done this before, I know what we’re doing, and I know what I’m looking for. And it just so happened that one of the co-founders of BrainChip had been a customer of mine, and we had done business together. We were having a discussion and he said, Rob, it’d be great if you could join our company; we need to build a sales and marketing organization.

And I think you’d be phenomenal doing it. Which led to: yes, sure, why not? And here we are, months later, and we’re starting to make an impact; the things that we’ve put in place all seem to be executing. So this, to me, is a very, very exciting time for BrainChip, and to be a part of this cycle, this revolution, is really cool. So that might be a little long, but that’s what I went through.

But, as they say, it’s a beautiful story, because it tells you: when we choose a place to be, one of the most fundamental parts of the why is that there are people we care enough about to give ourselves to their life via the startup. Like the amount of time we spend supporting each other and supporting the organization, and, you know, the sacrifice of families: it’s no insignificant step to move into the startup landscape.

And especially, you know, you went to what were effectively the OG startups, right? Startups now, you know, I’ve got four startups that are running right now. I mean, they are startups: I’ve got a coffee business, I’ve got an affiliate website, I’ve got a podcast, I work at a startup which is not really a startup because it’s a 12 year old company that was just acquired by IBM, you know.

OK, but back then there was no other option. There was Fairchild Semiconductor, and it was, like, the crazy hero story of how Silicon Valley truly became Silicon Valley. Right. But the story is that we don’t hear how many did try and get started. And they were very dominantly in manufacturing and physical stuff. There was no AWS; there was no way you could just whip up a startup at a Starbucks. So the leap of faith was even more faithful

in the first one that you did. And now, are you, as they call it, unemployable, right? You can’t go back to the big company again. It’s always a very interesting thing: once you hit a certain phase of growth of an organization, quite often you look back and you say, the biggest impact I feel like I’ve had was, as it turns out, growing this company from one million to 10 million or whatever.

And I find a lot of folks really are kind of like a SWAT team for a growth phase. But you’ve spanned every phase through the different ways in which you’ve been in the industry. So I’m curious if you’ve got a kind of favorite phase, or favorite thing, about building the team and building the organization.

It’s funny, I was reflecting on this as I was writing an email earlier today to my current team. We have some really young guys, very talented individuals that have a lot of development to do, but I also have some pretty senior guys that have actually worked for me for a while in different environments. And I was reflecting on a comment by one of the young guys, who was just talking about the camaraderie of the team that we’ve built, and how we’re there for each other, and the fact that he was highlighting this level of comfort and excitement every day; it’s almost like a family.

And I think what I’ve been able to do, I don’t know why, there’s no magic recipe, but it’s a team, it’s an organization, and it’s about getting everybody trusting each other, being there, and then being able to pick up the slack when we need to pick up the slack, being able to realize what we’ve just done is not good enough, we have to do better, and having that mindset to win. And that’s carried on with me from very young organizations with a few people to very large organizations.

And I think that’s part of the fun. The other thing, at this point in my career, that I really, really enjoy is having the frank and candid conversations, almost like a mentor. And there’s a fine line when you’re managing people and working with them and mentoring them, around what could be some personal dynamics and so on. But when I look back at people that have worked for me and worked with me, and I’d rather say worked with me than for me, we still keep in touch.

We communicate a lot, and there’s a lot of frank dialog about, what would you do in this situation? And it doesn’t have to be business related; it can be life as well. And to me, that’s very fulfilling, more like a friendship. And I think when you look back at life, it’s one of the things you have to look at: how are you measured? What did you do? And I had the opportunity to do an executive management program called the Program for Leadership Development through Harvard Business School.

And one of the most dynamic individuals of his time was Clayton Christensen. Oh, yeah.

Who we lost, yeah.

And I had the opportunity to spend a lot of time with him. For those who don't know, Clayton Christensen is the author of The Innovator's Dilemma. But then he had a follow-on book after he had a stroke, and it was called How Will You Measure Your Life? And so as we were all evolving, young and aggressive and wanting to capture the world, he was on the other side saying, does it really matter? I mean, what about your family?

What about what mark are you leaving? And so when you ask the question, all right, what do you prefer? You know, we walk into an environment and understand the environment, large or small, and that's the team. Let's go. Let's go be effective. And one of the things that I'm really steadfast on right now is that I have my children, I have teenagers, and I have kids that aren't teenagers anymore, in their 20s, and they're working and they're grinding and they're learning.

And one of the things I think is important is that they see some of that same work ethic from me, to an extent. I also want to spend a lot of time with them. But I think they start to realize, like this podcast, my daughter and her friends will listen to this podcast. So I just referenced my daughter. And I think that's what I mean when I look at making my mark. I want to make my mark for my friends and my family, and for them to see that this is life.

This is what you do. You get up every day and you give it the best that you've got.

I often say that my greatest accomplishment in life will not be what I achieved, but what I helped someone else achieve.

Yeah, exactly.

And whenever I say it, or write down some of these things, the way I do a mentoring program, and I've been mentored as well and continue to be, it's a continuous process, right? And it's funny, I started thinking, oh, if I start writing this stuff down, it's really cool to be able to share. But then I'm going to end up being like, you know, what do they call it?

The fortune cookie Twitter voices who just go on there and write threads like, I've talked to five hundred CEOs and this is what I've learned, a thread, you know.

Right. Right, right.

But the amazing thing is we do now have the ability to have a much greater, continuous, and direct impact on people because of the way we can communicate differently. And like you, by the rarity of the opportunity and your choice to take it on, you got to meet one of the most fantastic leaders and creators. Clayton Christensen is widely read and widely respected. And, you know, I have a friend of mine and I was joking about something.

I said, I'm like the guy from A Beautiful Mind, but for technical marketing. And he's just like, I went to school and John Nash was one of my professors. I'm like, oh, wow. The opportunity to be there is different, but the other problem with abundance is also the ability to find focus. And right now, it's like when you look at your wall of books and you see, like, I've got two hundred books and effectively they're all the same, the same middle chapters and just a different intro and outro, for a lot of business books anyway.

But I still read them all. Most people, though, they've got the Internet, they've got YouTube, they've got Masterclass, they've got school. We have so much available, but then we don't leverage it. Like, we don't listen. You know, I read everything that my dad ever wrote, right? And I sat with him as a kid building systems. That's how I got into this stuff, because he was learning at night.

And so I figured, well, if he's going to learn, I'm going to sit beside him, you know, as long as it doesn't bother him. And then what happened was, I was 12 years old learning dBase, and then I got a job at 15 years old doing data entry, because it was on a dBase III system. And I was the only 15-year-old kid anywhere who knew it.

I lived in a small farm town, and I was the only person other than my dad in that town that knew dBase. And so it was just so funny that that's how I got the job. And then I got into system development and, you know, diverged later on for years, and then came back to technology. But again, the reason I say this is.

Just hearing that your kids will take the time to listen to what you said, not from your mouth, but they'll go back and be like, hey, Dad was on a podcast, let's check it out. Yeah, exactly. That's an impact. That's a profound impact.

It's funny. One of the things I was excited about on this podcast was the fact that my youngest son, as we hit the pandemic, naturally was trying to keep himself busy. And, you're going to laugh in a second, he really got into coffee. OK, see where I'm going with this. And so, as a dad, I'm like, all right, let's get into coffee. I wasn't even drinking coffee at the time.

And we got a coffee machine. And then from the coffee machine, we decided to up the ante a little bit, and we got a grinder, and we were buying beans from little boutique shops and tasting a little bit. And then I got into coffee. Now I'm into coffee, and I've got a machine that grinds and brews, and I froth my milk. And then we were buying beans from four or five different outlets all over the country and shipping them in.

And again, this was how we were entertaining ourselves. And he became the barista of the family. He wasn't drinking it, but he was making it. And I would say, I'll take a mocha. And my wife would say, I'll take a vanilla latte. And then my other son said, I'll take an Americano. And all of a sudden the barista was just in action, and we'd have family over and everyone would say, make us coffee. And so fast forward, he's not into coffee anymore.

He's done with coffee, doesn't want anything to do with it. And I'm stuck with my coffee machine.

And you've lost your barista. No more, I'll take a double half-caf, Rob, yelling across the kitchen, so I'm making my own coffee. But he's still over my shoulder every once in a while, giving me a little bit of advice and pulling up stuff on YouTube. So I will give your coffee a plug. I am going to order it and put it up against some of the other guys that I've started to build some relationships with.

And we'll see how it goes. And that leads back to Akida, smart coffee makers, and what we'll see in the home. And I do want to make sure that everyone listening, if you have a free moment, go to our YouTube channel: go to YouTube and just type in BrainChip Inc, and we'll have the links as well. Yeah, and we'll put them in the show notes as well for folks.

We have these videos that we've put together that really show you where this is all going on smart home and some of the devices and applications we will all find customary in our homes in the future. And I'm going to reel it back now that I said that, and I'll tell you a funny story. If you ever bump into anyone that knows me, that really knows me, and you say Rob Telson, they will ask you, have you had a burger with Rob?

Nice. So I just want you to realize that there’s a potential that could happen.

A burger aficionado. So I'm going to take you up on that.

Rob, I'll drop my California dude on you: I'm a burger guy, too, and I've had a lot of burgers. So back in the day, there was a very intense negotiation in the Pacific Northwest. And we had a client, a future customer at the time, but they then became a customer, that wasn't being very receptive to the negotiation. And as we were sitting there having this discussion, I had a team of about six people and they had a team of about six people, we're four hours into this discussion and we're getting nowhere.

And we flew all these people up for this meeting. I mean, this was supposed to be the deal. And I, at the time, kind of made it up: hey, look, I'm really into burgers, and we're in a town we've never been in. I've got my team. Why don't we take a two-hour break? I'll take my team, you give me a burger recommendation, we'll go get some burgers, and we'll meet back. And the guy on the other side said, you like burgers?

I love burgers. Matter of fact, I'm writing a book on burgers. And he said, well, if that's the case, I want to take you to my favorite burger place. And we left everyone there in the room and we went off, and we had burgers and we had some French fries and a couple of sodas. An hour and a half later, we had closed the deal. And so that stuck with me everywhere I went.

And everyone that has bumped into me in my professional career has kind of stuck to this. Oh, Rob, I've got to take you to this burger place. Oh, Rob. And that has led to burgers in Taiwan, burgers in China, burgers in Korea, burgers in Europe. And I learned over time to very graciously say, when I'm in a foreign country, I will eat the food of choice, not burgers. Burgers are for here.

So I ended up writing a book for a close friend of mine on burgers, because this person had experienced a lot of burgers with me, as had my wife. My wife has experienced almost every burger with me, but the first book I wrote I dedicated to my buddy, and my wife was like, are you kidding me? You didn't dedicate this book to me? I missed on that one. It was for his fiftieth birthday. But yeah, I can hang when it comes to burgers and talk about burgers, and anyone out there interested in giving me some advice on burgers, I'll take it and potentially experience it.

Like I said, this is why, when I listen to your podcasts and the content that you create and inevitably the discussions that you have with folks, this is why I latched on to what's going on with BrainChip and the potential there. It's incredible to see. You've got a choice to go to a team, and a team is a beautiful thing, because teams, especially in startups, at different phases of organizations, are very dynamic.

And what you end up having to make sure that you do is find that top-end vision: what's the reason why we're here? And that's the thing you say. I mean, Donald Miller sort of fancies it up with what he calls StoryBrand. And it's so easy when you get used to it and you start to hear it come out. But you realize it's much more than that thing, because I've gone into many companies and I see the thing behind the administrator's desk at the front, and I see the stuff printed on the walls.

And you realize that the only person that took that in was the person that painted it. It's not reflective of the culture. The culture is how you deal with things; the culture is how they'd be if the walls were white and no one was looking in. So, knowing why you're doing what you do, what's your vision, your guiding principle, and then taking that and being able to have fantastic technology that's in an exciting area of opportunity.

But it's also probably a fun challenge, because a lot of the types of solutions that you're able to enable are still being developed. They're still in a lot of research areas. So it'd be neat: in 10 years, you're going to look back and say, we did that. It's cool to see that. But it's also a very interesting dynamic, being in organizations where you're selling something where they're going to get this one day, and then they come back and they're like, oh yeah, when he explained it to me, now it totally makes sense.

It's pretty impactful. And the amazing thing about Akida is it's wide and deep in regards to applications and problems we're going to solve. We can only say so much, but it's an exciting time. And as you said, with these types of technologies, when you look out in the future, when you look out at where you're going and how you're going to get there and the impact that you're going to make, my gut tells me that BrainChip is going to be impactful, and we've got a lot of work ahead of us.

But it's a lot of fun, and it's going to be a good ride. Looking forward to it.

And especially to go with a first-principles approach in developing a solution is amazing, because when we look at any competitive landscape, quite often the first thing most people will do is just turn around and head for the door and look for another objective, because it's a big challenge. But that's what it is. You choose first principles to attack a problem in a specific way, do so provably, and then find people that can take that and give it an application, give it a place in which it can be leveraged, and then ultimately see that come to fruition.

So, yeah, it's exciting. What was the most exciting surprise you had after you were like, OK, I'm at BrainChip, I like the idea? When did it suddenly go, oh, hang on a second, what was that? What was the thing that really kicked in for you?

Yeah. One of the things we haven't really talked about is the learning on the device. For those that are new to artificial intelligence, and I brought it up a little earlier on this podcast, training a device, teaching it what to look for and then processing that information, is not trivial and is extremely complex. It requires data scientists, research scientists, and it requires a lot of time: six months, a year, and so on.

So having the ability to have a device where we can add a feature to it, add my face, your face, and then use that for facial detection, or for image detection in a vehicle or something to that extent, on the fly, is huge. And I think it was day two, day three, one of the engineers said, do you want me to do a demo of Akida for you? I said, sure. I sat down and started putting some stuff in front of it, and it was recognizing everything really quickly.

Very low power. And then I said, well, what if I want to take a baseball and put it in front of Akida? And it was using a dataset that had no idea what a baseball was. And he said, sure, no problem, put it up there. You know, you trained it on the device, and all of a sudden baseball is now part of the whole dataset. And I'm taking the baseball.

I'm throwing it through the image. It's picking it up as it moves through. I'm like, wow, that's really responsive. And fast forward again: we just did a presentation about a week ago where we were teaching Akida beer bottles. We had different beers, different brands, on the floor, pulling them in and out of the screen and showing the viewers that, hey, look, it's recognizing it really quickly. But we decided to show Akida two beers from the same brand where the label and image were so similar to each other that they confuse other devices.

And we were able to recognize it, we were able to process it quickly. And the whole on-device training thing is mind-blowing, and it will become impactful for everything from industrial applications through automotive through consumer applications in the future. So that's the one thing that I get really excited about.
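The "add a baseball on the fly" demo Rob describes is essentially one-shot class addition. As a purely illustrative sketch, and not BrainChip's Akida API (every name below is invented for the example), the idea can be shown with a nearest-class-mean classifier over embeddings, where learning a new class is just storing a mean vector instead of retraining the whole network:

```python
import numpy as np

class NearestClassMean:
    """Toy incremental classifier: adding a class is just storing the
    mean embedding of a few examples, with no retraining pass needed."""

    def __init__(self):
        self.means = {}  # label -> mean embedding vector

    def add_class(self, label, examples):
        # "One-shot" learning: a handful of example embeddings (even one)
        # defines the class on the fly.
        self.means[label] = np.mean(examples, axis=0)

    def predict(self, embedding):
        # Classify by the nearest class mean (Euclidean distance).
        return min(self.means,
                   key=lambda l: np.linalg.norm(embedding - self.means[l]))

rng = np.random.default_rng(0)
clf = NearestClassMean()
clf.add_class("cat", rng.normal(0.0, 0.1, size=(5, 8)))
clf.add_class("dog", rng.normal(1.0, 0.1, size=(5, 8)))
# Later, "baseball" is added on the fly from a single example:
clf.add_class("baseball", rng.normal(3.0, 0.1, size=(1, 8)))
print(clf.predict(np.full(8, 3.0)))  # -> baseball
```

The point of the sketch is the asymmetry Rob is excited about: adding "baseball" costs one vector average, while a conventional retraining pass costs data scientists and months of compute.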

Yeah, that's it. On-device learning, yes. This is where amazing things now are possible, and I'm literally just overly excited by it, because the potential is incredible. And I've been lucky recently: we've had a lot more folks on, and we've been diving in a bit more on machine learning and some of the capabilities. And then you realize it itself is a fairly small community; it's very much in the research phase compared to the broad computing frameworks. So when you start to show them these capabilities and you hear about how they're trying to solve the problem, they're solving it in papers.

You know, it's still stuff that's being submitted in the research phase. And yet they can take that blackboard or whiteboard theory and literally put it into place now, because, like you said, Akida has this capability. Like, OK, cool, this is no longer just the theory of what self-driving cars are going to be like. All right, let me show you. The other thing is, and this happened the other day, my guys were really excited.

They called me into a room, like, Rob, you've got to take a look at this. I highlighted earlier not being dependent upon the cloud to do your processing. And there's a company out there that has a phone that most of us have, or a lot of people have, that has a software update coming out. And they talked about the fact that if you put your phone on airplane mode, you now can run their assistant on device without being dependent on the cloud, providing you more privacy and security.

And my guys are jumping up and down: when we do that, we do it in hardware. I mean, we're on to something here. And we are, we are. And that's the other thing that you'll see. There are a lot of applications in software, and for those that are listening, there's a lot of great functionality in software, but it's in software. And what that means is it consumes a lot of energy, it consumes a lot of power, it drains your battery, because if you're solving it in software, I've got bad news for you about what the power consumption levels are.

Exactly. So when we talk about edge devices, and we talk about all the stuff that's dependent on being independent and functioning without consuming a lot of power, software is a problem. So at the end of the day, we look at this, and we look at what this leading technology company is doing, and we say to ourselves, we're doing the right things. We are doing the right things. We're just going to continue to march down our path and execute.

And like I said before, it's the tip of the iceberg.

It is, and it's amazing what we can do, and the potential is fantastic. First of all, thank you, Rob. This has been really fun. For folks that want to reach you, to chat further and learn more about BrainChip, like I said, we'll have links to the site, and we'll put the link to the YouTube. Thank you. And what's the best way for folks to get in contact?

Yeah. So if you go to www.brainchipinc.com, there is a link for information. You can fill that out and it will go directly to our sales team; most likely I'll get a look at it as well. You can reference that you were part of DiscoPosse and I will personally respond to you and start a dialog. You can also go through the sales link at brainchipinc.com. You can contact us through LinkedIn, and you can contact us through Twitter at BrainChip underscore inc.

But for those who are curious, I really, really want to emphasize that we're using YouTube as our platform for education and understanding of the company and who we are. So subscribe to our YouTube channel, BrainChip Inc. That way you'll be updated every single time we have a new video, a new presentation. And when this is uploaded, I'm sure it will be on our YouTube channel as well. But those are the best ways to reach us.

And we'd love to talk to you and educate you more about BrainChip and what we're doing. But, Eric, thank you very much for having me. This conversation was great. Thank you.


Chris Wexler is one of the Founders and CEO of Krunam, the best in class image and video classifier of Child Sexual Abuse Materials (CSAM).

Krunam is in the business of removing digital toxic waste from the internet using AI to identify CSAM and other indicative content to improve and speed content moderation.  Krunam’s technology is already in use by law enforcement and is now moving into the private sector.

We explore the seemingly intractable problem of CSAM, how Chris and the team at Krunam are working to solve it, plus the incredible story behind the name of the company. This chat covers everything from the technology to the ethics of the challenge. Thank you Chris!

Check out Krunam here: https://krunam.co/ 

Follow Chris on LinkedIn here: https://www.linkedin.com/in/chriswexler/ 


Slater Victoroff is the Founder and CTO of Indico, an enterprise AI solution for unstructured content that emphasizes document understanding. He’s been building machine learning solutions for startups, governments, and Fortune 100 companies for the past seven years and is a frequent speaker at AI conferences.

What is very interesting is that Indico’s framework requires 1000x less data than traditional machine learning techniques, and they regularly beat the likes of AWS, Google, Microsoft, and IBM in head-to-head bake-offs. 

Slater and I discuss AI, AGI, how to relate these topics to newcomers, how Machine Learning and ethics come together, and also how MMA relates to how he tackles startups and team building.

This really is like a lesson in AI and Machine Learning and really taps into the subject for both newcomers and veterans of the field. 

Check out Indico here: https://indico.io/

Connect with Slater on Twitter here: https://twitter.com/sl8rv 

Connect with Slater on LinkedIn here: https://www.linkedin.com/in/slatervictoroff/ 


Dan Burcaw has founded companies on the forefront of profound technology waves: open source software, the smartphone, and cloud computing. He describes himself first as a serial entrepreneur; a repeat startup founder and CEO with his latest company being Nami ML.

We explore a deep discussion around how leveraging services and systems to let your teams do what matters is both powerful in business and life.  We also talk about how Dan has created and operated his companies and some great personal insights into being a leader.

Check out Nami ML here:  https://nami.ml 



Luis Ceze is a computer architect and co-founder and CEO of OctoML. He does research at the intersection of computer systems architecture, programming languages, machine learning, and biology.

OctoML is doing some very cool things around democratizing ML and transforming how ML models are optimized and made secure for deployment. Luis shares a lot of great info on the foundations of ML, the ethics of data, and how he builds a team.

Check out OctoML online at https://octoml.ai


Watch the Video Version of our podcast with Luis Ceze of OctoML

TRANSCRIPT

Oh, yeah. Welcome, everybody, to the DiscoPosse podcast. My name is Eric Wright and I'll be your host. And this is a really fun episode: if you're digging machine learning, then look no further.

You’re in for a great conversation. Before we get started, though, I want to make sure I give a huge shout out to all the great supporters and fans and friends of the show. This episode is brought to you by our favorite and good friends over at Veeam software.

This is everything you need for your data protection needs. I trust this company with my data, my identity, my goodness, whether it's in the cloud, whether it's on premises, whether you're using cloud native and the new stuff they're doing with their recent purchase of a company called Kasten and that integration. Really cool stuff.

Whether you want to automate and orchestrate the entire kit from end to end for full business continuity and disaster recovery with Veeam Availability Orchestrator, you name it, Veeam's got all sorts of goodness for you. If you want to go check it out, you can easily go to vee.am/discoposse, and also let us know that you came from ol' DiscoPosse's podcast.

It's kind of cool, but the Veeam family, it's hard to say, the Veeam family are extremely cool in that they've been great supporters. I love the platform, I love the team. And in fact, if you go back in our archives, you can hear Danny Allan, who's a fellow Canadian, fellow cyclist, and also a really fantastic human, who's the CTO over at Veeam. I was really lucky to have Danny on. But at any rate, go check it out.

Please do.

I definitely believe in their platform and their product. Go to vee.am/discoposse.

This is also brought to you by the four step guide to delivering extraordinary software demos that win deals.

This is something that I decided to build myself, because what I found is that I've been continuously involved in sales processes, and in listening to folks that are struggling with being able to connect with people, whether it's in product marketing, product management, sales, or technical sales. So what I did was take all the lessons that I've captured myself and from my peers and compress them into a very easy to consume, concise book. It's called The Four Step Guide to Delivering Extraordinary Software Demos.

It teaches you how to demo, how to listen, how to connect, how to engage, and ultimately how to get to problem-solving in the way you show your platform. Super cool.

Plus there's an audiobook, a course, and I do regular AMAs for folks that buy the package. So go to velocityclosing.com and you can actually download the whole kit right out of the gate today.

With that, we're going to jump right into the episode. This is Luis Ceze, who's a fantastic person who I was so happy to have on. He's the CEO and co-founder of OctoML.

Not only have they got a really cool thing they call the Octomizer, which is a fantastic name for a product, but they're doing some really neat stuff around democratizing and making high-performing and secure machine learning models.

Really, really cool. So check it out. Plus, Luis talks a lot about building the business, the educational impact, where technology is going, and so much cool stuff.

Anyways, I hope you enjoy the show as much as I did. Hi, this is Luis. I am a co-founder and CEO at OctoML and a professor of computer science at the University of Washington, and you're listening to the DiscoPosse podcast.

So this is fantastic. I do want to very quickly introduce you, as your company is doing some really neat stuff.

And of course, I say this as a precursor to what you're going to tell us, for the people that are listening: we hear ML/AI and it becomes this wash, where it's assumed, as they always say, that no one really knows what's actually going on.

I've dug in and I'm excited about what you and the team are doing. So I wanted to lay that out first.

You really are solving a very genuine and interesting challenge. And I can't wait to figure out how you got to solve these problems.

So anyways, take it away.

Let's introduce you to the audience and talk about where you're from and how you got to begin the OctoML story.

That sounds good. Yeah. So I have a technical background. Most of my, I guess, intellectually active life has been in computer architecture, programming languages, and compilers. I did my PhD at the University of Illinois. I spent time at IBM Research before then, working on large-scale supercomputers like Blue Gene, primarily applied to life sciences problems. And then the University of Washington, where I've been for almost 14 years now; it's kind of crazy to think about my research career.

There, the focus has been on what we call the intersection of new applications, new kinds of hardware, and everything in between: compilers, programming languages, and so on. About five or six years ago, we started looking at the problem, well, the opportunity, I would say, based on the observation that machine learning is getting popular super fast. Right? Because machine learning allows us to solve interesting problems, things that we don't know how to write direct code for.

Like, for example, if you think about how you would write an algorithm to find cats in a photograph, it's really hard to write the direct code for that. But machine learning allows us to infer a program, learn a model, from data and examples. This approach has proven to be really powerful, and machine learning is permeating every single application we use today. But anyway, six years or so ago, we started thinking about, well, there's a variety of machine learning models that people care about, for computer vision, natural language processing, time-series predictions, and so on, and a variety of hardware targets that you want to run these models on.

This includes CPUs, GPUs, and then accelerators, FPGAs, DSPs, all sorts of compute engines that have been growing really fast. So you have this interesting cross product.

If you have lots of models and lots of hardware, how do you actually get them to run well where you need them to run? That includes the cloud, the edge, embedded devices, smart cameras, all of these things. Right. And one thing that's interesting to note in this context is that machine learning models, as computer programs, are very sensitive to performance: they're compute hungry, memory hungry, and bandwidth hungry.

They need lots of data and lots of compute. Therefore, making them perform the way you want them to perform, to run fast enough and use a reasonable amount of energy when executed, requires quite a bit of performance tuning. Right. That means that if you look at how machine learning models are deployed today, they're highly dependent on hardware-vendor-specific software stacks, like Nvidia with their GPUs and the CUDA stack.

You know, Arm has its Compute Library, Intel has its own, and other hardware vendors have their own software stacks in general. This is not ideal, because it means that somebody who wants to deploy machine learning models needs to understand ahead of time where they're going to deploy and how, and use custom tools that typically aren't super easy to use. And there might not even be a software stack for the hardware that you care about

that works well. Right. So, long story short, the research that we started roughly six years ago was to try and create a common layer that maps from the high-level frameworks that data scientists use, like TensorFlow, PyTorch, and so on, or NumPy, and bridges that to hardware targets in an automatic way. So you don't have to worry about how you're going to deploy: create a clean, open, uniform layer that automates the process of getting your models from data scientists to production.
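As a hedged illustration (this is not OctoML's or TVM's actual code; every name below is invented for the sketch), the "common layer" idea can be thought of as one shared intermediate form plus a registry of per-target code generators, so M frameworks and N hardware targets need M + N integrations instead of M times N:

```python
# Toy sketch of a "common layer" in the spirit of what Luis describes.
# Real systems like Apache TVM use a full compiler IR; the names here
# (import_model, BACKENDS, compile_model) are invented for illustration.

# Frontend: every framework importer produces the same intermediate form.
def import_model(framework, model):
    # In a real system this translates TensorFlow, PyTorch, ONNX, etc.
    # into a shared IR. Here we just tag the model.
    return {"ir": f"{framework}-graph", "ops": model}

# Backend registry: each hardware target registers one code generator.
BACKENDS = {}

def register_backend(target):
    def wrap(fn):
        BACKENDS[target] = fn
        return fn
    return wrap

@register_backend("cpu")
def cpu_codegen(ir):
    return f"cpu-executable({ir['ir']})"

@register_backend("gpu")
def gpu_codegen(ir):
    return f"gpu-executable({ir['ir']})"

def compile_model(framework, model, target):
    # M frameworks and N targets meet in the middle: M + N integrations
    # instead of one bespoke path per framework/hardware pair.
    ir = import_model(framework, model)
    return BACKENDS[target](ir)

print(compile_model("tensorflow", ["conv", "relu"], "gpu"))
```

Adding a new framework or a new chip then means writing one importer or one codegen, and every existing pairing comes for free.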

Well, this seems like a good idea, and people would agree, but there are a lot of challenges there, right? Because of the way machine learning models are deployed today: they rely on hand-tuned, low-level optimizations of code. That really means understanding the model, understanding the hardware, and tuning the low-level code to make sure you make the most out of that hardware. Right. So that takes a tremendous amount of work.

It's not sustainable. So the research question that we started exploring was: can we use machine learning to optimize that process? Essentially, use machine learning to make machine learning faster on your chosen hardware. And that was how the Tensor Virtual Machine project was born. We started this project five or six years ago, and fast forward to today: it's a top-level Apache Software Foundation project called Apache TVM, and it has been adopted by all of the major players in AI and ML, including Amazon, Microsoft, Facebook, and so on.

It's supported by all of the major hardware vendors. It is actually the de facto open standard for deploying models on a bunch of hardware targets, and it's open source, right? So Arm, for example, adopted TVM as their official software stack. AMD is working with OctoML on support for AMD CPUs and GPUs on Apache TVM, and other companies like Xilinx, which makes FPGAs, and a bunch of other nascent hardware companies are using Apache TVM as their preferred software stack.

And just one final sentence, and I know this has been going on a while, but I just thought, you know, there's no rapid way through this.

This is super important for understanding how we even got to the start line, which is before where we are today.

Right, right. Yeah. So anyway, TVM has been adopted both by end users and by hardware vendors. The way to think about TVM in one sentence is that it's a compiler and runtime system that forms this common layer across all sorts of hardware. Think of it as a 21st-century operating system for machine learning models that runs on all different hardware. Right. So that's Apache TVM. It has almost 500 contributors from all over the world and has been adopted, as I said, by all the major players in the industry.

And we formed OctoML about a year and a half ago to continue investing in TVM. All of the core people around Apache TVM are co-founders of the company: three PhDs from the University of Washington, plus another co-founder, Jason Knight, who was head of software products at Intel and left Intel at the time to join the company. So OctoML today is about 40 people. Our mission is to build this machine learning acceleration platform to enable anyone, in a very automatic way, to get their models deployed on the hardware that they want, without having to fiddle with different software stacks or tune low-level code to deploy a model.

Really, we are about automation and democratizing access to efficient machine learning, because the tools today require quite a bit of rare skills. And I think that's where we really want to begin: abstractions are generally done because they allow for, obviously, diversity of platforms above and below the line.

Wherever that abstraction layer is, the appropriate abstraction is a fantastic place where a platform begins.

Then even further up is how you organize a commercial entity that can create additional value.

Even beyond that, it's really amazing, especially in a niche area like this. You look at the folks that are contributing to TVM, who are obviously well down the road. People think ML is coming, but it's already here anyway.

But beyond the abstraction, there's the optimization, and we'll talk about the optimal approach to it. Maybe give a sense of what a non-optimized machine learning model does relative to an optimized one, because I think it's hard for people that don't get it to understand this.

Yeah, great question.

I love that question, Eric. So the unoptimized version typically means you take a machine learning model and run it through, say, TensorFlow's default deployment path, or PyTorch, and you choose a CPU or GPU. And most of the time, what you get is not deployment ready, because it's not fast enough, or it uses too much memory, or it doesn't make the most of the hardware, or you don't get the throughput that you want.

Or, if you're deploying in the cloud, it's too expensive because it uses a lot of compute. Now, if you run that through TVM, what it will do is take that model and generate an executable that's tuned for the specific hardware target you're going to deploy it on. It essentially generates custom code, using its machine learning magic, which we can get into if you want, to find the best way of compiling your model onto your hardware target, to make the most out of your hardware resources.

And the performance gains can be anywhere from 2 or 3x all the way to 30 or 40x. Right. If you look at our conference, for example, we've held one each December for the past three years, there are cases of folks showing up to 85x better performance. And we talk about anything above 10x as not a nice-to-have but an enabler. If you make something 5 or 10x better, you enable something that wasn't possible before, because it was just too slow or too costly.

And that's the level of performance gain that we're talking about here. This can translate into enabling somebody who before was too slow to deploy; now you can deploy. It reduces costs in the cloud, because 10x faster means 10x cheaper to run in the cloud, and so on.

This also helps answer the myth, I would believe, that there is one ideal hardware-specific machine learning unit.

Well, there are obviously hardware-specific iterations.

Each model, each data set, based on scale, size, and use: there are a lot of factors, so even the most perfectly designed physical unit with a broad set of uses, whatever the right combination of things may be, may not be appropriate for every model.

Right.

So this is beyond, this is not like there's a really good gaming laptop and a really good, you know... With machine learning, it doesn't take long before you get to a scale where even a dedicated machine learning node is not optimized for your particular model.

Absolutely. Another way of saying that, too, is that even if you have fantastic hardware and numerous resources, if you don't have good software to make use of it, it's just no good for you. The point is, it takes quite a bit of work to massage your model to make the most out of a hardware target. Right. And it doesn't mean that all hardware targets will be appropriate for all models, but by and large, it depends on fairly low-level, sophisticated engineering to get there.

And that's what we're all about: automating all of that. So, you...

You have me curious, and I'm going to ask you to go down the rabbit hole right away. How do you possibly, at a code level, through software, tune models on the fly based on hardware?

I'm lighting up at the idea of getting technical here, because I would love for folks to really get a sense of

where those challenges are being solved.

Great question. Absolutely. So let me just start with "once upon a time". No, I'm not going to be that long. But, you know, fundamentally, machine learning models, by and large, are a sequence of linear algebra operations. Think of it as multiplying multidimensional data structures: matrix-vector multiplication, matrix-matrix multiplication, and sometimes more than two dimensions. Imagine a three-dimensional matrix, called a tensor.

Right. So, in general, a tensor is a generalization of that. It's a lot of linear algebra operations. Now, these are very performance sensitive, because they depend on how you lay out the data structure in memory, which affects your memory and cache behavior, and on which instructions you use in your processor, because different processors have different instructions that are more appropriate than others. For instance, instead of doing a scalar multiply, where you multiply one number by a single number, you could use a vector instruction, which applies to whole vectors at a time.
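To make that concrete, here is a toy sketch in plain Python (not TVM code; sizes and structure invented for illustration) of two "schedules" for the same matrix multiply. Both produce identical results, but their loop orders, and therefore their memory-access patterns, differ, which is exactly the kind of variant a compiler searches over:

```python
# Two loop orders ("schedules") for the same matrix multiply.
# Same math, different traversal of memory.
import random
import time

N = 60
A = [[random.random() for _ in range(N)] for _ in range(N)]
B = [[random.random() for _ in range(N)] for _ in range(N)]

def matmul_ijk(A, B):
    # Classic i-j-k order: the inner loop strides down B's columns.
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C

def matmul_ikj(A, B):
    # i-k-j order: the inner loop walks B's rows contiguously,
    # which tends to be friendlier to caches on real hardware.
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            a = A[i][k]
            for j in range(n):
                C[i][j] += a * B[k][j]
    return C

for fn in (matmul_ijk, matmul_ikj):
    t0 = time.perf_counter()
    fn(A, B)
    print(fn.__name__, f"{time.perf_counter() - t0:.4f}s")
```

Multiply this by choices of tiling, vectorization, unrolling, and parallelization, and the number of valid ways to compile one kernel explodes.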

And there are so many ways, literally millions, potentially billions of ways, of compiling the same program onto the same hardware. But among the billions of possibilities, some are vastly faster than others. So what you have to do is search, right? Given a program, which is your ML model, and given the hardware target, and the billions of ways in which you can compile it, how do you pick the fastest one? OK, so now, to answer your question directly: how do we use ML for that search?

Well, the brute force, and I'd say less smart, way of doing this would be to try all the billions of possibilities. But the problem is that you don't have time. Imagine making a variant of the code, compiling it, running it; even if each one takes just a second, you're talking about centuries of compute to find the best program. Where ML comes into play is that, as part of how TVM operates, when you bring up a new hardware target, TVM runs a bunch of little experiments that build a machine learning model of how the hardware itself behaves.

This machine learning model is then used to do a very fast search among all the possibilities in which you could compile your model to the hardware target: among all of those possibilities, which one is likely to be the fastest? And that can be vastly faster, think of it as a hundred million times faster, than trying each one of them. So now you enable this ability of navigating the space of configurations in ways in which you can optimize the model and then choose the best one.
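Here is a minimal, self-contained sketch of that idea, with the hardware replaced by a hidden cost function and the learned cost model reduced to a nearest-neighbour lookup. Real systems such as Apache TVM's auto-tuner use actual hardware measurements and far richer learned models; everything below is invented for illustration:

```python
# ML-guided search over compile configurations, in miniature.
import random
random.seed(0)

# The search space: every (tile_size, unroll_factor) pair is one way
# of compiling the same kernel.
space = [(t, u) for t in (1, 2, 4, 8, 16, 32) for u in (1, 2, 4, 8)]

def measure(cfg):
    # Stand-in for actually running the compiled kernel on hardware
    # (the expensive step). Lower is better; noise simulates jitter.
    t, u = cfg
    return abs(t - 8) * 0.5 + abs(u - 4) * 0.3 + random.random() * 0.05

# Step 1: run a few real experiments to learn how the "hardware" behaves.
sample = random.sample(space, 8)
observed = {cfg: measure(cfg) for cfg in sample}

def predict(cfg):
    # Tiny learned cost model: 1-nearest-neighbour over observed configs.
    t, u = cfg
    nearest = min(observed, key=lambda c: (c[0] - t) ** 2 + (c[1] - u) ** 2)
    return observed[nearest]

# Step 2: rank the whole space with the cheap model, then spend real
# measurements only on the most promising candidates.
ranked = sorted(space, key=predict)
best = min(ranked[:5], key=measure)
print("best config found:", best)
```

The point of the sketch is the shape of the loop: a handful of expensive measurements train a cheap predictor, and the predictor prunes the space so only a few candidates ever need to be measured for real.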

OK, and a machine learning model is a composition of these operations. So we apply this to every layer of the model, then we see how the layers compose and run that through the prediction. And then we validate: are we doing a good job? The way we do that is by doing the full compilation, running performance tests, and comparing: are we doing better? Yes? Then we keep the search going. Does that give you a general idea of how we do it?

It does.

And this is the interesting challenge that we have with any long-running process. Just think of traditional batch computing, where folks would leave a massive, long batch job running.

And at some point, you know, for folks who remember the days of the overnight jobs: they'd have some four-hour batch that would run, and you're five hours in and something's wrong. And there's the difficulty of assessing:

if I stop now, optimize, correct the code, do something, and then rerun, is it more worthwhile than just letting it run out, even though it's going to take twice as long as I expected? That's a relative calculation that I think a lot of folks would remember. Even if it's a five-minute script: if it takes five minutes when it should take 30 seconds, you know what I mean.

But at the scale you're talking about: number one, the initial problem, where we're going to go and use a model against a massive data set, is going to take

potentially hours, days, whatever; it's going to be significant. But then to run that scenario repeatedly before triggering it, effectively to find the most optimal configuration in which to execute, that's the real feat, right?

Yes, and you could run those in parallel, running parallel simulation modeling.

Anybody would think, oh, of course you're going to use machine learning. Well, you've now got an inception problem, right? In effect, you have to do something that's incredibly complex to solve an even more complex problem. It seems untenable for people to imagine that this could be done. But that is, in fact, how we do it: we use machine learning to make machine learning faster.

So now, actually, let me go ahead and answer the question of what we offer as a company. What is the commercial story here? Right. TVM is open source; anyone can just go to the Apache TVM GitHub repo, download the code, and run it. But it takes some effort to set up, because you have to set up the hardware targets, and then you have to collect these machine learning models that predict how the hardware behaves.

And, you know, it is a sophisticated tool that works really well, but it does require quite a bit of lifting to get going in the context of an end user. Right. Well, at OctoML we've built a platform called the Octomizer, which is a fully hosted software-as-a-service offering on top of TVM that automates the whole thing and has a really nice graphical user interface.

You can upload models, choose your hardware targets, click the magic button, optimize, and then a few hours, maybe a day later, you get an executable that's deeply optimized for your hardware target of choice. The way this is different from the experience of using TVM directly, as I said, is that it's much easier. You don't have to install anything; there's no code required. You literally upload the model, choose the target, and download the result, or you can use an API.

Also, the Octomizer knows the models and has a preset set of hardware targets, with the machine learning models it's built on ready to go for you to use, so you don't have to go and collect them yourself. You can save days of setup using the Octomizer. And this is what I think is incredible.

I spoke with somebody very recently, and we were just enthralled with this idea of where we are today. And now, in 2021 as we record this, it's like:

the accessibility of both models and training data. If you wanted to try and get into the business of machine learning before, even just to dabble with it, you had to get the hardware and have some data. The 101 level of machine learning was very low level, very simplified, and there was no access to go beyond it and really test things. Now, because of what you've got with the Octomizer, like you said, you're shipping stuff that's there and ready to go, so those first, incredibly challenging steps aren't needed anymore.

And this is what I want to impress upon people: there's effectively no reason why you wouldn't just get started, because it's been done for you and it's accessible to you now. It's a wondrous time where we can do these things. Because, for all of the things that people are worried about: one, "I don't understand complex mathematics, so how can I do machine learning?" Well, it's not necessarily about that.

Not exactly. It's about abstracting those away. Right. And secondarily, how do I learn to trust what machine learning does? The only way to do it is to get in and see it. It's weird, because machine learning has this really odd thing.

Even when we talk about AI, sometimes I describe it as like the scene from The Matrix, when Neo visits the Oracle and she says, "don't worry about the vase." And he says, "what vase?" And he turns around and knocks the vase off the table, and she says what will really bake your noodle is whether you would have broken it if she hadn't said anything.

And when we explain machine learning, like what you get, like you said: how do you find a picture of a cat? How do you tell the difference between a blueberry muffin and a Pomeranian? There are all of these things where people don't trust the outcome because they saw a meme about it one day. But you can dive in. You can test it out. You can put data through it. You can see outputs.

It's there today because of what you and the team and the community are doing around this stuff, which is pretty amazing, right?

Yeah. And I want to pull on that thread for just a minute, on trusting machine learning models. There's a whole subfield of machine learning about explainable AI, or explainable machine learning models, to get people to trust them more. But I would even start by asking how we trust software at all. Let's forget about machine learning; let's just think about software. The way we trust it is by saying: we've put this much time into testing it, and you have some confidence that it's likely to work in the scenarios your users care about.

We do not do formal verification of all software today. You don't formally verify Excel or Oracle or Microsoft Word. Basically, you test it extensively, and then you have confidence that it behaves the way you expect it to behave, and then you put a checkmark on it and ship it. Machine learning models are that way too.

You have a training set, you have a test set; you train it, you test it, and then there are all sorts of ways to get more serious about the testing. But you know that it's going to work within the set of inputs that it was certified for, tested for. It works well, right? So then you could go into a huge, fun discussion that we could have at some point,
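As a minimal sketch of that train-set/test-set discipline, using toy synthetic data and a trivial threshold "model" rather than any real ML library:

```python
# Toy sketch of the train/test discipline described above: fit on one
# slice of the data, then trust the model only as far as its accuracy
# on held-out data it never saw during training.
import random
random.seed(1)

# Synthetic data: inputs above 0.5 should be labelled 1.
data = [random.random() for _ in range(200)]
labeled = [(x, 1 if x > 0.5 else 0) for x in data]
train, test = labeled[:150], labeled[150:]

def fit(train):
    # "Training": pick the threshold that scores best on the train set.
    best_t, best_acc = 0.0, -1.0
    for t in [i / 20 for i in range(21)]:
        acc = sum((x > t) == bool(y) for x, y in train) / len(train)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

threshold = fit(train)

# "Testing": measure accuracy only on the held-out examples.
test_acc = sum((x > threshold) == bool(y) for x, y in test) / len(test)
print(f"threshold={threshold:.2f}, held-out accuracy={test_acc:.2%}")
```

The held-out accuracy is the checkmark Luis describes: evidence the model behaves as expected on inputs like the ones it was tested on, not a formal proof about all inputs.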

probably not now, on how you explain to humans what it is in a machine learning model that would make them trust it better.

Right. And that might involve compromising performance. You might want to choose a model that's not as fast, but at least when you look at it internally, you can explain to humans how it works. That might be useful for, say, medical diagnostics, where you want a doctor to see: this generally looks right, here's the decision tree. Right. So we can help with those cases too, by integrating the Octomizer, because if you choose to use a model that's not as fast just because it's potentially more trustworthy, we can help you recoup performance by giving you a highly optimized version of it.

And this is where, I would say, the people that realize the difficulty they're facing ask: how do we get better at machine learning? You brought up the perfect point. We just broadly trust software, as if it's linear in its ability to scale. We were like, oh, I can almost run as fast as the machine. We kind of grew up with it.

So we don't distrust it as much. We don't necessarily trust it, but we don't distrust it. With machine learning and quantum and the idea of being able to scale far beyond human capability, there's this really odd case where the distrust is greater than the trust. And even though there's no real foundation for it, effectively, a lot of this is the core fundamentals of behavioral psychology: the way that we place bets, the way we think about outcomes versus efforts.

It is really funny, or peculiar, I should say, to see how people behave. But yet, when they see the outcomes, like you said, they're like: oh, OK, now that's fine.

It makes sense. But when you go one step further, especially with the folks that are going to be your customers, the folks that you're talking to: they're further along, where they know the risks, and the benefits outweigh them. Yes, the benefits outweigh the risks. Right, exactly. And also, trust is the kind of property, the kind of feeling, that takes a while to build but is very easy to lose.

Right. It takes a lot of work to build trust. It means investing, living with it for a while as it works really well. But then you make a small change, because models evolve fast, and that one breaks, and it makes you lose some trust in it. But, you know, that's just part of how it is.

And I feel like, given the strides made in machine learning research in getting models to be more trustworthy and more explainable, together with all of the machine learning systems work, which is what we focus on, making these models perform and run well in the real world, we're very quickly going to trust them just as much as we trust software. And for things that are really transformational to our lives, like self-driving cars, like automated diagnostics, like using AI to design drugs and therapies, the progress and the impact on human life are so far beyond the risks it can cause, in my opinion.

You know, this may be philosophical, but I do think that in this case the benefits far outweigh the risks.

So I'd be curious, especially because you're obviously very close to it. You're doing this in academia as well as in business, so you're really tackling it on two streams, which is always amazing.

And I think that's where a lot of this stuff comes from. In fact, a lot of amazing technology startups have been founded out of academia and made their way into commercial business, and then those folks maybe get into venture capital. It's neat to see this progression.

But, you know, there are very few people that most people know,

and I waver on the descriptors of "most" or "many", who they could look to to get that first understanding of the impact and importance of machine learning on society. Obviously, one that I know off the top of my head is Cassie Kozyrkov.

She's with Google, and a fantastic person who truly does a lot to share the human side of the value of machine learning, and it's neat to see those stories. So I'm curious, Luis: in your peer group, and yourself included, how do you get people involved and interested in the potential that we have as a society because of machine learning?

Yeah, great question. The way I think we get people interested and excited is just by continuing to show the kinds of problems we can solve, the kinds of new applications we can build with machine learning. Right. Let me take a recent example: all of the progress going on with these large language models, based on GPT-3, for example. The ability to summarize text is fantastic; generating new text to help you draft is great. These technologies

just seem like magic. They work really, really well. And I think that has the potential to amplify our ability to understand large bodies of text.

For example, some of my colleagues and friends at AI2 here in Seattle have been working on tools that help one understand whole bodies of knowledge in a specific field. They've done this for COVID recently, for example. I think these are really amazing applications that can capture the imagination and have a direct impact right now, and that really gets people more excited about it. I'm not sure that's what you were asking. No, I think it's all about showing it, exactly.

So that's one of them. The other one: I know that we're still far away from fully autonomous vehicles, but just look at the kinds of things in ever more accessible electric vehicles, from big players like Tesla, for example. A Model 3 can do real-time computer vision and build a 3D model of the world around it, and you see the cars and the people crossing the streets, and this is happening all the time.

It's like: oh, this is a model the car is actually working from. As people get exposed to this, they get more and more engaged and realize how exciting this is. So think about the applications that it enables.

And then a final one. It's more academic but becoming more top of mind today, and I find it particularly exciting; it happens to relate to one of my personal intellectual passions, molecular biology and the life sciences. I think that nature is a boundless source of two things. First, mechanisms that we can use, and molecules that can be used to do useful things. And second, all sorts of interesting problems where you can use AI and ML to understand how nature works.

And it has tremendous impact on understanding life, understanding disease, understanding new therapies, and so on. I think it's fair to say that the strides we've made in understanding gene regulatory networks and a lot of other life sciences processes would not have been possible without machine learning.

Right. So this has an incredible effect today: how we can design a vaccine super fast, how we can test it super fast, how we can do DNA sequencing of different people and understand how it correlates with what we observe. This all boils down to being enabled by computational processes, largely based on machine learning.

And that's one of the most... I don't have the numbers handy, but I know it's a good example to use. As far as the economies of time and scale we've achieved: look at sequencing DNA, both the physical exertion required to do it, the hardware, the time, and the cost, 10 or 20 years ago. It doesn't take long to go back and see.

It was thousands of dollars, and an enormous amount of time, versus now it's pennies on the dollar, in effect, relative to what the cost was not too many years ago.

Absolutely, yeah. And I should mention, one of the research areas I'm still active in is essentially using DNA for data storage, which involves writing DNA and reading DNA, sequencing it. This relies on the progress of DNA technology, so I watch these trends very closely. Just to put numbers on it: the first human genome sequence, which was a huge landmark a couple of decades ago, actually cost over a billion dollars.

And today you can do a full genome sequencing for under a thousand dollars, which is literally a million-fold decrease in cost. And this is all, by the way, enabled not only by a better understanding of biology; of course, there was the genius idea of next-generation sequencing, but from there to today, a lot of it is really advances in computing infrastructure, because it's very compute intensive, and advances in imaging technologies and optics.

Right. And advances in machine learning for decoding very faint signals to read the letters in the DNA sequences. Yeah, it all rides on the back of Moore's Law plus, you know, computing. That's right.

Well, it's interesting to see, as we come through, there's a beautiful sort of readiness that's arrived across all of these criteria. Right.

Like you said: computational power, the scientific understanding, all of these things. They move together, effectively like horses racing beside each other.

And when one crosses the line, the rest cross very shortly after, because one effectively carries the others. And there's this merger of things that has to occur to get from there to an exponential increase in capabilities.

And we've seen so much recently. And we as humans far overuse the phrase "exponential".

Right, people like us do. And literally, I talked with Jo Bhakdi, the founder of a company called Quantgene.

And we talked a lot about that. That's their whole thing: they're using quantum computing and genome sequencing to find better ways to detect every kind of cancer. He says that 10 or 20 years ago, you would have a team of scientists, an entire research area, focused solely on researching and mapping one type of cancer. And now, because of the ability in quantum computing, the ability we have in hardware, software, and people and understanding, they can seek every possible type of cancer collectively through the research they're doing.

And this is really first principles. This is exponential growth in what we can do as an outcome, because of the technology that we've enabled.

So what you've done, and what you and the industry and your peer group are doing, is using first principles to set the stage for an unlimited amount of new first-principles thinking that's going to do fantastic things. Yeah, it's a great point. And the way I'd tie this conversation back to what OctoML does: there are a lot of problems, or opportunities, today, specifically in life sciences. For example, if you're doing deep learning over genomic data, without significant optimization it would be beyond the reach of most people. We're talking about problems that could literally take millions of dollars worth of compute cycles in cloud services.

If you make that 50x faster, a problem that costs millions of dollars drops into the tens or hundreds of thousands of dollars, which is something that now becomes feasible. And that's also something we're very excited about: what we are doing not only makes the applications we run today more accessible, faster, and more responsive, but the degree of optimization we offer could also enable things that would be beyond the reach of many today, in application areas that are more custom. Life sciences, I think, is one great example.
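
The economics here are simple arithmetic: a fixed speedup divides a compute-bound cloud bill by the same factor. A toy sketch (the dollar figure and the 50x factor are illustrative, not OctoML benchmarks):

```python
# Toy cost model: if the cloud bill scales with compute time, a 50x
# speedup divides the bill by 50. All figures are illustrative.

def optimized_cost(baseline_usd: float, speedup: float) -> float:
    """Cost after applying a uniform speedup to a compute-bound job."""
    return baseline_usd / speedup

baseline = 5_000_000  # e.g. $5M of compute for one large genomics run
print(optimized_cost(baseline, 50))  # 100000.0, a feasible budget
```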

So, yeah. And I think this is the fantastic opportunity that you now have for your current and future customers: it's no longer about baseline achievement. We can immediately begin to think about optimization, and that wasn't accessible before. It used to just be a matter of, can we do it? And now it's, can we do this, and are we doing it in the most effective and optimized manner?

Right. Yeah. And that's often necessary to actually make it work. Let me give you an example, without disclosing anything sensitive. We've been working with customers that deploy ML at scale on both the edge and the cloud. On the edge side, think of a machine learning model that helps you understand a scene so you can replace objects in real time, say, for video chat.

And then you have that app running on all sorts of environments and devices: different types of laptops, PCs, tablets, phones, and so on. Once you have a model like that, what you have to do today to deploy it is go and optimize it every single time, to make sure it runs fast enough on device A, on device B, on device C, and so on. Doing that different tuning by hand is just insurmountable, but automating all of it, which is what we do with the optimizer, is something that enables the evolution of these applications.
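
The per-device tuning problem described here can be thought of as a search: for each target device, benchmark candidate model configurations and keep the fastest. Below is a minimal sketch with hard-coded stand-in latencies; the device names, config names, and `measure_latency` stub are hypothetical, not OctoML's actual API.

```python
# Sketch: pick the fastest model configuration per target device.
# Latencies are fabricated stand-ins for real on-device benchmarks.

SIMULATED_LATENCY_MS = {
    ("phone", "fp32"): 120.0, ("phone", "int8"): 45.0,
    ("laptop", "fp32"): 60.0, ("laptop", "int8"): 70.0,  # int8 loses here
    ("tablet", "fp32"): 150.0, ("tablet", "int8"): 80.0,
}

def measure_latency(device: str, config: str) -> float:
    """Stand-in for deploying the model on `device` and timing it."""
    return SIMULATED_LATENCY_MS[(device, config)]

def tune(devices, configs):
    """Return the lowest-latency config for each device."""
    return {d: min(configs, key=lambda c: measure_latency(d, c))
            for d in devices}

best = tune(["phone", "laptop", "tablet"], ["fp32", "int8"])
print(best)  # note the winning config differs per device
```

The point of automating this is that the loop above has to be rerun for every new device and every model revision, which is exactly the kind of repetitive search machines do better than engineers.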

And on the cloud side, if you're doing things like computer vision over large collections of images or video at large scale, that can cost an incredible amount of money if you don't optimize. It means that until you hit a certain cost target, you can't do it at all. Even for companies that have deep pockets, what we're talking about here is that significant.

So it becomes an interesting conundrum: in order to test that your model is effective, how long it's going to take to run, and what the optimization opportunities may be, you run it against your data set. But running it repeatedly against the same data set actually goes counter to the value. It's dangerous if you keep doing it: you're not going to get the results you expect, and it may skew some results if you send exactly the same data through exactly the same model over and over again.
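
One standard guard against this evaluation-reuse problem is to carve out a hold-out set that is touched exactly once, after all tuning decisions are frozen, so repeated experimentation can't quietly overfit to it. A minimal sketch (the split ratio and data are arbitrary):

```python
import random

def split(data, holdout_frac=0.2, seed=0):
    """Shuffle once, then reserve a hold-out set that is evaluated
    only once, after all tuning decisions are frozen."""
    rng = random.Random(seed)  # fixed seed: the split is reproducible
    data = data[:]             # copy so the caller's list is untouched
    rng.shuffle(data)
    cut = int(len(data) * (1 - holdout_frac))
    return data[:cut], data[cut:]  # (dev set, final hold-out)

dev, holdout = split(list(range(100)))
print(len(dev), len(holdout))  # 80 20
```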

Because they...

You do it again. Yeah, yeah, yeah.

So that's why, effectively, people are probably going to throw up their hands and say, hey, at least we know it works; we don't know that it could run faster. There was sort of an unfortunate acceptance, up until what you're bringing to the market, that this was just the cost of doing business in ML. Right? And that doesn't need to be the case anymore, does it?

Exactly. It doesn't need to be the case. And with these tools, it doesn't need to be the case for as many users as we can possibly reach. That's why we strive to be really easy to use and to raise the level of abstraction much higher. So instead of having to pair a super talented software engineer with a data scientist to go and do these things, the data scientists themselves can just go and use a tool that subsumes the need to work closely with an engineering team to deploy it.

Right. So, yeah.

Yeah. Well, this is the thing: we can now actually get positive business and societal outcomes instead of just technological outcomes.

One of my favorite things: I remember Peter Thiel saying we were trying to get Star Trek, but all we got was the Star Trek computer.

We didn't get the tricorder. We didn't get the transporter. We didn't get the other things. All we got was the computer. And in fact, that's a dangerous place to rely on. We need to do things with these things. And this is why we are now at the point where we can really do amazing things.

Absolutely. And especially if you are a scientist. Right.

So I'm actually curious, Luis: what is a data scientist? Because I've started to get different pictures of what that person is today. If I'm an organization looking to hire a data scientist, what does that profile look like?

I’m curious in your experiences, given that you’re obviously very close to the field.

Yeah, that's a great question, and there are just so many possibilities here. I'd say it really depends on what kind of problem you're trying to solve, because data scientists tend to specialize in different kinds of data and different kinds of models. I would say you should look at what kind of data you have and what problem you're trying to solve, and go after a data scientist with some domain experience there.

You don't want a data scientist with zero domain experience, because if you have some domain experience, you tend to get a lot better, more predictive models and a lot better analysis out of the data that you have. What I think you should actually focus on is people who understand the problem domain and who understand the core tools in machine learning, data analytics, and statistics, to go and work with your data. Now, to come full circle...

Now, what I think is harder is finding a data scientist who can do that and who can also do all of the complicated, ugly software tricks you have to do to actually get the model, or the results, to be usable as an end product. It's almost impossible to find somebody like that. This is why, early on in the life of the company, we did some interviews to see what it was that we would be going after.

The number one pain point we heard from folks who were running these things was: we have great data scientists, and we've been doing better because the tools for data science are getting better. But now we have to go pair them with very rare software engineering skills, and that's what breaks the whole magic chain, because now you have the data, and the data scientists just don't have the rest of the resources to go and make their output useful.

That's where we started. Let's zero in on automating the process between what comes out of the hands of data scientists and what should be the deployable module: take that gap and cover it with very sophisticated automation that itself uses machine learning. That's really what the Optimizer does. Right?

So, first of all, my favorite name on Earth for a platform: the Optimizer.

Sounds cool. I'm glad you like it. We love it. Yeah, every time I say it, it makes me smile. I've been saying it for over a year now and I still love it. Thank you for that, Eric. So I hope I answered your question. How to hire a data scientist: I'm glad the tools are getting better, but it's just so dependent on what kind of problem you want to solve.

Yeah, it's really about people who understand the problem domain.

So it'll be interesting to see, because I think what we face right now as a society, as businesses, and as governments is the sense that you've got to wait for the next...

We have to wait for the next batch of students to come up through the education system with access to the tools, so you have an eight-to-ten-year cycle before people are actually able to do this. And in that amount of time, things have so fundamentally changed. Now we don't have to wait for that. We can train people in place. We can up-level people where they're at, through software, through technology, through capabilities.

It's an interesting point. I'm not sure if that's where you were going, and this is a complete tangent, but I think it's fascinating to think about the role of AI and machine learning in educating humans. There are ways of using ML to generate problem sets for kids to learn from, ways of evaluating the kids, and ways of using that to actually train engineers. Right?

So the potential for this stuff is just wondrous.

You know, obviously, and I've talked with a few folks about this, there are some challenges around the ethics and the biases.

And definitely, I think it's super important, extremely important, and tough.

Let me lean on your academia side here, because that's probably the area where this gets dealt with, or questioned, the most. Is it through academia that we study what the potential issues are? In business, it's more like: how do you broadly get this out into the world?

But we are finding, through thinking groups, through think tanks, through universities and academia, that we are now at the study phase, and will continue to be, for a long time, of:

How do we make sure that we are using these tools and this data as best as possible? It's a real conundrum, because if the data is a representation of society, how much do we steer it in order to get what we hope to get out of it? Versus: if a machine learning model gives you an output, there's a reason it came up with that output. We may not trust it, or understand it, or maybe not like it, but it's more about looking at how it got there than standing at the output phase and trying to steer it toward a belief or an opinion.

Yeah, well, this is a great question, super deep. And again, it could be the topic of a long conversation on its own.

But I'm happy to offer some thoughts here, because I do have colleagues and friends who think about this for a good chunk of their waking hours. First of all, absolutely, we have to be mindful of biases in machine learning, especially because machine learning is dependent on training data. We need to make sure that the data is representative of a broad set of users, and that it's actually equitable across all of the stakeholders in how the model is deployed. And there are aspects of the model architecture and training that should be developed with this in the loop.

And I think that comes fundamentally from having a diverse team. If you have a diverse engineering team, or a diverse team of data scientists, actually doing this, they will naturally point out deficiencies in the training data and in the architecture of the models. So that's the people aspect here: if we talk about machines doing more and more things, you have people designing those machines and those engines, and those people themselves need to be diverse.
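
One concrete, if simplified, way a team can act on this is to report a model's accuracy per subgroup rather than only in aggregate; a gap between groups flags exactly the kind of training-data deficiency being described. The records below are fabricated for illustration:

```python
# Sketch: per-group accuracy check to surface bias that an
# aggregate metric would hide. Records are fabricated.

from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group, predicted_label, actual_label)."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, pred, actual in records:
        total[group] += 1
        correct[group] += int(pred == actual)
    return {g: correct[g] / total[g] for g in total}

records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),  # group A: 4/4
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 0, 0),  # group B: 2/4
]
acc = per_group_accuracy(records)
print(acc)  # {'A': 1.0, 'B': 0.5}
gap = max(acc.values()) - min(acc.values())
print(gap)  # 0.5 -> large gap: inspect the training data for group B
```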

This is why I'm a firm believer in extremely diverse teams. I've done that in the academic teams that I've built, and I pay a lot of attention to that at OctoML as well. That's one thing.

And then the second thing is just education. Right? We have to keep bringing up these aspects of bias and make sure it works for all the stakeholders, and not just in machine learning, by the way, but in any engineering discipline. A friend of mine once gave a talk, and I won't put his name here, about bias in machine learning, and he started with a great example. You might have heard this story before, about one of the very first photographic films at Kodak. Essentially we're talking about a chemical engineering problem: the way they designed the photosensitive material.

They realized that the way they were judging whether it was good or not was by checking it against a set of people with one specific skin color. That meant that if you actually used it with other skin colors, it just would not work at all; it would not look right. And that was the case. So they were biased. It's a great example that bias in how we evaluate whether something is ready for all the stakeholders applies not just to machine learning...

...but to any engineering discipline. This case, I thought, was a really great one because it talked about something on the order of a century old. The tone of the film was not good for all skin colors, and he showed that as one historical aspect of it. And that's true of how you design anything; you know how this affects building architecture, for example.

A lot of things that humans use should get this thinking, not just machine learning. It's just that machine learning gets that extra attention, because its applications are changing our lives super fast today, and also because it's so sensitive to data, and iteration is so fast, which can lead to a lot of misfortunes and, let's say, missed opportunities to make it better early on.

There's so much positive, but unfortunately, what will happen is that the one negative story becomes the focus, quite often, like with anything. It was interesting: I was at an event a couple of years ago. It almost feels like it's been that long since we've been at in-person events.

And it was a Canadian insurance company that had created their own call center with AI and machine learning, all the stuff. They basically fed it every single customer service call they had ever taken and trained on that. And then there was finally the moment where they set the AI to answer the next call. It took the call and dealt with the person. Obviously they're listening and monitoring, like, let's see how it behaves. And it gets all the way to the end and solves the person's problem in a perfect human-sounding voice.

And at the closing of the call, the machine says: is there anything else that I can help you with today?

And they stopped and looked at each other, like...

That's never been in a training manual. There's nothing that tells it to do that. But through all of the different calls, it ascertained that this was the best way. And then what was even funnier was the response. The person says: no, thank you, but I just want to thank you, especially because it's so nice to talk to a human for a change.

I love that. Yeah, that's it. But this is the thing: there's going to be a beautiful, augmented world where we can leverage machine learning and these capabilities, like natural language processing and all these different things.

There are companies that are using it to detect emotional changes in people's voices, using it, in effect, to detect changes in the behavior of people who are at risk of suicide. There are so many incredibly positive things.

And this is why, like I said, we have a friend in common, Amber Roland, who helps you with your PR and is just a fantastic human.

And she's done a ton of stuff, and introduced me to great people over time. Every time I talk to her, it's like: oh yeah, here's the human side. And she introduces me to people who are doing big things. When she said, I want you to talk to these folks, I raced to reply and say, I'm so glad that you did.

Yeah.

Now, of course, like you mentioned, it's tough. This is the tough part.

It's hard to have hero customer stories, because with a lot of the customers you have, obviously, there's going to be sensitivity, and you're early in the birth of the company.

But what is maybe another quick example of a real human outcome that you've been able to see come to life?

Well, yeah, great. We have several of them. Let me just pick apart what kind of customers we work with today. We have two categories of customers. One is machine learning end users: companies that have products that use machine learning, both on the edge and in the cloud, without getting into specifics.

Think of it as enabling much more natural user interfaces. I'd say this has a human outcome, because if you enable a new way of using voice and faces in very cheap, low-end devices, you can bring them into more user scenarios, and therefore both add convenience for people who are able and add accessibility for people who are potentially disabled.

So I'd say that is a really nice outcome of just enabling more intelligence at the edge, and it's something we have enabled customers to do. So one category is machine learning end users, and the other is hardware vendors that did not have a solid software stack to make their hardware useful for machine learning; we enable them to do so. But I'd say that in general, the impact of what we do on human life is, one, enabling applications that weren't possible before at the edge, and also enabling these large-scale compute problems, say in life sciences, that would not be accessible without the level of optimization we provide.

So we're really proud of what we do, and in terms of the impact on human life, we're enabling applications and things that wouldn't have been possible before.

Well, the thing that I try to remind people, too, is that when we look at phases of adoption in real life, at the hype lifecycle of so many things: we've talked about edge computing for a long time, and people still sort of struggle with what it means. But in effect, the phone you hold in your hand, while it is a computer stronger than the one that sent the first humans to the moon, is, in effect, an edge device.

Edge devices aren't just Raspberry Pis glued to the side of a cell phone tower. They're going to be computing that's distributed, with different physical capabilities, different memory, different storage, different network, different CPU. And this is where the ability to use decentralization matters. The exponential effect, again, is that rather than collecting the data and streaming it back to central storage and processing, because the amount of bandwidth to essentially stream it all back...

...is untenable. Right. And this is why being able to do processing and machine learning at the edge is an amazing leap in what we need to do.
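
A quick back-of-the-envelope shows why streaming everything to a central service breaks down: even a modest fleet of cameras saturates an uplink, while edge inference that ships only events or metadata shrinks the load by orders of magnitude. Every number here (bitrates, fleet size) is made up for illustration:

```python
# Back-of-the-envelope: upstream bandwidth needed to centralize video
# from a fleet of edge cameras. All numbers are illustrative.

def fleet_uplink_gbps(num_cameras: int, mbps_per_camera: float) -> float:
    """Aggregate uplink requirement in Gbps."""
    return num_cameras * mbps_per_camera / 1000

raw = fleet_uplink_gbps(10_000, 4.0)    # raw video: 10k cameras @ 4 Mbps
meta = fleet_uplink_gbps(10_000, 0.01)  # edge inference: ~10 kbps of events
print(f"raw video: {raw:.1f} Gbps, metadata only: {meta:.1f} Gbps")
```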

And this is what hammers home the value of what you're doing, because there is no way that the model you run centrally is going to run the same way at the edge; the hardware is different.

Everything is. Yeah. And I love it; you said it exactly right. I'll just add one more, potentially overly dramatic, point here: the speed of light is limited. Light is fast, but you cannot make it faster. The speed of light is a limitation in wireless, in any communication.

So that means some things fundamentally have to be done at a very short physical distance to actually enable low latency, without having to rely on long-range infrastructure and all of the hops it has to jump through. Being able to compute at the edge has this fundamental enabling property, backed by hard laws of physics: you must run locally if you want a continuous, low-latency application. Right.
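
The physics argument is easy to make concrete: even ignoring routing, queuing, and processing, a signal cannot complete the round trip to a distant data center faster than light in fiber allows (roughly 2e8 m/s, an assumed figure for the medium):

```python
# Lower bound on round-trip time imposed by the speed of light in
# optical fiber (~2/3 of c in vacuum), ignoring all other delays.

SPEED_IN_FIBER_M_PER_S = 2.0e8  # approximate signal speed in glass fiber

def min_rtt_ms(distance_km: float) -> float:
    """Physical lower bound on round-trip time, in milliseconds."""
    round_trip_m = 2 * distance_km * 1000
    return round_trip_m / SPEED_IN_FIBER_M_PER_S * 1000

for km in (1, 100, 4000):  # nearby edge node vs. cross-country data center
    print(f"{km:>5} km -> at least {min_rtt_ms(km):.2f} ms")
```

At 4,000 km the floor is already 40 ms per round trip before any real-world overhead, which is why latency-sensitive, continuous applications have to run close to the user.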

So, yeah. And it also just enables low power, right? Yeah.

This is the reason why people hate Bitcoin, and not just because most of the people who got in early got rich: it's because of the physical impact it has, the compute requirement.

And so there's always this comparison of, oh, for every bitcoin you mine, you could basically power a city for a year, or whatever it's going to be.

And that's sort of a mythical, historical thing. But beyond Bitcoin, when we look at using blockchain, using machine learning, all of these things, and being able to do them on lower-power, diverse hardware platforms... yeah.

This is the Gutenberg revolution of machine learning.

Wow. Thank you. All right. That was beautiful. I'll take it. Agreed. Yeah.

And also to free people from having to even think about how they can deploy models, because of course, even as you develop it, how do you know how it's going to be used? I mean, just think about mobile phones: there are literally 200 different Android phones. How are you going to tune for every single one of them? And that's just one very small example. The moment a model can run on a phone, and on a camera, and on a smart device, and on a smartwatch, and all of these things, just not having to worry about where it runs could enable a whole wave of innovation.

Right.

So you must be excited, to be able to be both in academia, watching this world evolve, and now to very literally create the future through what you're enabling at OctoML. How good did it feel when you began this journey?

It’s got to be challenging.

And I say this knowing, obviously, there's no easy path to entrepreneurship.

Yeah, well, thank you for that question, because I often pause to reflect on how lucky I feel to have the team that we have. And I think one of the reasons we have such a fantastic team is our connection to academia, plus the fact that we are a company that has a bottom line to it: we have investors, we have customers, we have employees. And luckily we are in a very good position. That means we're not a research group.

Right. But we are really pushing the state of the art, because we are a deep technology company. We are enabled by the products that we build, but also by the fact that we have people who think on the frontiers of what's possible with machine learning, like using machine learning to make machine learning better. And the connection to academia, I think, is really important, extremely synergistic, and I would say essential to us, because we are connected to the latest and greatest in machine learning models, and the latest and greatest understanding of where even the hardware industry is going and what's possible there, but also as a source of talent.

Right. So our company has incredible talent. We have more than a dozen PhDs on a team of forty. Not that it's just about that, and everyone is great, but it shows the level we're operating at in terms of pushing the state of the art. We have a lot of people who operate like software engineers, making a product, but they all have a research mentality and a research background, and they always think about: how can I do something better than it was done before?

Because that's how a lot of folks who have done research think. Right.

So that's very fortunate. Yeah. Yeah.

It's always a tough metric, but I believe everyone should be proud to say, you know, we have a number of PhDs. At my own company we have the same thing; we talk about it sometimes.

And it feels odd sometimes to say it, depending on the context. But the truth is, as you just said, these are a group of people who chose to go above and beyond in order to advance something that had been done before and could be done better. And when you bring in a specialty like machine learning, of all the technologies and things we're doing in the world right now, it needs those kinds of thinkers, people who are willing to do that, as a group, as a collective. And it's also important that you don't have just one PhD, because then you have multiple...

...thinkers like that, people who've lived that life. They have the ability to use critical thinking as a group, to aim for the best outcome: not the right answer, the best outcome. And yet as humans, especially as entrepreneurs, we often get stuck with:

I've got the right answer and I've just got to teach the world, versus: let's, as a group, work with our customers, the community, the world, and academia, and come up with the best outcome, because it will be surpassed in the future.

Absolutely. Yeah, I love that comment. And one thing I wanted to add there is that the path to impact, the time to impact, of machine learning progress is extremely short in the grand scheme of things. You're talking about something that was in the academic world, that people wrote papers about in January of a year, and by the end of that same year it could be in production, being used by people. That's just unheard of among scientific disciplines: writing academic papers about something and having it impact people's lives in new products within months.

We're not talking about years or decades, which is the typical thing in a lot of disciplines. Think about advances in life sciences: the time until something has an impact on diagnostics is just long. Same thing in physics and chemistry. So imagine a paper in January for something that's in production in March. Having this tight loop between the researchers and getting to see the impact is really important.

And I think it's a beautiful opportunity. I love that people are crossing over, because the dangerous thing is that if it only lives in academia, it never makes it out. If the same people who build the concepts and take them to the next level don't get a chance to actually be part of implementing them, how do we learn, other than waiting for the next academic to come along and evaluate and analyze?

And like you said, in the past it would be a decade before you would necessarily see the results. Now, in academia, you can literally work toward a goal, achieve your plan, evaluate, take the hypothesis, and actually enact that hypothesis.

And as a commercial business, I think this is really, really cool.

Yeah, thank you. I completely agree. I couldn’t agree more.

So, you know, before we close up, Luis, I'd love to hear your thoughts. Eighteen-year-old Luis Ceze decided he was going to school. Number one:

Did you imagine you were going to go to school as long as you did? When did you build your plan, and when did today become part of that plan?

Well, you're giving me goosebumps here. So, just a quick personal story: I grew up in Brazil. I went to engineering school in Brazil; when I was 18, I was an electrical engineering student at the University of São Paulo. At that time, I definitely really liked research, and I was involved in some research, but honestly I never thought I would become a professor. And even though I had thought about starting companies at that time, I ended up not doing it, because I got into the academic world and research, and I left Brazil to go to IBM Research to work on a machine aimed at life sciences.

And after that I went on, very much taking the next opportunity, and the next. So where did the plan come together? I don't think there was ever a point where the whole plan came together; I just followed the flow.

But I always had the North Star that what gets me up in the morning is intellectual excitement and working with people that I can learn from and admire. And, you know, academia is great for that. And OctoML has been great for that too, because it's been a dream to have the kind of team we've been able to build here.

So I hope that we find more Luises in this world. That's too kind, thank you. Well, thank you for the conversation. It's been a lot of fun, and I hope to chat with you again.

So, yeah, absolutely. I'll be excited to watch the growth of the team, the organization, and your customer base, and to hear some of the stories. We'll get caught up again in the future.

Obviously, all the links are down in the show notes for folks who want to find you. They can go to octoml.ai; we'll have that there. But if they want to reach out to you directly, Luis, what's the best way to do so?

Yes, you can just go ahead and write to luis@octoml.ai. That's l-u-i-s at octoml dot ai. Trust me, I'll come back to you. Looking forward to hearing from your audience.

I also want to congratulate you and thank you for being an amazing intellectual who doesn’t use their university address when they run a company.

I know there's a beautiful pride in the stanford.edu or University of Washington address, but it's always amazed me to see someone who's been CEO of a company for three years still using their university email as their contact.

And like, you should be proud of everything, but the octoml.ai email is the thing to be proud of now. Everything you've built there, that's what to be proud of. But thank you.

Yes, I'm very, very proud of our octoml.ai for sure. Yeah. This email address will be the one from now on, and it will be there for a long time. It will be valid for a very, very long time. I'm very proud of it.

So judging by you and your team, I very firmly believe it will be. Thank you very much for the time today, Luis.

Thank you. Thank you again, Eric. Wow, that was a lot of fun.