Sponsored by our friends at Veeam Software! Make sure to click here and get the latest and greatest data protection platform for everything from containers to your cloud!
Sponsored by the Shift Group - Shift Group is turning athletes into sales professionals. Is your company looking to hire driven, competitive former athletes? Shift Group not only offers a large pool of diverse sales candidates from entry level to leadership – they also help early stage companies develop their hiring strategy and interview process and build strong sales cultures that attract the best talent.
Sponsored by Diabolical Coffee. Devilishly good coffee and diabolically awesome clothing
Rob Carpenter is the CEO and Founder of Valyant AI, the first Artificially Intelligent “Digital Employee” to work directly alongside employees in customer-facing roles. Valyant’s AI “Holly” works in fast food restaurants to greet customers at the drive-thru post, answer questions and take food orders. The revolutionary nature of this technology is that it pulls AI from being a hidden back-office tool to something that feels like a real staff member, which humanizes a brand’s personality and brings the AI experience front and center to a physical location.
We discuss the power of their technology, the ethics of AI and the effect on jobs, plus how to empower people with technology and in the startup ecosystem. Another great chat that is a must-listen for founders everywhere.
Hey there. Welcome to the DiscoPosse Podcast, and this is one of those fun ones because you actually get to hear the really fun technical snafu that happens right in the middle. But it doesn’t cut into the conversation, which is one you’re going to enjoy from Rob Carpenter. He’s the founder and CEO of Valyant AI, which is something that’s really, really cool because he talks about the idea of AI as a digital employee. This is especially being used in the area of conversational AI in fast food ordering.
So really, really cool. In fact, I bet you’ve already used one and you don’t even know it. And speaking of conversation, you want to have a great conversation? Let’s talk about data protection. I know it seems unexciting some days, but here’s why it matters: you need to make sure that you’ve got Veeam to protect your assets. And that means everything from your on-premises world to your cloud to your digitally native experiences that you’re running in Microsoft Teams, Office 365, and there’s many more neat things that are coming, so hang on tight.
You’ll see lots of good stuff. But let’s save the conversation, because no one wants to have that Monday morning conversation: what happened to the app? It went away this weekend and we can’t get it back. That won’t be a problem if you use Veeam, so go to vee.am/DiscoPosse. They are the leader in data protection and real, true anywhere, always-on availability for your applications. So get it done. Go to vee.am/DiscoPosse. See what it’s all about. Speaking of protection, remember that as you’re moving around and you’re on the road, or even if you’re just trying to protect your identity and protect your data in transit, the best thing you can do is use a VPN.
I know I use one, especially for not just day to day stuff, but being able to make sure I can do testing against my services from different parts of the world to see what the behavior is and what latency is. So whether you’re an application tester or whether you just want to make sure that you keep your identity safe, you can use ExpressVPN. I’m a fan of the team and love the product. So the easy way to do this, go to tryexpressvpn.com/DiscoPosse.
I make it really super easy by just naming it after me, but go check it out. And one of the places you should make sure you do it. Don’t go to coffee shops, get your own coffee, go to diabolicalcoffee.com and while you’re doing that, strap in. This is Rob Carpenter, the founder and CEO of Valyant.AI, and this is an absolute must listen. He’s a fantastic human. We talk about EO, we talk about Valyant, and we talk about a lot of things. Enjoy.
Perfect. My name is Rob Carpenter, the founder and CEO of Valyant AI. And you are listening to the DiscoPosse podcast.
Alright, I feel like I should have, for this one, I should have your platform introduce us, Rob. Because first of all, I’ve listened to a lot of content, so I am excited by what we’re about to discuss. This is something that’s near and dear to a space of study that I’ve been in and looking more around the business side of it and the idea of conversational AI, I’ve been lucky enough to have a lot of great folks on the show who are in the space and it’s just so exciting.
It brings up interesting emotions when we talk about the advantages and what the potential displacements are. So there’s a lot of really good stuff that I’m going to love hearing from you, in your real-world, first-person view of it. So before we get going, Rob, if you want to give yourself an intro for people that are new to you.
Yeah, thank you. I appreciate it. So for anyone new to me, I’m originally from Alaska, so I grew up right on the Bering Sea at the top of the Aleutian chain. Probably one of the more random backgrounds you’ll hear out of somebody.
That’s a first. That’s a first. Definitely a win.
And we literally have, like, grizzly bears roaming around in our backyard, and we could go out and fish from the bank and catch 20, 30, 40 pound king salmon. So it was a very interesting life, but very early on, I really had a big interest in entrepreneurship and starting businesses. I just kind of looked at the people that are living the life that I want to live, other than astronauts, and asked, what do they do? And almost every one of them was an entrepreneur, people who had built and founded companies.
So I read Rich Dad, Poor Dad and started to kind of get an idea of how a different part of society worked that I didn’t fully understand, and ended up getting an undergrad degree in entrepreneurship. Ended up in 2010 out in Denver, Colorado, got an MBA with a specialization in enterprise technology management, founded a mobile application development company, did my first M&A transaction ever. Acquiring a company in India took a year, and we literally ran into problems because we were using the wrong type of ink on our paperwork.
So there was a tremendous opportunity; we grew that company to seven figures in revenue. But like anybody listening to this podcast knows, service based businesses are just really hard. You are constantly out hunting and killing, and you’re only as good as your current project portfolio. And it was exhausting. And so when I ultimately came up with the idea for Valyant AI, I was just really excited to transition into a product based business. And so I’ve been running this company now for five years, making that transition.
Wow. And this is a great place to start, Rob, because by the time you can say what you’re doing, you have to have been doing it for a while when you’re in the product world, especially one that’s in the area of AI, and you’ve chosen your specific, targeted customer niche, which is the right thing to do, because too many people get big eyes at the buffet, as they say. It’s very easy to think of too many use cases. But five years in now, when was kind of the first time you felt like you could really go to the world and say, we’re here?
Like this is something that takes a while to develop to even get to that MVP kind of customer ready environment, right?
I mean, you talk to anybody that’s in the conversational AI space, and there’s a little bit of puffing out your chest for a few minutes. Then there’s a little bit of actual bonding, and within 20 minutes, you’re in a therapy session. It’s amazing how quickly you end up in that space. It’s hard. And I think we’ve been at it, like I said, for five years, we’ve seen a lot of companies come and go. We’ve had our own serious kind of soul searching. Do we need to look at another industry?
And I think conversational AI, and maybe to some degree AI in general, is just so hard because you can do a proof of concept or a really simple demo fairly quickly. I mean, literally, in a weekend, you could put a demo together. But then when you actually try to bring a product to market, it is just crushingly and painfully hard to get to a true, fully functional product, especially for what we’re trying to do. We’re trying to emulate an employee. I mean, it’s hard enough to get Google Home to understand my wife when she asks for a music request.
Let alone something that’s as capable as a human. When did I think we were going to be there? I mean, at any point you ask me, I’m like, we’re three months away. We are so close. Just another three months, and then another three months, and then another three months. And a painful statement that has always stuck out in my mind, it was either the CTO or the CEO of SoundHound who said, it takes three years to realize you’re ten years away.
And so I desperately hope we’re not ten years away now we are in market. We have a product. We are automating orders today, but like anybody in the AI space, we do have human in the loop backup support. And so the question really is, how fast can we reduce the reliance on those humans in the loop and get to a point where it’s just pure AI without any outside support?
This is the real interesting thing. And when we talk about what it is that you’re doing, it’s an experience that will be viscerally understood by people, because they’re going to know what it’s like being on the other side of that little box. So, Rob, if you want, let’s give a bit of a walkthrough of what Valyant is doing and where your first customer use cases are.
Yeah. So when we initially came up with the idea for what became Valyant, kind of early on, we knew we wanted to pick one industry. I mean, it’s good conventional wisdom: pick a beachhead, own it, and then strike out into other industries from a place of strength. And so I sat down, I kind of came up with my own rubric of ten to 15 categories and then identified roughly 20 different industries. We were at that time a solution in search of a problem. So it’s like, where could this technology be applied?
And so we ultimately settled on the restaurant industry. Now there are some cons to the restaurant industry that people are familiar with, in terms of low margins, a lot of price pressure, things like that. With things like point of sale systems, there’s a lot of pressure and commoditization. So there are some challenges to the restaurant industry. But relative to some other big market verticals, take retail, for example, the nice thing about restaurants is you tend to have a more limited domain set, especially as you look at quick serve restaurants or fast food.
You might have 75 to 150 different menu items, a couple of permutations on there, and then maybe a few hundred other key terms: ketchup, fork, napkin, things like that. But it’s a very limited domain set. And although I don’t always agree with everything Kai-Fu Lee says, if you read his book, AI Superpowers, he talks a lot about the importance of kind of a vertical integration approach, at least in these early stages of AI. And I do fully agree with that. And so we decided that restaurants were really where we were going to make our mark.
And so we’ve pretty much been super focused on it for five years. And we’ve turned a lot of companies away in a lot of other verticals. And we’ve just tried to stay hyper, hyper focused on this one key space. And then for us specifically, what we look at, where we see the most demand from the market, is around drive-thru automation. So there was interest prior to COVID, but over the year and a half of the kind of first round of COVID, the drive-thru became one of the most important areas inside of the entire US restaurant industry.
And you’re talking about an $865 billion per year market. A lot of the quick serve restaurants we talked to, they were up 30% year over year. So you look at how painful it’s been for sit down, you know, high-end, fast casual. Those restaurants all suffered under COVID; fast food boomed. I mean, they did huge business, and 90% to 95% of that business was done through the drive-thru. So it was just a serendipitous place for us to be, having three years of kind of wind at our back at the point that all this came about.
And I was on a call this morning with a restaurant operator, and they’re already seeing another big surge in terms of demand for drive-thru as we go into kind of the Delta variant of COVID. So we hyper focus on that one specific use case. We manufacture our own hardware. We stick it inside the restaurant. It hooks into the technology that the employees use for their headsets to talk to the customers in the drive-thru. We currently process everything in the cloud. The goal would be, in a year, to move towards edge computing so we can do everything on site, even when the Internet goes down.
And then we have our own proprietary speech to text engine, NLP engine, and then what I refer to as the natural language generator, or, just kind of more vaguely, the logic engine. It’s kind of the common sense brains of the system. So we’ve developed all those systems in-house to specifically address this one use case.
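For readers who want a mental model of the pipeline Rob just described, here is a minimal, purely illustrative sketch in Python. Every class and function name here is hypothetical and stands in for the proprietary speech-to-text, NLP, and logic engines he mentions; it is not Valyant's actual code or API.

```python
# Minimal sketch of the three-stage pipeline described above: speech-to-text,
# NLP (intent/entity extraction), and a logic/response layer. All names are
# hypothetical stand-ins, not Valyant's real implementation.
from dataclasses import dataclass, field

@dataclass
class OrderState:
    items: list = field(default_factory=list)   # running list of (item, quantity)

def speech_to_text(audio_bytes: bytes) -> str:
    """Stand-in for a proprietary STT engine: audio in, transcript out."""
    return "can i get two sausage burritos"      # hypothetical transcript

def parse_intent(transcript: str) -> dict:
    """Stand-in NLP step: map a transcript to an intent plus entities."""
    return {"intent": "add_item", "item": "sausage burrito", "quantity": 2}

def apply_logic(state: OrderState, intent: dict) -> str:
    """The 'common sense brains': update the order and generate the next prompt."""
    if intent["intent"] == "add_item":
        state.items.append((intent["item"], intent["quantity"]))
        return f"Got it, {intent['quantity']} {intent['item']}s. Anything else?"
    return "Sorry, could you say that again?"

state = OrderState()
reply = apply_logic(state, parse_intent(speech_to_text(b"...")))
print(reply)  # -> "Got it, 2 sausage burritos. Anything else?"
```

The point of the sketch is simply that each stage feeds the next, which is why, as Rob explains later, a weakness anywhere in the chain degrades everything downstream.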
There’s so much good stuff, I could do an hour on each subset of it with you. So first of all, just the fact that we refer to QSR. I love this, quick serve restaurants, because fast food is like a pejorative at this point; you just think of the negative connotation of food. But as an industry, like you said, the addressable market is fantastically huge, especially now that people are moving to this idea. They want to get out of their house, but they don’t want to be sitting in a restaurant in a risk situation.
So it’s kind of a really good mix. But quick serve restaurants, like you said, they’ve got a specific target, and it’s a very repeatable thing. And the first thing that I think of, and I know people are listening and thinking, isn’t this going to get rid of somebody wearing that headset? And that’s why I want you to allay those fears, because from a lot of my own reasoning, I do not believe that. But first hand, you’re in this; that’s got to be, I’ll say, a common if not a top objection when you talk about the value of what you can do with Valyant.
You know, and for whatever it’s worth, interestingly, when we talk to end customers, employees, the brands, some of them bring it up. They’re generally not worried about it. It tends to be all of the media interviews, and it’s about 100% of the time that it comes up. So I’m glad we’re addressing it right out of the gate, because it is a very important topic for us to touch base on. So specifically, what we’re talking about right now is labor repurposing. So the person that’s in that order taker position, and this was something that I learned along the way, in 90% of all QSR restaurants across the country, that order taker is also doing sometimes three or four additional jobs in addition to order taking.
So it’s not a dedicated position. So really, what we’re doing is we are automating a task, and that task may take that order taker 50% of their time, but they still have to process payment. They still need to fill up soft drinks. They still need to clean up after spills. They’re being pulled in multiple different directions simultaneously. We talked to someone at a top seven QSR brand, and their order taker on average is doing five jobs. And so the critical thing for them is, like, we just need to automate this task because that person’s life is really hard.
Turnover is really high, and there are only certain subsets of their employees that they can even put into that position. So it’s a really critical challenge for them to figure out how to backstop all those employees and just make their lives better. That, I think, is a kind of microeconomic view of the situation. If you also step back and look macroeconomically at the service industry, and specifically the restaurant industry, there are 1.4 million unfilled positions in the United States today. So even if we were taking a whole position, which we’re not, it’s just task automation, there still aren’t even the people to do those jobs.
I mean, you go anywhere and you’re going to see help wanted signs on pretty much every single business. Look at the airline industry, especially as our economy started to recover over the summer. It was a nightmare. I mean, look at Spirit Airlines, right? Those guys practically went bankrupt because they had to cancel, like, three weeks’ worth of flights because they just literally didn’t have people to work. Alaska Airlines, they’re near and dear to my heart, they were forcing executives in Seattle to go out and do baggage handling work on the tarmac.
You’re talking VPs of marketing or chief operating officers hauling luggage because the labor shortage was so acute for them. So we’re really helping these restaurants, because they cannot find the labor, and on average within the industry, turnover is 150% to 300% per year. So you have a really hard time finding somebody. When you can find someone, you’re refilling that position one to three times per year. And if they do stick, that person’s being asked to handle five different jobs simultaneously. And that is a perfect application of AI or, more generally, robotics.
When you don’t have enough people to go around, the job is monotonous. It’s dangerous. It’s boring. Automate it. Let humans focus on the things they’re better at than doing something that is just a repetitive task over and over and over again. How’s that?
That’s perfect. Number one, you’ve affirmed my belief that we are not removing roles; we’re, in fact, elevating people into more opportune roles. And I love that, such perfect examples. And thank you for bringing numbers to it as well. We can see the impact there. It’s frightening, right? People think of this idea, like, of course, last night, as we’re recording this, the news hit that we’re creating the Tesla Bots. And so immediately there’s this idea that somehow Elon is looking to get rid of the citizens of Earth and replace them all with robots.
And it’s, like you said, it’s such a media frenzy reaction, just because it’s something to talk about that they know can trigger someone to listen. And I guess when you’re in that business, that’s your business: getting people to listen, getting people to read. And these kinds of tropes are so easy to latch onto. But like you said, when it comes down to it, the people who you’re talking to that are going to use these systems in their own environment that they’re working in, they’re like, thank you, Rob. Bring it on.
Yeah. And I think, too, I mean, we’ve got to get a little more nuanced with things as well, because innovation has always been part of human society. It’s woven into the fabric of the American psyche. What we need to be concerned about, which is why I think this question is important and we should talk about it, is the pace of innovation. If we look and we step back and we say 100 years ago, at the turn of the last century, something like 95% of the entire US labor force was involved in the agrarian industry.
And I don’t know about you, but I really love going into my office and sitting at my quiet desk with a warm cup of coffee or playing Ping pong with my team or grabbing a beer for a happy hour versus being out and working with livestock or out picking vegetables. Not that there’s anything wrong with those types of jobs, and that’s obviously critical to our survival as a species. But if you look at where we are today, it’s something like 1.3 or 1.4% of the entire US Labor Force is still involved in the agrarian industry.
So we have more food than we’ve ever produced in the history of human civilization. And we went from 95% of people involved in that to one and a half percent. That is innovation. Innovation is not bad. That has made a lot of people’s lives a lot better. Where we have to get concerned, and I think this was maybe a bigger fear five years ago, is if the pace of innovation is too quick, because there’s a natural attrition of jobs every year and the creation of new jobs. Like, 20 years ago, who would have thought social media manager would be such a critical position as it is now? So that’s innovation.
If the pace of innovation is too fast, that’s when it creates problems, because then you’re losing too much of the workforce before you can replace it with new jobs. And I think that big fear does come down to some element of conversational AI automating service based work and white collar jobs. And then I think the other big part of it was everything going on with self driving cars, for example, like truck driving. That’s the number one profession in 26 states in the United States. So if all that gets automated and then all customer service work gets automated, that’s a big problem.
But going back to the Tesla Bot and what we’ve seen over the last five years in these kinds of AI updates: with self driving, we’re still not even at Level 4. So things that we thought would be easy, Elon Musk was promising we would have it in 2017, still aren’t even really ready in much of a real way for beta consumption. And so I think that’s maybe alleviated some of those concerns. Are these things coming? Yeah, absolutely. Will there be self driving cars in the decade we thought? It doesn’t look like it.
But by stretching out the timeline for innovation, I’m actually significantly less concerned now, because, yes, jobs will be destroyed, but new jobs are going to be created while we wait for things like the self driving car to hit Level 5 and actually be able to work in a place like Alaska, where there’s snow everywhere and there’s nothing really tangible for the cameras and the lidar to really play off of. So we’ll get there. It’s going to stretch out a lot more than we thought it would five years ago.
And that’s going to give us plenty of time, I think, to replace those jobs with new jobs.
And in a way, you bring up an interesting point, I think. Doesn’t the fact that we talk about the potential innovation become an antibody to the removal of value of the current human counterparts that are doing the stuff? The fact that we have these discussions and we talk about the potential to reach the specific areas that we’re aiming for, that we’re not there yet, gives the industry and humans a chance to kind of go, if this is coming, we better start to innovate processes and companies.
And the way that we work. Like, I’ve never known anybody that automated themselves out of a job. They’ve automated themselves into a better opportunity almost every time. There are, very certainly, some specific roles, like mechanical robotic process automation, where that type of stuff did replace some things. But again, if we looked at the numbers, it’s such a small portion of the global industry, and in the ones where it did, the work was literally killing people.
Right.
This is stuff that shouldn’t have been done by humans. We just had no choice because we didn’t have the machines yet. So that’s an interesting thing.
I think the perfect case study for this is right at 100 years old, and that was Henry Ford and the Model T. He was one of the very first kind of industrialists to bring in this idea of automation and mass manufacturing. And when you have one manufacturing line and you start to automate 20% or 30% of that mass manufacturing line, people get scared. And he had employees, he had family members, he had people from the community that were literally picketing outside of his factories because automation was destroying jobs.
This is 100 years old. And what happened is that by automating things, he was able to bring down the price of the Model T so that more people could afford it. So then what happened? More people bought it. So he opened a second line and a third and a fourth and a fifth and a sixth. And before you know it, you’re employing exponentially more people than you ever employed before. And you’re doing it because you’re becoming more efficient with your use of capital. And that’s exactly what’s going to happen here.
But that doesn’t mean there’s still not concern in the short term, just like there was 100 years ago when people were picketing out in front of his manufacturing facilities.
The other thing as well is that the acceptance of the new innovation becomes a baseline pretty easily; the period leading up to it seems like a forever moment. Like my example, actually, I used this in a presentation recently at work, and I said, like, you know, Elon went to first principles when it came to space travel. Everybody told him it couldn’t be done, that it would be silly to do it, just even in that specific one area. He then said, I’m going to land the rocket, not just send it up.
I’m going to land it on a launch pad. And they said, this is crazy. It can’t be done. And then one step further, he does it repeatedly. And now Jeff Bezos goes to the edge of space and he lands the Blue Origin rocket nose up, and not a single person said anything about it, right?
They were just like, yeah, that can be done now.
Yeah. Like, if it hadn’t landed that way, people would have been like, whatever, dude. They would have been angry at him. And so it allowed us to move the conversation to something new, which was, okay, now that we can do this repeatedly, what can we do with this availability of technology? And there’s an interesting thing as well. People said, well, we’re lining the pockets of Elon, as an example. And look, I don’t want to have a discussion of the wealth of the billionaire or whatever the challenge is there.
The result of the work that they’ve done is the US government saving $150 billion in spending while still sending objects to the ISS now. So it has had a significant benefit; every citizen of the United States has benefited as a result of that. So it’s definitely there.
And this is going to be a whole new world for innovation, right? I don’t really even think it’s a question of if anymore. Within a few years, the SpaceX Falcon Heavy rockets, they’re going to be landing people on the Moon. They’re going to be landing people on Mars. And by doing that, you’re going to need habitation, you’re going to need food, you’re going to need water, you’re going to need rocket propellant, and SpaceX will do some portion of those. And the companies that come behind them will do some portion of those.
But they’re not going to do all of them. They probably won’t do more than a few fractions, single digits, of everything that has to be done. And so it literally opens up entire new worlds from an innovation standpoint, from a work standpoint, from an economic opportunity standpoint. And so, hey, are they automating parts of a rocket manufacturing process that used to be manual? Yeah. Is that reducing a few jobs that used to be there? Yeah, for sure. But they are now producing dozens and eventually hundreds more rockets than could have ever been done before.
And through that process, opening up a whole new world of economic activity? Absolutely. That goes back to that kind of more macroeconomic view that economies are dynamic. We were meant to automate stuff. That’s been part of civilization since we invented the wheel; it allowed us to do things faster and more efficiently, and that will continue to be part of our future.
So looking at, I apologize, my video has suddenly decided to quit. Speaking of the amazing thing of technology, and yet somehow a simple laptop can’t keep up with humans.
I’ve been there. I get it.
What I love about what you and the team are doing, Rob, is, again, very quickly jumping to the human value and impact that you can have with what you can do. So conversational AI has had its really, really interesting adoption in a lot of different areas, and some people didn’t even realize, like, it started mostly in text. But with voice conversational AI, where have you seen the challenges and the real wins in bringing this product to market?
Yeah, I think the core of the challenge, as I’ve kind of learned the space over the last almost half decade now, is sort of the daisy chain effect. Conversational AI has multiple critical path things that all have to happen in a row. And if any one single element in that process has degradation, then everything after it is degraded. So let’s say, just using kind of easy numbers here, you have five critical processes within a conversational AI system. If every one of those systems is just degraded by 5%, take speech to text.
If you have a speech to text engine that is 95% accurate, you’re talking about a world class product at that point, but you still have 5% degradation from 100. If you have four things after that, for a total of five, and each one is 95% accurate, you’re still talking about an end result that’s wrong roughly 25% of the time. So you have to have every single one of these elements operating at 98, 99, 99.5 percent accuracy so that you can achieve something like 90% total success of orders, in our case, over the course of the entire interaction.
And so that’s the extremely hard problem. None of it can be just good or good enough. Literally, every one of your core elements basically has to be world class or close to world class to get to a point where you are automating the vast majority of the orders that flow through the system. So that, I think, in a nutshell, is the hardest part of building a conversational AI platform.
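To make that compounding arithmetic concrete, here is a quick back-of-the-envelope sketch. It is illustrative only; the 95% and 90% figures come from Rob's example above, and everything else is just the math that follows from them.

```python
# The "daisy chain" effect: with five sequential stages each 95% accurate,
# end-to-end success compounds multiplicatively, so roughly a quarter of
# interactions fail somewhere along the chain.
stage_accuracies = [0.95] * 5
end_to_end = 1.0
for acc in stage_accuracies:
    end_to_end *= acc
print(f"end-to-end success: {end_to_end:.1%}")   # ~77.4%
print(f"failure rate: {1 - end_to_end:.1%}")      # ~22.6%, close to the 25% quoted

# To hit ~90% overall success across five stages, each stage needs roughly:
per_stage_needed = 0.90 ** (1 / 5)
print(f"per-stage accuracy needed: {per_stage_needed:.1%}")  # ~97.9%
```

That last number is why Rob says every element has to sit in the 98 to 99.5 percent range rather than merely being "good enough."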
Yeah. And this is the challenge. Like you said, the demos are easy to spin up when it goes well; it’s easy to get to a very simple MVP. But I’ll go back, if anybody’s watched Silicon Valley, there’s sort of a famous scene, and it comes up with this visual: we can take pictures of food, and I can show you what the food is. And he takes a picture of the hotdog, and it says ‘hotdog’, and they’re like, yeah, we did it. And then the next one is ‘not hotdog’.
So if it works, it works well. But then very quickly the edge cases become core use cases, especially in conversation, because it’s such a nuanced thing to deal with.
Yeah, the entire product is edge cases. There really is no happy path in these types of environments. Where we’ve seen the most customer facing conversational AI adoption is when it’s really, like, a limited turn or just one, meaning you ask Alexa a question and it answers and you’re done. And for those guys, they’re effectively kind of world class; they can do one round of context follow up. Our average interaction with the customer has a minimum of ten turns, and we can have some that are 20 or 30, in terms of asking, answering, asking, and carrying on a more true type of conversation of what you would expect from an employee.
And so you have to carry the context through from all of that. You have to carry the nuance through from every one of those. Every single time you request a new response from the customer, you are opening yourself up to an edge case, because they might say something like “nah”. You and I, we understand “nah”, that means no. But let’s say the customer said that kind of quietly, or their car radio is on, or, like we had last week, there was a leaf blower in the background.
And suddenly when speech to text tries to transcribe ‘nah’, it comes back as ‘yeah’. So you have in one moment completely inverted what the customer said, and you might be 15 turns into a conversation and the AI has been 100% accurate. You missed one small word, and now suddenly you may have failed the entire interaction of that conversation and taken the conversation off a cliff, basically. So it’s an entire business of edge cases, and the cliffs surrounding the start and end of the conversation are steep and painful if you don’t get what the customer is saying perfectly.
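One common mitigation for that kind of low-confidence flip, and this is an assumption on my part rather than a description of how Valyant actually handles it, is to fall back to a confirmation question whenever the speech-to-text confidence on a yes/no answer is weak. A rough sketch:

```python
# Hypothetical illustration: guard against a 'nah' -> 'yeah' inversion by asking
# for confirmation when the STT confidence on a yes/no answer is below a threshold.
def interpret_yes_no(transcript: str, stt_confidence: float, threshold: float = 0.85) -> str:
    """Return 'yes', 'no', or 'confirm' when the transcription is too uncertain."""
    normalized = transcript.strip().lower()
    affirmatives = {"yes", "yeah", "yep", "sure"}
    negatives = {"no", "nah", "nope"}
    if stt_confidence < threshold:
        return "confirm"                 # e.g. "Sorry, was that a yes or a no?"
    if normalized in affirmatives:
        return "yes"
    if normalized in negatives:
        return "no"
    return "confirm"

print(interpret_yes_no("yeah", 0.62))    # leaf blower in the background -> ask again
print(interpret_yes_no("nah", 0.97))     # clear audio -> treat as a no
```

The trade-off is that every confirmation question adds another turn, which, given the compounding math above, is exactly what a drive-thru system is trying to avoid.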
You brought up a really great point when we talked about the nuance. Even when we say we all speak English, everybody, I should say, even just the arrogance that I would automatically go to “we all speak English”. The challenge is we’ve got dialects. We’ve got accents, nuances of the human language. Then add to that the fact that you’re ordering things that are called, like, can I get a double Foogly Moogly? This is not even easy stuff to be able to translate, right?
No. And that’s still on the speech to text side. I mean, there’s other things like, can I have the two for four? It’s like, okay, well, what’s the logic that goes into that? Do two chili dogs count for that? Is the two the price or the quantity? Is four the price or the quantity? And so there’s an innumerable number of amalgamations of how these restaurants will package their food and their combos together, and you have to allow the system to intelligently understand the core basis or principles or rules in every one of those situations.
And then in something like, can I have the two for four, basically each of those words in there is super critical. And so if you just miss one word or mistranscribe it, it can wildly change the output of what the customer was actually intending to say to you.
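As a toy illustration of why each of those words matters, here is a hypothetical sketch of resolving "two for four" against a made-up deal list. The menu data and function names are invented purely for illustration; the point is that if the numbers don't line up with a known combo, the only safe move is a clarifying question.

```python
# Toy sketch of the "two for four" ambiguity: the same phrase can only be
# resolved against the restaurant's actual deal list. Menu data is made up.
DEALS = {
    ("2", "4.00"): {"name": "2 chili dogs for $4", "items": [("chili dog", 2)]},
    ("2", "5.00"): {"name": "2 burritos for $5",   "items": [("burrito", 2)]},
}

def resolve_deal(quantity_word: str, price_word: str):
    """Map spoken numbers to a known deal, e.g. 'two' 'four' -> the $4 combo."""
    words_to_digits = {"two": "2", "three": "3", "four": "4", "five": "5", "six": "6"}
    qty = words_to_digits.get(quantity_word, quantity_word)
    price = f"{float(words_to_digits.get(price_word, price_word)):.2f}"
    return DEALS.get((qty, price))   # None means: ask the customer to clarify

print(resolve_deal("two", "four"))   # -> the "2 chili dogs for $4" deal
print(resolve_deal("two", "three"))  # -> None: no such deal on this menu
```

Mistranscribe either number, as in the "two, four, four" example that follows, and the lookup silently lands on the wrong deal or on nothing at all.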
And just even, such a great example: is it two, four, four? Or two for four? Like, there are so many word sets. I even find that when I’ve tried to use speech-to-text with simple dictation, it just creates these giant run-on sentences. And I often thought there’s got to be some way, some shortcut that can be used to say period, comma.
But when you say them, it writes out the word and you can see it. And then what happens is the frustration drives me to feel that the tech is failing, which I know is an unfortunate human reaction, but actually, I just haven’t figured out how to best interact with it.
Right. I will say we are seeing that element getting better. I think this job and building this company would have been so much harder, bordering on impossible, technology aside, a decade ago, purely from a customer psychology standpoint. That was right around the time that we started seeing Siri, Alexa, and Google Home start to enter into the marketplace. Fast forward to today, and there are hundreds of millions of these units sold. And so everybody in one capacity or another has interacted with one of these systems or likely heard somebody else interacting with one of these systems.
And that is helping to start to kind of train customers a little bit more. Like, in normal communication, we’re extremely fast. We tend to be a lot more vague. There tends to be a lot of nuance. There tends to be a lot of emotion and intonation and body language that all feed into our communication with each other. And I think people, as they’ve now gotten more and more used to interacting with these systems, they tend to be a little bit more halting, tend to be a little bit more direct, and ideally, if they can be a little bit louder and a little bit more patient, every one of those things helps the accuracy of the system in terms of understanding customers.
Such a good point. And so this is a funny story based on that. The platform that I’m recording on is called SignalWire. I actually had Sean Heiney, who is their chief product officer, on this show. Sean was great. And I started using the platform. One of the advantages is that it allows you to stream multiple sources of audio simultaneously, actually multiplexing audio.
The advantage to it is, if you have four people on, or if you and I talk over each other, we can talk over each other and it continues, versus, I’ll say, other platforms that have the problem of digital cutoff, where as soon as one person starts to talk, the other cuts out, and then they both start talking again. So this platform gets rid of that. However, when it starts to happen, we naturally account for it; like, the people I talk to will stop talking if they hear me talk at the same time. I’m like, no, no, no. I was just sort of adding color to it.
We can all talk at the same time. It’s actually fine.
We’ve learned to behave within systems that are common now. And like you said, no one really doubts, hey Siri, do this thing, or hey Google, do a thing. We’ve actually kind of normalized it, which is kind of nice.
Yes, I would agree.
Now, on the technology side, and if you don’t mind, I’d love to dive in. You talked about how currently, of course, you’re sending data to the cloud. That’s the easiest way to do this, because the most computing power is there and it creates the most viable centralization. It’s a great platform approach. But you talked about the move eventually to do more stuff at the edge. And that is important because we’re going to see more. You know, first of all, just the risk of power loss and data loss and other things could impact it.
But then you really open the doors to interesting, unique use cases once you can have a real full edge presence.
Yeah, it’s really critical. And we’re finding, at least within our industry, there’s definitely a lot of concern from these restaurants. Some are in major metropolitan areas and have fantastic high speed Internet, and a lot are in really rural areas with really bad Internet connections, even now as we’re almost ready to go into 2022. There are still restaurants in some cases, I know, that are on dial-up, and in those situations, it really precludes you from being able to bring your product to market if you don’t have it capable on the edge.
So where we’re at right now is we’re just starting in the more metropolitan, more well connected areas, but it opens up basically the entire rest of the industry if you can push it to the edge, and you wait until the middle of the night and you push downloads and updates to the system and things like that to keep it current. And it’s a lot more from a kind of device and software management standpoint when you’re so distributed like that on the edge, versus just having one core platform that’s in the cloud, which is significantly easier to interact with and to modify. But at least for us and for our industry and our use case, that edge capability is going to be really critical in the future.
The other thing that’s interesting is, as a founder, knowing that you’ve got to stay focused, how did you maintain that? You talked about, at the start, that you’ve actually had to actively turn away folks that have come to you with, Rob, you’re doing this, what if you just did it here? How do you maintain that real pragmatic approach, especially because not just you, but your entire team has to ultimately stay aligned on that vision of what you need to get done first before you branch out.
Yeah. I mean, I’d be lying if I said it wasn’t hard, and I think this is a problem that every entrepreneur and business owner faces in kind of determining their model, which is: are we going to have one sort of generic system that’s going to work well, or work okay, in a lot of different industries? Or do we just want to have an absolutely best in class product that, for the foreseeable future, is just hyper focused on one space? And I’m not actually an engineer. I definitely come more from a business development and operations type of background, and it’s hard to turn away a $500 billion company that wants to talk to you about voice AI capabilities.
Generally, what I’ve done, which has been helpful for me, is I just throw out high barriers to entry for them, because for these big companies, it takes nothing to waste a startup’s time. This could be interesting. Let’s see if all those guys over there want to go and work on this for free, or nearly free, for six months or a year, and then we’ll see if we want to do anything with it. So it’s been a bit of a self fulfilling prophecy to stay focused, because I have taken those meetings.
I have talked to those companies, but then generally, I just throw out high price points to them. And then in the back of my mind, I’m like, okay, well, if they pay this, then I can go hire one, two, three people who can focus on adapting our platform, because at the end of the day, it’s just software, right? So it can be adapted to any industry. But it takes focused time and energy and concentration. And in pretty much every one of those situations, the companies have come back and said, okay, well, it’s not that big of a priority for now, and it works out in that way.
And it’s a way where we’re not rejecting them or leaving a bad feeling with them. We just kind of lay out the case, the background, the reasoning that goes into it, and then throw a big figure in front of them and say, hey, if you pay this, we’ll do it. And I think especially right now within the conversational AI space, there are so many people working on it. There’s so much going on. I think there’s a lot of excitement, there’s a lot of real technology, there’s a lot of hype, there’s a lot of smoke and mirrors.
And so it’s very choppy waters for companies to figure out how they want to navigate this process. And so by throwing that barrier up, it’s pretty much kind of kept everybody out and allowed us to just stay on our sort of happy path from a go to market strategy. That’s just how it made sense for me.
Yeah, it’s great. And when you talk about that, there are a lot of folks that are talking about the space, and they have technologies, versus, like yourself, where you’ve literally chosen, you’ve laser focused on a use case, you’re delivering it, you’re growing with lighthouse customers. You’re following that really, really strong method of don’t do B until you’ve succeeded at A, versus people that are talking about A, B and C, and then maybe dabbling in D. But they can create a lot of noise for you.
I don’t want to call it competition, but how do you do noise reduction against that stuff? Because eventually your customers will be like, hey, Rob, some other people are approaching us. Because, of course, you go to Google and you look up Valyant, and the first thing that comes up is not Valyant, because somebody’s buying ad space above you, which is the first sign you’re doing well, when people are buying ad space above you. So congratulations on that.
Yeah. I’ll tell you what, ironically, we’re in a situation right now where customers are not a problem for us. So it’s nice; we just really don’t have to focus much energy there. Basically everybody in the market wants this technology. And so I think we’ve done a nice job of sort of positioning ourselves out there. And so as I look at the top ten biggest brands in the entire United States, we’re talking to or working with half of them. And so these large organizations are finding their way to us.
And that’s been really helpful, too, because then I’m not trying to work my way up through cold calls or introductions or other marketing efforts and having to kind of work my way up the chain to somebody important that can actually make the decision, sign off on projects, and decide to move forward. So I think that portion of it has been extremely healthy for us, but I might need to go look and see who’s bidding against us and put some energy into it.
The other thing, just on the technology side, is it’s very easy to look at the wonder of what’s possible. And as you go and you take on things like adding new features or adding new customers, you’ll see the expansion into potential, like taking on this idea of moving more tech to the edge. It’s a real undertaking where you have to invest in it. So when you’re making decisions like that as a founder, what’s your thought process around where you have to be 100% revenue generating versus how much can I put into the longer term growth and viability?
Yeah, I think, and I’m assuming here a little bit, but I don’t think there’s too many of us that are in this hardcore AI space that are really trying to bust new pathways into markets that have never existed that are hyper profitable because it’s just huge amounts of work and huge amounts of investment into the technology. And you have some level of just sort of carrying costs for every single customer. And so the more you can improve the platform, the more you can bring down those costs and improve your unit economics.
And so something like edge, your hardware, those are decisions, but I think the bigger decisions are: for how long should I keep trying to drive towards perfection versus focusing more on just trying to be profitable on a per unit basis? And at least from my perspective, I really view conversational AI as a true kind of customer service automation capability across dozens or hundreds of markets, as a blue sky opportunity. So I would rather keep investing like crazy to get the product as capable as possible and then be able to push into as many additional spaces, once we can transition out from a source of strength, versus just trying to dig in on the unit economics, staying smaller, and trying to make each one of those locations just a little bit more profitable.
So I think it’s a land grab right now. A lot of different companies are grabbing space in a lot of different industries. We have three to four, I think, very real competitors that have good technology in our space that we’re actively competing with to try to grab land in this space. And I think we will continue to see this, at minimum, for another five to ten years. And then I would expect conversational AI to start going through a similar type of market consolidation that you’ve seen in a lot of other industries prior to this.
Yeah. And the interesting thing, of course, is that folks like you and I are a bit more aggressively focused on the competitor space. And in the end, there’s such a huge consumer environment for this stuff. There really is. If you spend so much time focusing on the competitors, you get lost chasing them instead of chasing your business. And so we always have to be mindful. But of course, the inner nerd in me is always like, you know, where are we technologically aligned with somebody? And make sure I can always think about differentiation without being stuck on, like, they changed their messaging again.
You can’t be attached to folks that are in a parallel space too much.
Yeah, I would agree. And I still think there are some challenges and some education for the market as well. We recently ran into a situation where a company in our space was telling potential customers, like, hey, we’re 90% plus accurate, and they’re just kind of leaving out the “but we have some people in the background that are fixing things on the fly to help us get to that number.” And so the customer wasn’t quite sophisticated enough to ask, and the other company didn’t bring it up. And so there is still an element, I think, of kind of smoke and mirrors out there.
This is a very unconsolidated, unstabilized market. It’s a bit of the wild wild west. There are no norms, there are no level systems to compare against. There are no independent third parties to verify capabilities and stuff like that. And so we see companies throwing out pretty stretched metrics relative to what we see, both in terms of what state of the art technology is and, when we test, what their systems are actually capable of. And so that’s been kind of an interesting process of bringing this product to market and navigating against sales and marketing that maybe sometimes sits somewhere between kind of disingenuous and just sort of withholding information because the customer didn’t know what to ask.
Yeah, that’s a tough one. Like you said, especially when it’s a new technology and new space. No one knows that there’s a Mechanical Turk hiding behind the scenes and all that stuff.
Yeah. Google spends billions of dollars developing their Google Home system, and I heard a number at one point that said they still have up to 30% of interactions being reviewed by a human. So it is the very dirty secret of the industry, which everybody that’s in it understands crystal clear. And those who don’t understand it, who are trying to figure it out and trying to find a way to take advantage of this technology, they often find it murky, and maybe feel like they were a little misled.
And so I think there needs to be a lot more transparency on our part, as a technology group, as we bring these technologies to market, to be real clear about where things work and where things don’t work.
I don’t want to put a limit on the use cases that you’ve got, but I’ll say it’s more focused, in that you’re less likely to bump into the need to do real deep sentiment analysis. There are obviously points where that would come in, I would imagine.
Yeah. For certain.
When someone starts yelling into the speaker like Samuel L. Jackson, it’s probably time to make sure that somebody taps the headset and gets to listen to it. Versus some of, like, the call center AIs, they’re much more, I feel I’m about to say it, they’re much more challenging to implement, because they’re specifically going after doing stuff like continuous sentiment analysis to gauge the health of the call, because they have a different, long form conversation to attack.
So I don’t mean to say it’s harder. It’s a different challenge that they’re solving. For yours, where do you see the variability in what you can start to do with some of the deep capabilities in NLP and actual analysis?
Yeah. I mean, again, going back to it, we are taking in live conversations now. The vast majority of the conversations we are taking are being handled entirely by the AI, and it took us a long time to get there, but that is a very real product with very real capability. I do believe what we’re doing is exponentially harder than something like sentiment analysis. That is extremely valuable, to those companies’ credit. They’re probably making a lot more money than we are as we’re trying to grind out this hard space. But think about it with that sentiment analysis example.
If it doesn’t work correctly in one in ten cases, does anybody know? Does the end customer know? Do they care? Does the call center rep on the phone know? Do they really care? Maybe if the sentiment analysis picks up that the call is going really badly, it pulls in a manager, or they just use it to monitor after the fact, but it doesn’t stop the core capability from happening. The customer and the call center rep still did their call. Could it have been better?
Probably. They still did their call. With what we’re doing and with what other companies in our space are doing, if we miss something, the whole call goes off the rails, or theoretically can go off the rails if it’s not recoverable, and it’s front and center with the customer. So it would be more accurate to say that the call center person is actually an AI trying to carry on a conversation with the customer. That’s much harder than just passively monitoring stuff and tagging it for data or analysis or flagging it to pull somebody in, because that doesn’t fundamentally break the core product.
If it doesn’t work, if we go off of one of our edge cases, it fundamentally breaks the product.
Yeah, that’s the interesting thing. And anybody who’s gone through this, just think of the last interaction you had with somebody through an order process at a quick serve restaurant. Odds are we as humans made a mistake doing the order, or that’s exactly why they do the readback. And I love it. It’s like, do you want to? Actually, no. Let me go with number two instead of number one. And then it’s like, okay, we’ll do that. Is there anything else we can help you with?
Okay. What I’ve got for you now is X. And, like, that rapid validation, and the fact that there’s so much that can go wrong in the seconds leading up to that. They’ll be like, actually I want number two, no, not number two, number three. I mean, yeah, number three. Just writing those words down, yeah, big deal, you transcribe it, that’s basically a glorified transcript. But actually taking that and turning it into an order.
And responding intelligently in that situation. And maybe you could parse all of that and you got what you needed. But maybe you parsed all of that and the customer was still ambiguous. We had a situation when we were working with a restaurant chain here in Denver called Good Times, where we were automating breakfast orders. And so we had one customer, I remember, who came up and was like, hey, could I have six sausage burritos? No, no, wait.
Actually, I want three bacon burritos and then sausage burritos. And so it’s like, do you want nine burritos? Do you want six burritos? There’s a lot of ambiguity in there. And so then the system also has to have context. And so that’s an area where we see companies spending billions of dollars, and they’re just scratching the surface of context. Yet for any company that’s trying to do customer service automation, where they’re directly talking to a customer, you have to be able to manage a tremendous amount of ambiguity and related context, and then try to respond, as we talked about early on with the daisy chain issue, perfectly every time.
And you might have a minimum of ten turns back and forth, and all you need is just one of those to go wrong, and then the entire thing could be a failure. And so it’s a very painful and exacting process to get to a point where you have a product that is widely adoptable and scalable within the industry.
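To make the burrito example concrete, here is a deliberately simplified, hypothetical sketch of the kind of in-conversation revision the system has to handle. None of this is Valyant's actual logic; the point is that the "no, wait" correction has more than one valid reading, so a real system would confirm the final order with the customer.

```python
# Rough sketch of handling "six sausage burritos... no, wait, three bacon
# burritos and then sausage burritos". The correction could replace the
# original request or add to it; both readings are plausible.
order = []

def add(item: str, qty: int):
    order.append({"item": item, "qty": qty})

def correct(new_items, replaces_last: bool):
    """Apply a 'no, wait...' correction; whether it replaces the last item is a judgment call."""
    if replaces_last and order:
        order.pop()                      # one reading: drop the six sausage burritos
    for item, qty in new_items:
        add(item, qty)

add("sausage burrito", 6)
# "No, no, wait. Actually, I want three bacon burritos and then sausage burritos."
correct([("bacon burrito", 3), ("sausage burrito", 3)], replaces_last=True)
print(order)  # one possible reading; a real system would read the total back to the customer
```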
It’s an amazing time to be in this world, though, that we can do this, right? Like, to think of the technology that enabled you to do this, and that you and the team have chosen to take it on and you’re succeeding. What a fantastic world, isn’t it?
I love it. I mean, not to be corny, but, I mean, I still get goose bumps when I review sessions, and it’s just perfect all the way through, because I know how hard and painful and grueling that work has been to get to that point. And so when I can sit down and listen to a minute, two minutes, two and a half minute order, and everything flows perfectly throughout the entire order. It’s like, oh, my God, it’s live. It’s real.
It took us a long time. This is a product. It’s such an exciting experience. And truly, I couldn’t be more excited to be in the AI space because this is ultimately going to be the brains of everything, right? And I think, I don’t see it as much as I would like, but there should be a lot more coupling between robotics companies and AI companies. And if we go sort of full circle here, back to the Tesla Bot, there are maybe one or two Nobel Prizes that’ll be won by an engineering team that can actually pull off what Elon Musk talked about yesterday.
But let’s say that they do. It’s still an extremely capable system that is going to be a paperweight unless it has the brain of an AI behind it. It has to be able to carry on conversations with people around it. If it’s about to drop something on somebody, and somebody yells stop at the robot, and they’re in an echo-ey warehouse, it’s got to pick that up perfectly the first time and do exactly what was requested. And customers, as we’ve found, are just so ambiguous; they’re not going to write a script for a robot to go and get their mail or go buy them a gallon of milk.
Musk talked about how the system is going to have to be intelligent enough. Somebody’s going to say, go get me milk, and the robot is going to have to intuitively know what “go get me milk” means, which is like, turn around, walk to the door, open the door, walk to probably a car, get into the car, drive to the grocery store, walk into the grocery store, go get the milk, pay for it, and then repeat all the steps to get back. And that is where AI lives.
And so it’s just such an exciting time. Industry wide, it’s just in its infancy. It’s going to be really fun to watch this technology evolve over the next 10 to 20 years as it just continues to get smarter, more sophisticated, and starts to proliferate into more places that ultimately, I think, will make our lives better, both as consumers and as employers and as coworkers.
And I want to tap into something. As technology, that’s amazing; our place in the world to be able to do this is pretty fantastic. Yeah, I was going to say, what are the risks that we have? But I don’t want to take a dark turn. I want to tap into something else that I saw in your bio. You’re a member of Entrepreneurs Organization, so EO. It has come up a lot. When you do a couple of hundred of these interviews, you eventually bump into these common things. And EO comes up a lot.
I'd love to hear, Rob: how did you discover this? And what's been the value that you've found from being a part of that organization?
Yeah. So for anybody listening who doesn't know, EO stands for Entrepreneurs' Organization. It's basically an international networking organization where entrepreneurs come together. Here in Colorado, we've got an extremely healthy chapter; I think we're 160, maybe going on 200 people in our organization. And every single month they're putting on different events. So a couple of days ago, a guy that owns a brewery here in Denver gave anybody who wanted one a tour of his brewery, gave everybody free beer, and talked about the business and the economics of it, things like that.
There was a lady that owned a bunch of restaurants. She gave people tours of her restaurants and explained how they work. She had a very cool kind of collective thing going on where they renovated an old warehouse and had, like, a dozen different restaurants inside, and you can sit at any restaurant and get food from multiple restaurants. She talked about where she saw restaurants going. At one point, I think two years ago, we brought in a guy from the military who was the one that found Saddam Hussein.
And he talked about all the work that he had to do to track down where Saddam Hussein was. So it's just fantastic and intellectually exciting to be around similar people that are trying to grow companies. It's amazing how many times we all run into the same problems. So to be able to chat through those problems and share experiences of how you've overcome those issues, whether it's partners, fundraising, legal, or challenging customers, because ultimately, at the end of the day, it is lonely at the top of an organization.
You don't want to complain to your direct reports and bring them down. You need to bottle some of that stuff up sometimes, and you just try to keep people excited about the mission and the goals and pushing forward. But then you really do need people that you can lean on who have similar experiences, who have been through what you've been through. So the tours, the networking, the speakers, those things are fun. But I think the core of EO is what's referred to as forums.
And so within our bigger chapter of 160 to 200 people, it breaks down, and everybody gets put into a forum of about five to, on the bigger end of the spectrum, sometimes ten people. You get together once a month, and everybody talks about, hey, here's what I've got going on, here's what's working, here's what's not working. You can give each other experience shares. You can lean on each other. And then even within our forum, we'll bring in speakers. It could be speakers to give you education on business or life goals; they could help you with relationships, retirement planning, succession, things like that.
And so it creates this community of people that know what you're going through, that can help you and support you, be it in business or in life. And because it's an international organization, if you travel to pretty much any major city globally, there are chapters of other EO members there, and I'll regularly get emails from, like, an entire forum that's flying out to Colorado, saying, hey, if there's anybody local that wants to meet up, let us know.
And you just get to meet all these cool people. I attended one with a group that came up from Costa Rica and really hit it off with a guy who owned a custom software development company. I had just recently left my custom software development company. We connected on everything, and by the end of the night and a bunch of beers, he gave me free access to use his place in Costa Rica whenever I wanted. Where are you going to get those types of experiences in your day-to-day life when you're just kind of bumping into people?
And so it's obviously something that's near and dear to my heart, as I was able to quickly pontificate on it. But I think for anybody that's running a company, I would just highly encourage you to check it out. It's just nice to be surrounded by, and able to interact with, really cool people.
I think it was the founder of Sheets & Giggles who first mentioned it to me. He's in Colorado, and he was the first one that turned me on to the organization. And then, like I said, probably a half dozen other people have brought it up since. I'm like, I've got to get closer to this. And I've actually looked at the organization. It's good because there's, like, a minimum as far as the range of folks who can get involved; it's very targeted. It's not like a hangers-on Reddit group.
This is people who are active. You have to have a certain amount of active revenue. You're really and truly aligned with a community of people that are doing something. And it's just so refreshing to me to see that, because there's community for technology, there's community for so many things. But for founders, it's a really difficult and lonely spot to be in sometimes, and having that peer group accessible without having to engage advisors matters, because ultimately everyone wants to give you ideas because they want something out of your company.
That's ultimately what a lot of those people are after. I want advice from people that are living the life, not people who just want a taste of mine.
Right. And that is exactly what it is. And I think you also hit on something that was kind of important to me, too: it's not the hangers-on, because I went to two or three of the other big kind of national or global groups like this, and they just tend to be stuffed with consultants and people that kind of want to live in your orbit. Again, going back to my forum, everybody's roughly in a range from a revenue standpoint; there's just one guy that's in the hundreds of millions from a revenue standpoint.
Everybody's got similar-sized organizations in terms of the number of people that they have. And because we're all living it, we can all collaborate. So in my custom software development company, I crashed and burned with my partners and they bailed out of the company. Probably half of the people in my forum, my group of about nine people, have had partnership issues since I've been in the group, and that's a lot of experience that I can share.
The one guy I mentioned that's in the hundreds of millions from a revenue standpoint is able to give a tremendous amount of advice to those of us that aren't at that stage yet, that are still growing and building our companies, because he's done a lot of the things that we're doing. We even have one guy in there that's managing partner of one of the law firms, and he very kindly, you know, will answer questions and give us at least some sort of direction on where to go from a legal standpoint and things like that.
And so it's so helpful. A lot of us will find we'll start forum and just kind of feel heavy, and it's difficult. And by the time I'm done and we all go get dinner together after forum, I just feel light and happy and kind of rejuvenated again. So it's just good for my soul to be around really interesting and exciting people doing cool things. Yeah.
Because like you said, when you go to meetups and just general, event-driven organizations, you tend to get a lot of people who are, like, "entrepreneurs." I'm not saying that one isn't right or one is better or whatever. But you don't want to be in a group where you're surrounded by people who just run a Shopify store. And I know, as a guy who runs a Shopify store, I've got a successful coffee business, but I don't have the same thing to bring to the group versus my experience on the advisory and real operating side.
So, yeah, you can see where the cut line is.
Plus, those meetup groups, they are wonderful, but they tend to be a lot more superficial, might be the best way to put it. You don't get really deep from a connection standpoint. You might share some ideas, hear about some cool companies. People come, people go; there's a lot of transience to it. For our forum, we've got real strict requirements on attendance because we really believe that time together builds bonds and builds connections. In October, my forum and all of our spouses are flying to Napa Valley together. We rented a house together.
We're lining up different wineries that we're going to go to, different restaurants we're going to go to. In two weeks, we're all going to meet up at a lake out here in Colorado, and we're going to bring our families and our kids. And so it's a lot more consistent, I think, and much deeper ties than what you might see in some of those other organizations. Yeah.
And it's finding the group of people who are aligned and at a similar stage; it's tough to find those two things together. You can find a lot of alignment, but then they're so disparate in where they are, company-position-wise. It sounds like such a great organization. I've heard nothing but really respectful words spoken about it and the folks that are part of it. So I do recommend that. I guess in closing: sadly, we lost a couple of minutes in the middle, because, for anybody that's still watching on YouTube, they'll see that I'm on a phone instead of on my regular rig here.
Rob, I'd love to get your advice for folks that are getting started, especially now that COVID and the state of the world mean we're going to be remote longer. It's a great opportunity, I believe. Are there opportunities to be had? For folks that maybe were on the cusp, people that are already remote and thinking, maybe this is my time to start up, my entrepreneur mindset: what advice do you have today? It's August of '21. What can the next three months be for somebody who wants to think big?
Yeah. So if you already have your business idea and you know what you want to do, then just get started. That's the most critical thing. I just finished reading a book called Super Founders, and they talked about what was the number one key to people's success. And the too-long-didn't-read is past success, which sounds cheesy, but it actually makes sense. People that have started companies are then more likely to be successful and more likely to build billion-dollar companies, having done it in the past.
So I think it's just like anything: you need experience and you need time. I think a lot of aspiring entrepreneurs try to make their first company a billion-dollar company. So goal one is just start; goal two might be go easy on yourself. Don't think you have to build the next Uber or the next Microsoft with your first company. Think of it in terms of training for a marathon, and your billion-dollar company is running the marathon, right? You need to do things leading up to that.
The easiest place to start a new business is a service-based company. There are so many opportunities in this country right now, it's astounding. It doesn't have to be super exciting. I mean, it could literally be a landscaping company. It could be a house cleaning company. It could be a painting company. People out there are desperate for services. As a quick example, my wife and I are going to remodel our basement; we're adding a bedroom and a bathroom. From when we initially got it quoted about 18 months ago to now, not only have prices gone up about 220%, we had to bring out, like, 15 contractors just to find one contractor that wanted to take the project on.
And so there are huge opportunities out there for people to just start really good service-based businesses. Not only is there, I think, a lot of opportunity from a work standpoint, but I think a lot of people out there believe it has to be this big, grandiose thing, and it really does not. So start a service-based company, get good at it, deliver great customer service. Build a business, number one; potentially get yourself out of the rat race. You're able to create a job for yourself.
You're able to create income for yourself. Maybe you're able to then have an exit and sell the business, and you use that capital to start your billion-dollar company. Or, more like I did: I got the service-based company to a good place, and then I came up with the idea for my billion-dollar product-based company. I hired somebody to run my service-based company for me, and then I went full time on the product-based company. So you open up a tremendous amount of freedom for yourself.
If you just want to own a business and run a business: just start. Go easy on yourself. Consider service first, and focus on coming up with your billion-dollar idea while you're already working for yourself and making money.
If that doesn't inspire people to just sort of take a breath and think about what the possibilities are, I don't know what does. So, Rob, thank you very much. It's been a real pleasure. Thank you for riding it out with me during my technical troubles here today. If people did want to get connected online or elsewhere, what's the best way they can do so?
Yeah. Feel free to just shoot me an email. It’s rob@valyant.ai or find us online or any of our social media sites.
That's a beauty. Excellent. Rob, thank you very much. Lots of great lessons. I'm bullish on the possibility for Valyant. I like what you're doing. And as they say in this world, you bet on three things, the three Ts: team, TAM, and technology. And the reason it starts with team is because you can tell when somebody has potential in something. You don't even need to know where something is, but you know somebody's got the potential. I would bet on your team.
Rob Telson is VP of Worldwide Sales and Marketing at BrainChip and brings over 20 years of sales expertise in licensing intellectual property and selling EDA technology. Rob has had success developing sales and support organizations at small, midsize, and large companies.
We discuss the incredible capabilities of AI at the edge, the real definition of edge computing, and how that context creates interesting challenges for organizations and people. The conversation also covers a ton of great insights into building effective sales and marketing teams, empowering people, plus where to find the best burgers in the world.
Rob, thank you very much. First of all, I get excited by being able to see a name that shows up, but, as a human, essentially, someone who is very interesting to chat with. And I've been lucky as I've gone over some of your own content; you're a producer of content, you've got a really solid voice and a great way of really leading a discussion, which is cool.
And on top of that, I looked at BrainChip and I was pretty darned excited about the potential of what's there. So before I jump in and start the discussion on the good stuff, I did want to, for folks that are new to you, Rob, have you give a quick introduction, and then we'll talk about yourself. We'll talk about BrainChip, the Akida platform, which is really cool, the technology, and kind of what's being done with it.
And we’ll kind of run from there, if that’s all right.
Yeah. Well, Eric, thank you very much, first of all, for having me on the podcast. And likewise, I've had the opportunity to listen to some of your podcasts, and you've had some pretty talented people on. So I'm actually excited and honored to be here, and I feel like I have some big shoes to fill in the discussion. Right. But just real quickly, for the listeners out there, BrainChip is a semiconductor manufacturer, and we've developed an artificial intelligence processor.
And it's architected to function similar to a brain. It's based on continually learning as we go on the device that we're implemented in, and it processes without any dependency on the cloud, which will become a lot of our discussion today. And last of all, it's architected in a way that it consumes very little power or energy. By doing that, it gives you a lot of flexibility and functionality to do a lot of things in the world of compute that are becoming very common to all of us.
But it's going to evolve over time. Before I talk more about BrainChip and our processor, which is called Akida (for those listening, Akida means "spike," and we'll get to that in more detail), I really want to talk about the problem that we see happening and why Akida is going to play a major role in solving a lot of potential challenges in technology moving forward. So what I want to bring up is three words that in my world mean the same thing, and there are a lot of technologies out there that will give you ifs, ands, and buts about it.
But the reality is we're talking about the same thing. So let's think of IoT-based devices, let's think of edge-based devices, and let's think of endpoint devices. They all mean basically the same thing, and that is we're really, really far away from the cloud. We're doing a lot of compute, and in most cases, especially as we evolve, that compute goes to the cloud and then it goes from the cloud back to the device. OK, and I'm going to reference edge-based devices, and instead of calling them AI processors, deep learning accelerators, GPUs, NPUs,
I'm going to use the word engine: the engine that's processing this information. So if we look at the evolution of the edge and we look at the evolution of these devices over the next five years, we're looking at hundreds of billions of devices, and these hundreds of billions of devices are going to be demanding easy access of information to the cloud and back. To put that into perspective, I think it's forecasted there's going to be 90 zettabytes of data going from edge-based devices to the cloud and back.
So we all need to take a step to the side, put ourselves in the driver's seat of our car, and realize we've just hit a traffic jam. And what that means to anyone who has a smartphone right now, or has a voice personal assistant at home like an Alexa, or, let's say, is using Siri on an Apple-based device: I don't know about you, but there are times when I've said, Siri, directions to this address, and Siri says, I cannot help you right now. Or, play
Fleetwood Mac. Can't help you right now. OK, so what's going on there is that the device is trying to communicate with the cloud but doesn't have access to it. And we've all just become accustomed to dealing with that, those of us who have experienced it; you just wait a couple of seconds or whatever and you move on. But let's fast forward to this evolution of edge-based devices, and let's think about vehicles. Let's think about unmanned vehicles.
Let's think about flying vehicles. Let's think about drones. Let's move into medical device applications. Let's move into industrial applications. We've got an issue: for doctors in a remote location trying to save someone's life, they need access to the Internet and they don't have bandwidth to the cloud. The device needs to be able to do some processing on the device, and that's where we see the key to making some massive impacts. The other thing that Akida can do, which gets me really excited, is that it has the ability to learn on the device. We call that one-shot training, or one-shot learning.
So you're adding new images, you're adding new functionality to the device as you go. This is the growth and evolution of artificial intelligence. And we're able to do that because we are architected in an advanced way, using what's called a neuromorphic computing architecture, and neuromorphic computing is architected to function much like a brain. What I mean by that is, for our listeners and for you, Eric: right now your brain is moving, and it's consuming energy, and it's listening to everything I'm saying that is important to you.
But being a fan of yours, I would hope you have a cup of Diabolical Coffee sitting by your side, right? And you can smell that fresh coffee, but your brain is not spending a lot of time on that. And it's not spending a lot of time recognizing that your hands are resting and you're touching something, and your feet are touching something, or what you're seeing. Exactly. So, going back to what I said earlier, Akida also references a spike.
We focus on spiking, or events, and that's what makes us extremely unique and extremely advantageous as we move into this next generation of AI: we can function and focus on all these different things that are going on, but we're consuming all of our energy on the event that's taking place right now. The traditional engines, as I referenced before, and that could mean a lot of different things, those traditional engines have to process and compute all of this information.
They are consuming so much energy and power, and they're moving faster and faster and faster to solve the same problem that we're going to solve using microwatts to milliwatts, using a lot less energy, and that gives us a lot more flexibility. So I know I said a lot there, but that's giving you a little bit about BrainChip.
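As a toy illustration of the event-driven idea described above, the sketch below compares processing every pixel of every frame against processing only the pixels that changed. This is not the Akida architecture or a spiking neural network; it is only meant to show why work (and therefore energy) can scale with events rather than with raw input size.

```python
# Illustrative sketch of event-driven ("spiking") processing versus dense
# processing. Not BrainChip's Akida architecture -- just a toy showing why
# computing only on changes can drastically reduce work.
import numpy as np

def dense_pipeline(frames):
    """Process every pixel of every frame, regardless of whether it changed."""
    ops = 0
    for frame in frames:
        ops += frame.size                 # pretend one op per pixel
    return ops

def event_pipeline(frames, threshold=0.1):
    """Only process pixels whose value changed more than `threshold`."""
    ops = 0
    previous = np.zeros_like(frames[0])
    for frame in frames:
        events = np.abs(frame - previous) > threshold   # sparse change events
        ops += int(events.sum())                        # work scales with events
        previous = frame
    return ops

# A mostly static scene: the background stays put, a small region moves.
frames = [np.zeros((64, 64)) for _ in range(100)]
for t, frame in enumerate(frames):
    frame[t % 64, :8] = 1.0               # a small moving "object"

print("dense ops: ", dense_pipeline(frames))
print("event ops: ", event_pipeline(frames))
```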
And I'd say that's actually the perfect segue to something that's very important about this, the challenge that we're trying to solve, right? It's one thing for me to yell into my phone, like, Siri, find the closest Tim Hortons, or, I guess I should say, the closest Dunkin' Donuts, and get, I'm sorry, I can't help you right now. And it's like, darn it, Siri. That's fine, I'll open Google; it's got a slow connection, but I can get through it.
It's very different system to system, which is where, like you talk about, you've got rolling devices that are using LiDAR and ultimately need to communicate, and they need to do a lot of processing locally, because they have to have certain amounts of asynchronous communication and they cannot rely on every transaction being decision-driving. So they have to be able to make decisions locally. So we say all this and it sounds fantastic. Irresistible, easy: why don't you just move processing close to the edge, closer to the workload, whatever?
Great. Sounds great, right? It sounds as great as the numbers that we hear when we think about the scale that we're tackling. But then the first thing you think is, how do I drive this with so little power? To do this stuff in the data center today, or in the cloud, takes a fantastic amount of power to run GPUs and TPUs and all these things. When you move to the edge, now, this is the thing: you're talking about milliwatt representations of power usage, a fundamental shift in the ability of technology to act in the way that Akida is doing, yet
not burn the planet down like Bitcoin miners doing this crazy block processing.
Yeah, it's amazing. I'm going to bring up an example again of a scenario, and I don't want to get overdramatic, but I think we always have to visualize these types of scenarios; they are closer to us than we realize. You know, the proliferation of electric vehicles, the proliferation of vehicles overall, and the amount of compute that's taking place in the vehicles of today and tomorrow: the cars are designed to do a lot of great things.
And I'm a geek, so I get really excited about functionality; useless functionality usually gets me to buy something. So what happens, though, is that if we don't have the ability to have some of this compute and decision making on the device, we're going to find a scenario where you're driving a car, and that car sees a human, and it knows what it needs to do is swerve hard right and get out of the way.
Now, that hard right to get out of the way could actually end up causing another impact. And if it's dependent upon communicating with an external device, whether it be the cloud or something else, it's going to make that hard right whether it communicates or not, because there is a human there. And then you find out it's a plastic bag; we didn't need to make the hard right. So we need to advance, and that's what we're looking at.
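For illustration only, here is a rough sketch of the "decide locally, don't block on the cloud" pattern the example describes. This is not BrainChip code; the function names and latency numbers are invented. The point is simply that the safety-critical decision path never waits on connectivity.

```python
# Hedged sketch: the on-device model makes the decision; the cloud call is
# optional and bounded by a latency budget. Names/numbers are illustrative.
import random
import time

def on_device_model(frame):
    """Stand-in for a low-power local model; always available, no network."""
    return "swerve" if frame.get("obstacle") else "continue"

def cloud_model(frame, timeout_s=0.05):
    """Simulated cloud call that is sometimes slow or unreachable."""
    latency = random.uniform(0.01, 0.3)
    if latency > timeout_s:
        raise TimeoutError("cloud round trip exceeded the latency budget")
    time.sleep(latency)
    return "swerve" if frame.get("obstacle") else "continue"

def decide(frame):
    # The vehicle never blocks on the network for the decision itself.
    decision = on_device_model(frame)
    try:
        # If the cloud answers in time, it could refine telemetry or logging.
        cloud_model(frame)
    except TimeoutError:
        pass
    return decision

print(decide({"obstacle": True}))   # -> "swerve", regardless of connectivity
```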
And on the second thing that you brought up there, my head was spinning a mile a minute: OK, how do I want to respond here? But you start to take a look at devices that do consume a lot of energy, and one device that everybody has in their home is the refrigerator. It consumes a lot of energy, and we look at these consumer white goods appliances as something in which Akida is going to help reduce the amount of energy that's being consumed.
We look at it and we say, in the future, you're going to want the ability to recognize by odor whether your food is stale or past its lifespan. How do we do that? Akida has that capability. One thing I didn't mention that I'll dovetail on is that we focus on five sensor modalities, and two are very common in the world of processing today. One is image, or object detection, very similar to the plastic bag and what's out there and what's going on.
And the other one is voice, or keywords; again, something very common in a lot of the functionality that we have today. But there are three other modalities that will become impactful in our everyday lives. One is the ability to smell, or olfactory. One is the ability to touch, or vibration, which is somatosensory. And then the other one is taste, or gustatory. And between all of those five sensory modalities, that's where BrainChip has kind of hung its hat and said, OK, well, if we can process vibration,
then we can help in industrial applications and make things more efficient, we can save the infrastructure of roads and bridges, and we can actually help with prostheses, so people that have prosthetics can now feel by having vibration detected somewhere else in their body. And we've now seen this with the olfactory side of things. They always say you hit these inflection points at random: you design it all you want, but these random events happen.
You have what are called VOCs, or volatile organic compounds, which are breath markers, and by these VOCs you can detect different diseases, illnesses, viruses, and even different types of cancers. So with the right systems and the right engine, you can do a lot of good. You can start to make a lot of impact in areas that we only dreamed of five years ago.
And I think that's the important thing, too, when we think of any solution, and what I love is enabling solutions, which is what you and the team are tackling by bringing Akida to the market. You know, there was this show, Halt and Catch Fire, one of my favorites, and there's a famous sort of early episode where he says, you know, computers aren't the thing. They're the thing that gets you to the thing.
And so we look at what this does; it's an incredible enabling function. Now we can move to enabling prosthetics to be, you know, touch-detecting. This is the stuff, like the reason why we went to the moon. And people say, oh, well, there's no one living on the moon, why did we go to the moon? And we say, well, I'm not sure if you're familiar, but most of the things in your house were engineered and discovered as a result of research that we did there. That's why we call it a moonshot.
Yeah. And so now you have this capability given to the research community, to the industrial community. As we go from Industry 4.0 (which I didn't realize we were already at) to 5.0 or whatever it is, these are the huge, huge leaps that can be made by those communities, those creators, and those researchers that are going to tackle, as they say, big problems. This is like quantum as a concept, right? It's so far out of the realm of most people's mathematical understanding
that it becomes, oh, yeah, one day. But when you show this, and you bring this, and you show people what can be done now with this capability, this is a lot closer to the use cases that people understand than those sort of massive concepts like Elon landing us on Mars. Yeah, you know, it's been tremendously satisfying on a daily basis. We're constantly on calls with current customers and future customers and dreamers and visionaries.
I'm going to take this conversation twofold. Number one, I think it's really, really important for people to understand that the world of artificial intelligence is very complex, and a lot of people are addressing these challenges and problems and solving them in a variety of different ways. So what I'm trying to do is communicate that there's an ecosystem out there, and within that ecosystem you need to have sensors that can detect what's going on, you need to have a data set
in order to understand what you're trying to detect, and you need to have an engine that can take all that information and actually accurately predict and accurately provide feedback that you can depend on. So, again, we are the engine. But one of the things I brought up before when I talked about VOCs, and it's something that we've had some success with, is, for example, working with a couple of partners, one specifically that's over in Israel, that was able to develop a type of breathalyzer that you can breathe into and it can capture information.
And the goal was to recognize different types of viruses or diseases. And it just so happened that as they were building this product, COVID hit. So now let's apply it to COVID: we get data sets, we get all this information, and what engine is low power, operates quickly, and provides us the accuracy? Right. And so we were hitting 93, 94 percent accuracy on these data sets to determine COVID positive or COVID negative.
And that's one example of what we're doing.
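As a rough illustration of the sensor-to-dataset-to-engine pipeline described here, the toy example below classifies synthetic "VOC" feature vectors with a deliberately simple nearest-centroid engine. The data and the accuracy it prints are entirely made up and bear no relation to the partner's breathalyzer or to Akida.

```python
# Toy sketch of sensor readings -> dataset -> engine -> accuracy.
# All numbers here are synthetic; this is only the shape of the problem.
import numpy as np

rng = np.random.default_rng(1)

def make_samples(n, mean):
    """Pretend each breath sample is a vector of 8 VOC sensor readings."""
    return rng.normal(loc=mean, scale=1.0, size=(n, 8))

train_pos, train_neg = make_samples(200, 1.0), make_samples(200, 0.0)
test_pos,  test_neg  = make_samples(50, 1.0),  make_samples(50, 0.0)

# "Engine": a nearest-centroid classifier -- deliberately simple.
centroid_pos = train_pos.mean(axis=0)
centroid_neg = train_neg.mean(axis=0)

def predict_positive(samples):
    d_pos = np.linalg.norm(samples - centroid_pos, axis=1)
    d_neg = np.linalg.norm(samples - centroid_neg, axis=1)
    return d_pos < d_neg                      # True -> predicted positive

correct = predict_positive(test_pos).sum() + (~predict_positive(test_neg)).sum()
accuracy = correct / (len(test_pos) + len(test_neg))
print(f"toy accuracy: {accuracy:.2%}")
```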
It's amazing to think about. And like you talked about, one shot: for anybody that wants to pause the show and go understand what it means to be able to do zero-shot or one-shot learning, and the impact of that approach versus, like, a mass data set and continuous learning, it's a big, big change. And now you can bring one-shot learning close to where it needs to be processed, because the other problem we've got is that the data sets needed to do continuous training are potentially massive.
And while we've got 5G in some areas, and people say, oh, 5G is going to be here, that's going to save the world: well, no. You know what's going to happen with 5G? Instagram. That's what's going to happen. YouTube, Hulu. The reason why 5G is going to be full by the time it fully gets here is because we are always pacing well ahead of the capabilities with the content. And now your traffic is effectively going right alongside the latest, you know, Hulu release, their Game of Thrones or whatever craziness people are streaming.
And you're trying to do legitimate real-time neural processing. This is why being able to bring that processing local and do it there is a significant need. It's a huge boost, like I said, because right now the devices aren't capable of doing it without a significant amount of power, or without these large processing units deemed to be local. So doing this at a small scale with low power is game changing, to pardon the overused phrase.
Yeah, it's exciting. And we're at a point where we have customers, and, you know, our business model is made up of the fact that we sell silicon, and we're in volume production right now, and we also license our IP. So they're getting an encrypted IP that they can design into their system-on-a-chip, and that functionality is on that device moving forward. And I think at a time like this, this is huge. I mean, one of my colleagues says, Rob, you're having this Jerry Maguire moment, where I'm saying we have these semiconductor challenges, manufacturing and shortages.
And there are a lot of great technologies in the sandbox, all trying to solve specific problems and broad problems. There are only a few of us that are licensing the technology as IP. So the others are all scrambling to get wafer capacity and to get in line. And my Jerry Maguire moment is just made up of the fact that how great is it that some of these companies can incorporate our IP into their system-on-a-chip, rely less on the supply chains and the challenges that will exist for a period of time, and be able to make a move as a market mover and have products with a lot of functionality on them.
So we see that. And I'll stop talking about my Jerry Maguire moment. But I do like to emphasize that we have two business models that we're leveraging, and we're seeing success in both of them. The other thing is, I look at it and we have two paths, right? One is, as you mentioned, there are a lot of really exciting AI applications, and some of them are more scientific than commercially focused.
And we're at a unique crossroads where you want to address a lot of these scientific solutions because they're really cool and, you know, you're going to do something really good and beneficial. And we use that word, beneficial, and I think that's important. But we also have a business to run, and we are in the process of making sure that we have a commercialized product that all of our customers and future customers will have a lot of success with.
And I've got to call something out, which is: thank you for using the best phrase on Earth, because, you know, the way you describe things here, you can tell that you're very people-focused, even in understanding the technology. We talked about how deep we could go on the tech, and first of all, you go far deeper than you would even give yourself credit for being able to. I'm sure we could go further, but.
You call them future customers. That may seem like a throwaway to some people. I've worked with a lot of sales organizations, and every time I hear the word prospect, I cringe a little. It's a difference in the way that you think about it, and by the way, it's not even a thing you fight to make sure you don't say; it's natural. Like you just said, you know, the people we work with today: future customers.
I've got a huge respect for that. There's something in you; you're definitely solution- and people-focused. What brought you to that? We're going to dig into the Rob portion of the show right now, because I want to unpack this. It's actually very rare for a senior sales leader to not slip up and say, we were talking to a couple of prospects today. It seems like a small thing, and I have massive respect. First of all, I should say this, too, so I've got to catch it: look, I've never, as they say, carried a bag for an organization.
I've got huge respect; everyone's in sales to some degree. I've been lucky that I don't carry a quota for it, but I couldn't live without my sales teams doing what they do, and I furnish them with the tools to do what they do, right? But you walk this line where there's something different, and I want to figure out where that comes from.
All right, let’s dig into it. What do you want to know?
So what was your background? I noticed that through your career, you've actually been pretty consistently on this account executive, sales track; you've always sort of come in there. So what drew you to that, and then what drew you to the way in which you approach it?
Well, you know, I'm going to go back far. It sounds like it's a short path, but I just had my birthday yesterday, so it's been a while. Well, thank you. So I went to school to become a lawyer, and I came home from my final interview to go to law school and sat down with my parents and explained to them: OK, I start in three months, and Dad, you're going to owe X amount of dollars when all is said and done for me to go to law school. It was a very matter-of-fact statement, and my father looked at me and he said, oh, hold on a second.
And he said, I never said I'd pay for law school. You're on your own. My recommendation is you go get a job, do something, and then figure out whether you want to go. So the next day I went out, started interviewing for jobs, and I ended up getting a job selling mailing and shipping systems, learning about warehouses, learning about packaging devices, and just walking door to door in a suit and tie, talking to guys in warehouses and selling products. The only thing I realized was I was really good at it, and I was living at home and I was making a lot of money.
And I'm like, wow, who needs law school, right? And about that time, we were transitioning to Windows-based operating systems. One thing I really look back at is when parents make investments in their children: my father was really into buying computers before computers were something that everyone could get their hands on, and he had chosen Apple. So we had Macs, so I had learned how to work on the Mac OS, and by the time Windows was introduced, you know, it had a lot of the same feel.
So the company I was working for had introduced a Windows-based shipping and mailing system, and no one knew how to use it, no one knew how to operate it. I sat down and started tinkering with it, and before I knew it, I was part of a Fortune 500 company, sitting with the CEO of a company somewhere in Puerto Rico, teaching him how to use the system. And my sales career kind of took off and immediately took me into the most random decision I've ever made in my life, which was to move from this company into EDA, electronic design automation.
I got a job with a very large EDA firm that brought two guys in that had no engineering background and said, one of you, as a sales guy, will learn how to navigate all the way to the upper echelons of senior executives, and one of you won't, and within six months, most likely, one of you won't be here. That wasn't me; I was still there and was able to have a lot of success, and then that led to startups.
That's when things in technology were starting to take off. The Internet was becoming something of a thing, and this was back in the day when, you know, you went to startups and you tried to make some money. And I was very fortunate; I use the phrase, I'd rather be lucky than good. I was really lucky to be at the right place at the right time, which led to intellectual property, and again another startup, which then led to a corporate career: sales organizations evolving into large sales organizations, evolving into sales leadership, and being a part of a phenomenal run with a great group of people at a company called ARM, doing that for a long time, running their sales for the Americas as well as their sales for manufacturing, most of which was done over in Asia. And then I transitioned to the second phase.
I call it the second phase of my life, which is: I want to do another startup. I've done this before, I know what we're doing, and I know what I'm looking for. And it just so happened that one of the co-founders of BrainChip had been a customer of mine; we had done business together. We were having a discussion and he said, Rob, it'd be great if you could join our company; we need to build a sales and marketing organization.
And I think you'd be phenomenal doing it. Which led to: yes, sure, why not? And here we are, months later, and we're starting to make an impact; the things that we've put in place all seem to be executing. So this, to me, is a very, very exciting time for BrainChip, and to be a part of this cycle, this revolution, is really cool. So that might have been a little long, but that's the path I went through.
No, it's a beautiful story, because it tells you that when we choose a place to be, one of the most fundamental parts of the why is that we go there because there are people we care enough about to give ourselves to their life via the startup, with the amount of time we spend supporting each other and supporting the organization, and, you know, the sacrifice of families. It's no insignificant step to move to the startup landscape.
And especially, you know, you went to what were effectively the OG startups, right? Startups now, you know, I've got four startups that are running right now. I mean, they are startups: I've got a coffee business, I've got an affiliate website, I've got a podcast, I work at a startup, which is not really a startup because it's a 12-year-old company that was just acquired by IBM, you know.
OK, but back then there was no other option; this was, like, Fairchild Semiconductor, and it was the crazy hero story of how Silicon Valley truly became Silicon Valley, right? But the story is that we don't hear how many did try and get started. And they were very dominantly in manufacturing and physical stuff. There was no AWS. There was no, like, you-can-just-whip-up-a-startup-at-a-Starbucks. So the leap of faith was even more faithful
in the first one that you did. And now, are you, as they call it, unemployable, right? You can't go back to the big company again. It's always a very interesting thing: once you hit a certain phase of growth of an organization, quite often you look back and you say, the biggest impact I feel like I've had was, as it turns out, growing this company from one million to ten million or whatever.
And I find a lot of folks really are kind of like a SWAT team for a growth phase, you know. But you've spanned every phase through the different ways in which you've been in the industry. So I'm curious if you've got kind of a favorite size, or favorite thing, about building the team and building the organization.
It's funny, I'm reflecting on this; I was writing an email earlier today to my current team. We have some really young guys, very talented individuals that have a lot of development to do, but I also have some pretty senior guys that have actually worked for me for a while in different environments. And I was reflecting on a comment by one of the young guys who was just talking about the camaraderie of the team that we've built and how we're there for each other, and the fact that he was highlighting this level of comfort and excitement every day; it's almost like a family.
And I think about what I've been able to do. I don't know why; there's no magic recipe, but it's a team, it's an organization, and it's about getting everybody to trust each other and be there, then being able to pick up the slack when we need to pick up the slack, being able to realize what we've just done is not good enough, we have to do better, and having that mindset to win. And so that's carried on with me from very young organizations with a few people to very large organizations.
And I think that's part of the fun. The other thing, at this point in my career, that I really, really enjoy is having the frank and candid conversations, almost like a mentor. And there's a fine line when you're managing people and working with them and mentoring them about what could be some personal dynamics and so on. But when I look back at people that have worked for me and worked with me, and I'd rather say worked with me than for me, we still keep in touch.
We communicate a lot, and there's a lot of frank dialog about what would you do in this situation, and it doesn't have to be business related; it can be life as well. And to me, that's very fulfilling, more like a friendship. And when you look back at life, it's one of the things you have to look at: how are you measured, what did you do? And I had the opportunity to do an executive management program called the Program for Leadership Development through Harvard Business School.
And one of the most dynamic individuals of his time was Clayton Christensen. Oh, yeah.
Who we lost, yeah.
And I had the opportunity to spend a lot of time with him. For those who don't know, Clayton Christensen is the author of The Innovator's Dilemma. But then he had a follow-on book, after he had a stroke, called How Will You Measure Your Life? And so as we were all evolving, young and aggressive and wanting to capture the world, he was on the other side saying, does it really matter? I mean, what about your family?
What about what mark are you leaving? And so when you ask the question, all right, what do you prefer: you know, we walk into an environment and determine and understand the environment, large or small, and that's the team, let's go, let's go be effective. And one of the things that I'm really steadfast on right now is that I have children that are teenagers, and some that aren't teenagers anymore, they're in their 20s, but they're working and they're grinding and they're learning.
And one of the things I think is important is that they see some of that same work ethic from me, to an extent; I also want to spend a lot of time with them. But I think that they start to realize it. Like this podcast: my daughter and her friends will listen to this podcast, so I just referenced my daughter. And from that, when I look at making my mark, I want to make my mark for my friends and my family, and for them to see that this is life.
This is what you do. You get up every day and you give it the best that you got.
I often say that my greatest accomplishment in life will not be what I achieved, but what I helped someone else achieve.
Yeah, exactly.
And it sounds that way whenever I say it or write down some of these things. I do a mentoring program, and I've been mentored as well and continue to be; it's a continuous process, right? And it's funny, I started thinking, oh, if I start writing this stuff down, it's really cool to be able to share. But then I'm going to end up being, you know, what do they call it?
One of the fortune-cookie Twitter voices who just go on there and write threads like, you know, "I've talked to five hundred CEOs and this is what I've learned."
Right. Right, right.
But the amazing thing is we do now have the ability to have a much greater, continuous, and direct impact on people because of the way we can communicate. Different from, like, you: by the rarity of the opportunity and your choice to take it on, you got to meet one of the most fantastic sort of leaders and creators. Clayton Christensen is widely read and widely respected. And, you know, I have a friend of mine, and I was joking about something.
I said, I'm like the guy from A Beautiful Mind, but for technical marketing. And he says, I went to school and John Nash was one of my professors. I'm like, oh, wow. The opportunity to be there is different. But the other problem with abundance is also the ability to find focus. And right now, you know, it's like when you look at your wall of books and you see, like, I've got two hundred books, and effectively they're all the same: the same middle chapters and just a different intro and outro, for a lot of business books anyway.
But I still read them all. Most people, you know, they've got the Internet, they've got YouTube, they've got Masterclass, they've got school. We have so much available, but then we don't leverage it. Like, we don't listen. You know, I read everything that my dad ever wrote, right? And I sat with him as a kid building systems; that's how I got into this stuff, because he was learning at night.
And so, yeah, well, if he's going to learn, I'm going to sit beside him, you know, as long as it doesn't bother him. And then what happened: I was 12 years old, learning dBase, and then I got a job at 15 years old doing data entry, because it was on a dBase III system, and I was the only 15-year-old kid anywhere around who knew it.
I lived in a small farm town, and I was the only person other than my dad in that town that knew dBase. And so it was just so funny that that's how I got the job. And then I got into system development and, you know, diverged later on for years, but then came back to technology. But again, the reason I say this is:
just hearing that your kids will take the time to listen to what you said, not from your mouth, but, like, they'll go back and say, hey, Dad was on a podcast, let's check it out. Yeah, exactly. That's an impact. That's a profound impact.
It's funny. One of the things I was excited about on this podcast was the fact that my youngest son, as we hit the pandemic, you know, was naturally trying to keep himself busy. And, you're going to laugh in a second, he really got into coffee. OK? You see where I'm going with this. And so, you know, as a dad, I'm like, all right, let's get into coffee. I wasn't even drinking coffee at the time.
And we got a coffee machine. And then from the coffee machine, we decided to up the ante a little bit, and we got a grinder, and we were buying beans from little boutique shops, right, and we were tasting a little bit. And then I got into coffee. Now I'm into coffee, and I've got a machine that grinds and brews, and I froth my milk. And then we were buying beans from four or five different outlets all over the country and shipping them in.
And again, this was how we were entertaining ourselves. And he became the barista of the family. He wasn't drinking it, but he was making it. And I would say, I'll take a mocha, and my wife would say, I'll take a vanilla latte, and then my other son would say, I'll take an Americano. And all of a sudden the barista was just in action, and we'd have family over and everyone would say, make us coffee. And so, fast forward, he's not into coffee anymore.
He's done with coffee, doesn't want anything to do with it. And I'm stuck with my coffee machine.
And you've lost your barista. No more yelling "I've got a double half-caf, Rob" across the kitchen, so I'm making my own coffee, but, you know, he's still over my shoulder every once in a while, giving me a little bit of advice and pulling up stuff on YouTube. So I will give your coffee a plug. I am going to order it and put it up against some of the other guys that I've started to build some relationships with.
And we'll see how it goes. And that leads back to Akida, smart coffee makers, and what we'll see in the home. And I do want to make sure that everyone listening, if you have a free moment, goes to our YouTube channel: go to YouTube and just type in BrainChip Inc. Yeah, and we'll have the links in the show notes as well for folks.
We have these videos that we've put together that really show you where this is all going with the smart home and some of the devices and applications we will all find customary in our homes in the future. And I'm going to reel it back now that I've said that, and I'll tell you a funny story. If you ever bump into anyone that knows me, that really knows me, and you say that Rob told you this, they will ask you, have you had a burger with Rob?
Nice. So I just want you to realize that there’s a potential that could happen.
Burger aficionado. So I'm going to take you up on that.
I'll drop my California dude on you: I'm a burger guy, too, and I've had a lot of burgers. So back in the day, there was a very intense negotiation in the Pacific Northwest, and we had a client, or customer (a future customer at the time, who then became a customer), that wasn't being very receptive to the negotiation. And as we were sitting there having this discussion, and I had a team of about six people and he had a team of about six people, we were four hours into this discussion and getting nowhere.
And we flew all these people out for this meeting; I mean, this was supposed to be the deal. And at the time I kind of made up, hey, look, I'm really into burgers, and we're in a town we've never been to. I've got my team; why don't we take a two-hour break? I'll take my team, you give me a burger recommendation, we'll go get some burgers, and we'll meet back. And the guy on the other side said, you like burgers?
I love burgers. Matter of fact, I'm writing a book on burgers. And he said, well, if that's the case, I want to take you to my favorite burger place. And we left everyone there in the room, and we went off and had burgers and some French fries and a couple of sodas, and an hour and a half later, we had closed the deal. Right. And so that stuck with me everywhere I went.
And everyone that has bumped into me in my professional career has kind of stuck to this: oh, Rob, I've got to take you to this burger place. Oh, Rob. And that has led to burgers in Taiwan, burgers in China, burgers in Korea, burgers in Europe. And I learned over time to very graciously say, when I'm in a foreign country, I will eat the food of choice, not burgers; burgers are for here.
So I ended up writing a book on burgers for a close friend of mine, because this person had experienced a lot of burgers with me, as has my wife. My wife has experienced almost every burger with me, but the first book I wrote I dedicated to my buddy, and my wife was like, are you kidding me? You didn't dedicate this book to me? I missed on that one. It was for his fiftieth birthday. But yeah, I can hang when it comes to burgers and talking about burgers, and anyone out there interested in giving me some advice on burgers, I'll take it and potentially experience it.
Like I said, this is why, when I listen to your podcasts and the content that you create and inevitably the discussions that you have with folks, this is why I latched on to what's going on with BrainChip and the potential there. And it's incredible to see; you made a choice to go to a team, and that's a beautiful thing, because teams, especially in startups, at different phases of organizations, are very dynamic.
And what you end up having to make sure you do is find this kind of top end, that vision, you know, the reason why we're here. And that's the thing you say; I mean, Donald Miller sort of fancies it up with what he calls StoryBrand. And it's so easy when you get used to it and you start to hear it come out. But you realize that it's much more than that thing, because I've gone into many companies and I see, you know, the thing behind the administrator's desk at the front, and I see the stuff printed on the walls.
And you realize that the only person that took that in was the person that painted it. It's not reflective of the culture. The culture is how you'd all deal with each other if the walls were white; the culture is how it'd be when no one is looking in. So, knowing why you're doing what you do, what your vision is, your guiding principle, and then taking that and being able to pair it with fantastic technology that's in an exciting area of opportunity.
But it's also probably a fun challenge, because a lot of the types of solutions that you're able to enable are still being developed; they're still in a lot of research areas. So it'd be neat: in 10 years, you're going to look back and say, we did that. Yeah, it's cool to see that. But it's also a very interesting dynamic, being in an organization where you're selling something where, you know, they're going to get this one day, and then they come back and they're like, oh yeah, when he explained it to me, now it totally makes sense.
It's pretty impactful. And the amazing thing about Akida is that it is wide and deep in regards to applications and the problems we're going to solve. This is unique, and we can only say so much, but it's an exciting time. And as you said, with these types of technologies, when you look out at the future, where you're going, how you're going to get there and the impact you're going to make, my gut tells me that BrainChip can be impactful, and we've got a lot of work ahead of us.
But it's a lot of fun, and it's going to be a good ride. Looking forward to it.
And especially going with a first principles approach in developing a solution is amazing, because if we look at any competitive landscape, quite often the first thing most people will do is just turn around, head for the door and look for another objective, because it's a big challenge. But that's what it is. You choose first principles to attack a problem in a specific way, do so provably, and then find people who can take that and give it an application, give it a place in which it can be leveraged, and then ultimately see that come to fruition.
So, yeah, it's exciting. What was the most exciting surprise you had after you said, OK, I'm at BrainChip, I like the idea? When did it suddenly go, oh, hang on a second, what was that? What was the thing that really kicked in for you?
Yeah. One of the things we haven't really talked about is the learning on the device. For those that are new to artificial intelligence, and I brought it up a little earlier on this podcast, training a device, teaching it what to look for and then processing that information, is not trivial. It is extremely complex. It requires data scientists, research scientists, and it requires a lot of time, six months, a year and so on.
So having the ability to have a device where we can add a feature to it, add my face, your face, and then use that for facial detection, or for image detection in a vehicle or something to that extent, on the fly, is huge. And so I think it was day two or day three, one of the engineers said, do you want me to do a demo of Akida for you? I said, sure. I sat down and started putting some stuff in front of it, and it was recognizing everything really quickly.
Very low power, I mean. And then I said, well, what if I want to take a baseball and put it in front of Akida? And it was using a data set that had no idea what a baseball was. And he said, sure, no problem, put it up there. He trained it on the device, and all of a sudden baseball is now part of the whole data set. And I'm taking the baseball.
I'm throwing it through the image, and it's picking it up as it moves through. I'm like, wow, that's really responsive. And fast forward again: we just did a presentation about a week ago where we were teaching Akida beer bottles. We had different beers, different brands on the floor, and we were pulling them in and out of the frame and showing the viewers, hey, look, it's recognizing them really quickly. But we decided to show Akida two beers from the same brand whose labels and imagery were so similar to each other that they confuse other devices.
And we were able to recognize them, we were able to process them quickly. And the whole on-device learning thing is mind blowing, and it will become impactful for all of these, from industrial applications through automotive through consumer applications, in the future. So that's the one thing that I get really excited about.
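For readers who want a concrete picture of what "adding a class on the fly" means, here is a minimal, hypothetical sketch in Python. It is not BrainChip's actual API; it only illustrates the general idea of one-shot class addition by storing a prototype embedding rather than retraining a network, and the embedding function and images below are toy stand-ins.

```python
# Conceptual sketch only -- not BrainChip's actual API. It illustrates the idea behind
# adding a new class ("baseball") on the fly: store a prototype embedding for the new
# class instead of retraining the whole network.
import numpy as np

class PrototypeClassifier:
    def __init__(self, embed_fn):
        self.embed_fn = embed_fn          # any image -> fixed-length feature vector
        self.prototypes = {}              # class name -> mean embedding

    def add_class(self, name, example_images):
        # Few-shot style learning: average a handful of example embeddings.
        embs = np.stack([self.embed_fn(img) for img in example_images])
        self.prototypes[name] = embs.mean(axis=0)

    def predict(self, image):
        # Nearest-prototype lookup -- cheap enough to run continuously at the edge.
        q = self.embed_fn(image)
        return min(self.prototypes,
                   key=lambda name: np.linalg.norm(q - self.prototypes[name]))

# Toy usage with a stand-in embedding (a real system would use a trained feature extractor):
toy_embed = lambda img: img.mean(axis=(0, 1))            # per-channel mean of an HxWx3 array
clf = PrototypeClassifier(toy_embed)
clf.add_class("cup", [np.random.rand(64, 64, 3) for _ in range(3)])
clf.add_class("baseball", [np.random.rand(64, 64, 3) + 0.5 for _ in range(3)])
print(clf.predict(np.random.rand(64, 64, 3) + 0.5))       # most likely "baseball"
```

The point of the sketch is the workflow, not the math: a couple of examples are enough to register a new label, and inference stays a cheap lookup that can run locally.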
Yeah, that's it. On-device learning is where amazing things are now possible. I'm literally overly excited by it, because the potential is incredible. And I've been lucky recently: we've had a lot more folks on, and we've been diving in a bit more on machine learning and some of the capabilities. And then you realize it's still a fairly small community, very much in the research phase compared to the broad computing frameworks. So when you start to show people these capabilities and you hear about how they're trying to solve the problem, they're solving it in papers.
It's still stuff that's being submitted as research. And yet they can take that blackboard or whiteboard theory and literally put it into place now, because, like you said, Akida has this capability. OK, cool, this is no longer just the theory of what self-driving cars are going to be like. All right, let's show you. The other thing is, and this happened the other day, my guys were really excited.
They called me into a room, like, Rob, you've got to take a look at this. I highlighted earlier not being dependent upon the cloud to do your processing. And there's a company out there that has a phone that most of us have, or a lot of people have, that has a software update coming out. And they talked about the fact that if you put your phone on airplane mode, you can now run their assistant on device without being dependent on the cloud, providing you more privacy and security.
And my guys are jumping up and down: we do that, but we do it in hardware. I mean, we're on to something here. And we are, we are. And that's the other thing that you'll see. There are a lot of applications in software, and for those that are listening, there's a lot of great functionality in software, but it's in software. And what that means is it consumes a lot of energy, it consumes a lot of power, it eats your battery. Because if you're solving it in software, I've got bad news for you about what the power consumption levels are.
Exactly. So when we talk about edge devices, and we talk about all the stuff that's dependent on being independent and functioning without consuming a lot of power, software is a problem. So at the end of the day, we look at this and we look at what this leading technology company is doing, and we say to ourselves, we're doing the right things. We are doing the right things. We're just going to continue to march down our path and execute.
And like I said before, it's the tip of the iceberg.
It is, and it's amazing what we can do, and the potential is fantastic. So first of all, thank you, Rob. This has been really fun. For folks that want to reach you to chat further and learn more about BrainChip, like I said, we'll have links to the site and we'll put the link to the YouTube channel. And what's the best way if folks want to get in contact?
Yeah. So if you go to www.brainchip.com, there is a link for information. You can fill that out and it will go directly to our sales team, and most likely I'll get a look at it as well. You can reference that you were part of DiscoPosse and I will personally respond to you and start a dialog. You can also go through the sales link at brainchip.com, you can contact us through LinkedIn, and you can contact us on Twitter at BrainChip underscore Inc.
But for those who are curious, I really want to emphasize that we're using YouTube as our platform for education and for understanding the company and who we are. So subscribe to our YouTube channel, BrainChip Inc., and that way you'll be updated every single time we have a new video or a new presentation. And when this is uploaded, I'm sure it will be on our YouTube channel as well. But those are the best ways to reach us.
And we'd love to talk to you and educate you more about BrainChip and what we're doing. But, Eric, thank you very much for having me. This conversation was great. Thank you.
Chris Wexler is one of the Founders and CEO of Krunam, the best in class image and video classifier of Child Sexual Abuse Materials (CSAM).
Krunam is in the business of removing digital toxic waste from the internet using AI to identify CSAM and other indicative content to improve and speed content moderation. Krunam’s technology is already in use by law enforcement and is now moving into the private sector.
We explore the seemingly intractable problem of CSAM, how Chris and the team at Krunam are working to solve it, plus the incredible story behind the name of the company. This chat covers everything from the technology to the ethics of the challenge. Thank you Chris!
Slater Victoroff is the Founder and CTO of Indico, an enterprise AI solution for unstructured content that emphasizes document understanding. He’s been building machine learning solutions for startups, governments, and Fortune 100 companies for the past seven years and is a frequent speaker at AI conferences.
What is very interesting is that Indico’s framework requires 1000x less data than traditional machine learning techniques, and they regularly beat the likes of AWS, Google, Microsoft, and IBM in head-to-head bake-offs.
Slater and I discuss AI, AGI, how to relate these topics to newcomers, how Machine Learning and ethics come together, and also how MMA relates to how he tackles startups and team building.
This really is like a lesson in AI and Machine Learning, tapping into the subject for both newcomers and veterans of the field.
Dan Burcaw has founded companies on the forefront of profound technology waves: open source software, the smartphone, and cloud computing. He describes himself first as a serial entrepreneur; a repeat startup founder and CEO with his latest company being Nami ML.
We have a deep discussion about how leveraging services and systems to let your teams do what matters is powerful in both business and life. We also talk about how Dan has created and operated his companies, plus some great personal insights into being a leader.
Luis Ceze is a computer architect and the co-founder and CEO of OctoML. He does research at the intersection of computer systems architecture, programming languages, machine learning and biology.
OctoML is doing some very cool things around democratizing ML and transforming how ML models are optimized and made secure for deployment. Luis shares a lot of great info on the foundations of ML, the ethics of data, and how he builds a team.
Oh, yeah. Welcome, everybody, to the DiscoPosse Podcast. My name is Eric Wright and I'll be your host. And this is a really fun episode: if you're digging machine learning, then look no further.
You’re in for a great conversation. Before we get started, though, I want to make sure I give a huge shout out to all the great supporters and fans and friends of the show. This episode is brought to you by our favorite and good friends over at Veeam software.
This is everything you need for your data protection needs. I trust this company with my data, my identity. My goodness, whether it's in the cloud, whether it's on premises, whether it's cloud native with the new stuff they're doing around their recent purchase of a company called Kasten. Really cool stuff.
Whether you want to automate and orchestrate the entire kit from end to end for full business continuity and disaster recovery with Veeam Availability Orchestrator, you name it, Veeam's got all sorts of goodness for you. If you want to check it out, you can easily go to vee.am/discoposse and also let us know that you came from ol' DiscoPosse's podcast.
It's kind of cool, but the Veeam family — it's hard to say "the Veeam family" — are extremely cool in that they've been great supporters. I love the platform. I love the team. And in fact, if you go back in our archives, you can hear Danny Allan, who's a fellow Canadian, fellow cyclist and also a really fantastic human who's the CTO over at Veeam. I was really lucky to have Danny on. But at any rate, go check it out.
Please do.
I definitely believe in their platform and their product. Go to vee.am/discoposse.
This is also brought to you by the four step guide to delivering extraordinary software demos that win deals.
This is something that I decided to build myself, because what I found is that I've been continuously involved in sales processes, and in listening to folks that are struggling with being able to connect with people, whether it's in product marketing, product management, sales, or technical sales. So what I did was take all the lessons that I've captured myself and from my peers and compress them into a very easy to consume, concise book. It's called The Four Step Guide to Delivering Extraordinary Software Demos.
It teaches you how to demo, how to listen, how to connect, how to engage, and ultimately how to get to problem-solving in the way you show your platform. Super cool.
Plus there's an audio book, a course, and I do regular AMAs for folks that buy the package. So go to velocityclosing.com and you can download the whole kit right out of the gate today.
With that, we're going to jump right into the episode. This is Luis Ceze, a fantastic person who I was so happy to have on. He's the CEO and co-founder of OctoML.
Not only have they got the really cool thing they call the Octomizer, which is a fantastic name for a product, but they're doing some really neat stuff around democratizing machine learning and making highly performant and secure machine learning models.
Really, really cool. So check it out. Plus, Luis talks a lot about building the business, the educational impact of where technology is going, and so much more cool stuff.
Anyways, I hope you enjoy the show as much as I did. Hi, this is Luis. I am a co-founder and CEO at OctoML and a professor of computer science at the University of Washington, and you're listening to the DiscoPosse Podcast.
So this is fantastic. I do want to very quickly introduce you, as your company is doing some really neat stuff.
And of course, I say this as a precursor to what you're going to tell us. For the people that are listening: we hear ML and AI and it becomes this wash, where it's assumed that, as they always say, no one really believes what's actually going on.
I've dug in and I'm excited about what you and the team are doing. So I wanted to lay this out.
You really are solving a very genuine and interesting challenge, and I can't wait to figure out how you got to solving these problems.
So anyways, let me have you take it away.
Let's introduce you to the audience and talk about where you're from and how you got to begin the OctoML story.
That sounds good. Yeah. So I have a technical background. Most of my, I guess, intellectually active life has been in computer architecture, programming languages and compilers. I did my PhD at the University of Illinois. I spent time at IBM Research before that, working on large scale supercomputers like Blue Gene, primarily applied to life sciences problems. And then the University of Washington, where I've been for almost 14 years now — it's kind of crazy to think about my research career.
My research there has been focused on what we call the intersection of new applications, new kinds of hardware, and everything in between — compilers, programming languages and so on. About five or six years ago, we started looking at the problem, or rather the opportunity, based on the observation that machine learning was getting popular super fast. Because machine learning allows us to solve interesting problems for things that we don't know how to write direct code for.
Like, for example, if you think about how you would write an algorithm to find cats in a photograph, it's really hard to write the direct code for that. But machine learning allows us to infer a program, to learn a model from data and examples. This approach has proven to be really powerful, and machine learning is permeating every single application we use today. So anyway, six years or so ago, we started thinking about the fact that there's a variety of machine learning models people care about, for computer vision, natural language processing, time series predictions and so on, and a variety of hardware targets that you want to run these models on.
This includes CPUs, GPUs, and then accelerators and FPGAs and DSPs and all sorts of compute engines that have been growing really fast. So you had this interesting cross product.
You have lots of models and lots of hardware, so how do you actually get them to run well where you need them to run? That includes the cloud, the edge, implantable devices, smart cameras, all of these things. And one thing that's interesting to note in this context, about machine learning models as computer programs, is that they're very sensitive to performance. They're very compute hungry, memory hungry, bandwidth hungry.
They need lots of data, they need lots of compute. Therefore, making them perform the way you want them to perform, to be able to run fast enough and/or use a reasonable amount of energy when being executed, requires quite a bit of performance tuning. That means that if you look at how machine learning models are deployed today, they're highly dependent on hardware-vendor-specific software stacks — like Nvidia, with their GPUs, has cuDNN and the CUDA stack.
ARM has its Compute Library, Intel has its own, and other hardware vendors in general have their own software stacks. This is not ideal, because it means that somebody who wants to deploy machine learning models needs to understand ahead of time where they're going to deploy, how they're going to deploy, and use custom tools that typically aren't super easy to use. And there might not even be a software stack for the hardware that you care about.
That works well, right? So, long story short, the research we started with a vision six years ago was to try and create a common layer that maps the high level frameworks that people use — think of what data scientists use, like TensorFlow, PyTorch and so on, or NumPy — and bridges them to hardware targets in an automatic way. So you don't have to worry about how you're going to deploy it: create this clean, open, uniform layer that automates the process of getting your models from data scientists to production.
Well, this seems like a good idea, and people would agree. But there are a lot of challenges there, right? Because of the way machine learning models are deployed today, they rely on hand-tuned, low-level optimizations of code. That really means understanding the model, understanding the hardware, and tuning the low level code to make sure you make the most out of that hardware. So that takes a tremendous amount of work.
It's not sustainable. So the research question that we started exploring was, can we use machine learning to optimize that process? Essentially, use machine learning to make machine learning faster on your chosen hardware. And that was how the TVM, the Tensor Virtual Machine, project was born. We started this project five or six years ago, and fast forward to today: it's a top level Apache Software Foundation project called Apache TVM, and it has been adopted by all of the major players in ML, including Amazon, Microsoft, Facebook and so on.
It's supported by all of the major hardware vendors. It is actually the de facto open standard for deploying models on a bunch of hardware targets, and it's open source. So AMD, for example, adopted TVM as their official software stack, and is building, together with OctoML, support for AMD CPUs and chips on Apache TVM. And then other companies like Xilinx, which makes FPGAs, and a bunch of other nascent hardware companies are using Apache TVM as their preferred software stack.
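For a concrete sense of what that common layer looks like in practice, here is a rough sketch of compiling a model with Apache TVM for a CPU target. Exact API details vary across TVM releases, and the model file, input name, and shape below are placeholders, so treat this as an illustration rather than a copy-paste recipe.

```python
# Rough sketch: import an ONNX model, compile it with Apache TVM for a CPU target,
# and run the compiled artifact. "model.onnx" and the input name/shape are placeholders.
import onnx
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("model.onnx")
shape_dict = {"input": (1, 3, 224, 224)}            # depends on your model
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

target = "llvm"                                      # e.g. an x86 CPU; could be "cuda", ARM, etc.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run the compiled module on the local device.
dev = tvm.device(target, 0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
out = module.get_output(0).numpy()
print(out.shape)
```

The same front-end import works for other frameworks, and only the target string changes when you aim at different hardware, which is the "common layer" idea he describes.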
And just one final thought — I know this has been going on a while, but there's no rapid way through this.
This is a super important part of understanding how we even got to the start line, well before where we are today.
Right, right. Yeah. So anyway, TVM has been adopted both by end users and hardware vendors. And the way to think about TVM in one sentence is essentially a compiler and runtime system that forms this common layer across all sorts of hardware. Think of it as a 21st century operating system for machine learning models that runs on all different hardware. That's Apache TVM. It has almost 500 contributors from all over the world and has been adopted, as I said, by all the major players in the industry.
And we formed OctoML about a year and a half ago to continue investing in TVM. All of the core people around Apache TVM are co-founders of the company — three PhDs from the University of Washington. And another co-founder, Jason Knight, was head of software products at Intel and left Intel at the time to join the company. So OctoML today is about 40 people. Our mission is to build this machine learning acceleration platform to enable anyone, in a very automatic way, to get their models deployed on the hardware that they want without having to fiddle with different software stacks or having to tune low-level code to deploy the model.
Really, we are about automation and democratizing access to efficient machine learning, because the tools today require quite a bit of specialized skill. And I think that's where we really want to begin: abstractions are generally built because they allow for diversity of platforms above and below the line.
Wherever that abstraction layer sits, the appropriate abstraction is a fantastic place where a platform begins.
Then even further up is how you organize a commercial entity that can create additional value.
Even beyond that, it's really amazing because, especially in a niche area like this, you look at the folks contributing to TVM, who are obviously well down the road. People are thinking that ML is coming — it's already here in a major way, anyway.
But beyond the abstraction, now there's the optimization, and we'll talk about the Octomizer approach to it. Maybe give a sense of what a non-optimized machine learning model does relative to an optimized one, because I think it's hard for people who don't get it to understand this.
Yeah, great question.
I love that question, Eric. So the unoptimized version typically means you get a machine learning model and you run it through, say, TensorFlow's default deployment path, or PyTorch, and you choose a CPU or GPU. And most of the time what you get is not deployment ready, because it's not fast enough, or it uses too much memory, or it doesn't make the most of the hardware, or you don't get the throughput that you want.
Or if you're deploying in the cloud, it's too expensive because it uses a lot of compute. Now, if you run that model through TVM, what it will do is take that model and generate an executable that's tuned for the specific hardware target you're going to deploy it on. It essentially generates custom code, using its machine learning magic that we can get into if you want, to find the best way of compiling your model for your hardware target, to make the most out of your hardware resources.
And the performance gains can be anywhere from two or three X all the way to 30 or 40X. We've had a conference in December for each of the past three years, and there are cases of folks showing up to 85X better performance. And anything above 10X is not a nice-to-have, it's an enabler. If you make something five or 10X better, you enable something that wasn't possible before because it was just too slow or too costly.
And that's the level of performance gain we're talking about here. This can translate into enabling somebody who before was too slow to deploy — now you can deploy. It reduces costs in the cloud, because 10X faster means 10X cheaper to run in the cloud, and so on.
This also helps to answer the myth — one I think people believe — that there is a single hardware-specific machine learning unit that fits everything.
Well, there are obviously hardware specific iterations.
Each model, each data set, based on scale, size and use — there are a lot of factors, such that even the most perfectly designed physical unit with a broad set of uses, whatever the right combination of things is, may not be appropriate for every model.
Right.
So this is not like there's a really good gaming laptop and a really good machine learning box. It doesn't take long, once you get to any real scale of using machine learning, before even a dedicated machine learning node is not optimized for your particular model.
Absolutely. Another way of saying that is that even if you have fantastic hardware and numerous resources, if you don't have good software to make use of it, it's just no good for you. The thing is, it takes quite a bit of work to massage your model to make the most out of a hardware target. And it doesn't mean that all hardware targets will be appropriate for all models, but by and large it depends on fairly low level, sophisticated engineering to get there.
And that's what we're all about: automating that at OctoML.
You have me curious, and I'm going to ask you to go down the rabbit hole right away. How do you possibly, at a code level, through software, tune models on the fly based on the hardware?
I'm lighting up at the idea of getting technical here, because I would love for folks to really get a sense of
yeah, where those challenges are being solved.
Great, cool. Absolutely. So let me just start with "once upon a time" — no, I'm not going to be that long. Fundamentally, machine learning models, by and large, are a sequence of linear algebra operations. Think of it as multiplying multidimensional data structures: matrix-vector multiplication, matrix-matrix multiplication. But sometimes with more than two dimensions — imagine a three-dimensional matrix, called a tensor.
Right. So in general, a generalization of that is the tensor. It's a lot of linear algebra operations. Now, these are very performance sensitive, because they depend on how you lay out the data structure in memory — it affects your memory and cache behavior — and on which instructions you're going to use in your processor, because different processors have different instructions that are more appropriate than others. Like, instead of doing a scalar operation where you multiply one number by another single number, you could use a vector instruction, which applies to whole vectors at a time.
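A toy illustration of that point: the same matrix multiply computed two ways in Python, a naive scalar triple loop versus NumPy's vectorized, cache-friendly BLAS path. The matrix size is arbitrary and the exact gap depends on your machine, but it is typically orders of magnitude.

```python
# Same math, two lowerings: a naive Python triple loop vs. a vectorized BLAS matmul.
# Both produce the same numbers; only the way the work is mapped to the machine differs.
import time
import numpy as np

n = 128
A = np.random.rand(n, n)
B = np.random.rand(n, n)

def naive_matmul(A, B):
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i, k] * B[k, j]   # one scalar multiply-add at a time
            C[i, j] = s
    return C

t0 = time.time(); C1 = naive_matmul(A, B); t_naive = time.time() - t0
t0 = time.time(); C2 = A @ B;              t_blas  = time.time() - t0

print(f"naive: {t_naive:.3f}s  vectorized: {t_blas:.6f}s  same result: {np.allclose(C1, C2)}")
```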
And there are so many ways — there are literally millions, potentially billions of ways of compiling the same program to the same hardware. But among the billions of possibilities, some of them are vastly faster than others. So what you have to do is search, right? Given a program — that's your ML model — and given the hardware target, and the billions of ways in which you can compile it, how do you pick the fastest one? OK, so now, to answer your question directly: how do we use ML for that search?
Well, the brute force way, and I'd say the less smart way, of doing this would be to try all of the billions of possibilities. But the problem there is that you don't have time. Imagine making a variant of the code, compiling it, running it — even if it takes just a second each, you're talking about centuries worth of compute to actually find the best program. Where ML comes into play is that, as part of how TVM operates, when you bring up a new hardware target, TVM runs a bunch of little experiments that build a machine learning model of how the hardware itself behaves.
This machine learning model is then used to do a very fast search among all the possibilities for compiling your model to the hardware target: among all of those possibilities, which one is likely to be the fastest? And that can be vastly faster — think of it as a hundred million times faster — than trying each one of them. So now you enable this ability to navigate the space of configurations, optimize the model, and then choose the best one.
OK, so a machine learning model is a combination of these operations. We apply this successively to every layer of the model, then we compose them, see how they compose, and run it through the prediction. And then we validate: are we doing a good job? The way we do that is by doing the full compilation, running performance tests, and comparing — are we doing better? Yes. And we keep the search going. Does that give you a general idea of how we do it?
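As a deliberately simplified sketch of that search loop — not TVM's actual implementation — the idea is: measure a few candidate configurations for real, fit a cheap cost model, then let the model rank a large pool so that only promising candidates get measured. Everything below is synthetic; the "hardware" is a stand-in function with a hidden sweet spot.

```python
# Conceptual sketch of cost-model-guided search over candidate configurations
# (think tile sizes or loop orders). Not TVM internals -- just the general loop.
import random
import numpy as np

def measure_real(tile):
    # Stand-in for "compile this candidate and time it on real hardware" (the slow step).
    # The best tile size is secretly around 48.
    return abs(tile - 48) / 48.0 + 0.05 * random.random()

def featurize(tile):
    return np.array([tile, tile * tile, 1.0])

def fit_cost_model(feats, times):
    # Cheap learned predictor: least-squares fit over the features.
    w, *_ = np.linalg.lstsq(np.stack(feats), np.array(times), rcond=None)
    return lambda f: float(f @ w)

def search(pool, budget=8, rounds=4):
    feats, times, measured, best = [], [], set(), None
    for _ in range(rounds):
        if times:
            predict = fit_cost_model(feats, times)
            ranked = sorted(pool, key=lambda c: predict(featurize(c)))  # cheap ranking
        else:
            ranked = random.sample(pool, len(pool))                     # cold start
        for cand in [c for c in ranked if c not in measured][:budget]:
            t = measure_real(cand)                                      # expensive step
            measured.add(cand)
            feats.append(featurize(cand)); times.append(t)
            if best is None or t < best[1]:
                best = (cand, t)
    return best

print(search(list(range(1, 257))))   # tends to land near tile size 48 with ~32 measurements
```

The payoff is the ratio: a handful of real measurements train a predictor that screens a huge candidate space, which is the "hundred million times faster than trying each one" effect he describes, at sketch scale.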
It does.
And this is the interesting challenge we have with any long running process. Just think of traditional batch computing, where folks would run a massive, long batch.
And at some point — for folks who remember the days of the overnight jobs — they'd have some four hour batch that would run. And you're five hours in and something's wrong. And there's the difficulty of assessing:
if I stop now, optimize and correct the code, do something and then rerun, is it more worthwhile to do so versus just letting it run out, even if it takes twice as long as I expected? That's a relative trade-off I think a lot of folks will remember, even if it's a five minute script: if it takes five minutes and it should take 30 seconds, you know. Makes sense.
But at the scale you're talking about — number one, the initial problem of going and using a model against a massive data set is going to take
potentially hours, days, whatever; it's going to be significant. But then to run that scenario repeatedly, before ever triggering it, effectively to find the most optimal place and way to host it — that's the real trick, right?
Yes — and you could be running those as parallel simulations, modeling it.
Anybody would think, oh, of course you're going to use machine learning — but then you've got an inception problem, right? In effect, you have to do something that's incredibly complex to solve an even more complex problem. It seems untenable for people to imagine that this could be done. So this is why — yeah, it is how we do it: we use machine learning to make machine learning faster.
So now, let me go ahead and answer a question you might ask: what do we offer as a company? What is the commercial story here? TVM is open source, so anyone can just go to the Apache TVM GitHub repo, download the code and run it. But it takes a while to set it up, because you have to set up hardware targets, and then you have to collect these machine learning models that predict how the hardware behaves.
And, you know, it is a sophisticated tool that works really well, but it does require quite a bit of lifting to get going in the context of an end user. Well, what we did at OctoML is build a platform called the Octomizer, which is a fully hosted software-as-a-service offering on top of TVM that automates the whole thing and has a really nice graphical user interface.
You can upload models, choose your hardware targets, click the magic button — optimize — and then a few hours, maybe a day later, you get an executable that's deeply optimized for your hardware target of choice. The way this is different from the experience of using TVM directly is, as I said, it's much easier. You don't have to install anything. There's no code required. You literally upload the model, choose the target and download the result, or you can use an API.
But also, the Octomizer has a preset set of these hardware targets, with the machine learning models of the hardware already built and ready to use, so you don't have to go and collect them yourself — you can be productive from day one using the Octomizer. And this is what I think is incredible.
I spoke with somebody very recently, and I'm just enthralled with this idea of where we are today. Now, in 2021, as we record this, it's like —
the accessibility of both models and training data. If you wanted to try and get into the business of machine learning before, even just to dabble with it, you had to get the hardware and have some data, and the 101 level of machine learning was very low level, very simplified, and there was no access to go beyond that and really test it. Now, because of what you've got with the Octomizer, like I said, you're shipping stuff that's there and ready to go, so you don't even need to worry about those first steps, which are incredibly challenging.
And this is what I want to impress upon people: there's effectively no reason why you wouldn't just get started, because it's been done for you and it's accessible to you now. It's a wondrous time where we can do these things. Because for all of the things that people are worried about — one, "I don't understand complex mathematics, so what can I do with machine learning?" Well, it's not necessarily about that.
Not exactly. It's about abstracting those away. Right. And secondarily, "how do I learn to trust what machine learning does?" The only way to do it is to get in and see it. It's weird, because machine learning has this really odd thing.
Even when we talk about AI, sometimes I describe it as like the scene from The Matrix when the Oracle says, don't worry about the vase. And he says, what vase? And he turns around and knocks a vase off the table, and she says, what's really going to bake your noodle is whether you would have done that if I hadn't told you about it.
And when we explain machine learning — like you said, how do you find a picture of a cat? How do you tell the difference between a blueberry muffin and a Pomeranian? There are all of these things where people don't trust the outcome because they saw a meme about it one day. But you can dive in. You can test it out. You can put data through it. You can see outputs.
It's there today because of what you and the team and the community are doing around this stuff, which is pretty amazing, right?
Yeah. And I want to pull on that thread for just a minute, on trusting machine learning models. There's a whole subfield of machine learning which is about explainable AI, or explainable machine learning models, to get people to trust them more. But I would even start by asking how we trust software, period. Let's forget about machine learning and just think about software. The way we trust it is by saying, we've put this much time into testing it, and you have some confidence that it's likely to work in the scenarios your users care about.
We do not do formal verification of all software today. You don't formally verify Excel, or Oracle's or Microsoft's code. Basically, you test it extensively, and then you have confidence that it behaves the way you expect it to behave, and then you put a checkmark on it and ship it. Machine learning models are that way, too.
You have a training set, you have a test set. You train it, you test it, and then there are all sorts of ways of getting more rigorous about the testing. But you know that it's going to work within the set of inputs that it was certified for, tested for. It works well, right?
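A minimal example of that train-then-test workflow, using scikit-learn as a stand-in (assuming it is installed). The dataset and model here are arbitrary; the held-out accuracy is the kind of confidence number he is describing.

```python
# Train on one split, evaluate on a held-out split, and treat the held-out score
# as your evidence that the model behaves as expected within that input envelope.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```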
So then you could go into a whole other huge, fun discussion that we could have at some point — probably not now — about how you explain to humans what's inside a model so that they would trust it better.
Right. And that might involve compromising performance. You might want to choose a model that's not quite as fast, but at least when you look at it internally, you can explain to humans how it works. That might be useful for, say, medical diagnostics, where you want a doctor to see, this generally looks right — here's the decision tree. So we can help with those cases too, by integrating the Octomizer, because if you choose to use a model that's not as fast just because it's potentially more trustworthy, we can help you recoup performance by giving you a highly optimized version of it.
And this is where, I would say, people realize the difficulty they're facing when they ask, how do we get better at machine learning? You brought up the perfect point: we just broadly trust software, as if it scales linearly — oh, I can almost run as fast as the machine. We kind of grew up with it.
So we don't distrust it as much; we don't necessarily trust it, but we don't distrust it. With machine learning, and quantum, and the idea of being able to scale far beyond human capability, there's this really odd case where the distrust is greater than the trust, even though there's no real foundation for it. Effectively, it comes down to a lot of the fundamentals of behavioral psychology — the way we place bets, the way we think about outcomes versus efforts.
It is really funny — or peculiar, I should say — to see how people behave. But when they see the outcomes, like you said, they're like, oh, OK, now that's fine.
It makes sense. But when you go one step further — especially with the folks that are going to be customers, the folks that you're talking to — they're further along, where they know the risks. Yeah, and the benefits outweigh them. Right, the benefits outweigh the risks. Exactly. And also, trust is the kind of property, kind of feeling, that takes a while to build but is very easy to lose.
Right. It takes a lot of work to build trust. It means investing, learning how to live with it for a while as it works really well. But then you make a small change, because models evolve fast, and that one breaks, and it makes you lose some trust in it. But that's just part of how it is.
And I feel like, given the strides made in machine learning research in getting models to be more trustworthy and more explainable, together with all of the machine learning systems work — which is what we focus on, making these models perform and run well in the real world — we're very quickly going to trust them just as much as we trust software. And for things that are really transformational to our lives, like self-driving cars, like automated diagnostics, like using AI to design drugs and therapies and diagnostics, the progress and the impact on human life is so far beyond the risks it can cause, in my opinion.
This may be philosophical, but I do think that in this case the benefits far outweigh the risks.
So I'd be curious — especially because you're obviously very close to it — you're doing this in academia as well as in business, so you're really tackling it on two streams, which is always amazing.
And I think that's where a lot of this stuff comes from. In fact, a lot of amazing technology startups have been founded from academia and made their way into commercial business, and then those folks maybe get into venture capital. It's neat to see that progression.
But, you know, there are very few people that most people know — whatever the descriptor, most or many — who they could look to to get that first understanding of the impact and importance of machine learning on society. Obviously, one that I know off the top of my head, of course, is Cassie Kozyrkov.
She's with Google and a fantastic person who truly does a lot to share the human side of the value of machine learning, and it's neat to see those stories. So I'm curious, Luis — who in your peer group, and yourself included, how do you get people involved and interested in the potential that we have as a society because of machine learning?
Yeah, great question. The way I think we get people interested and excited about it is just by continuing to show the kinds of problems we can solve, the kinds of new applications we can build with machine learning. Let me take a recent example: all of the progress going on with these large language models, based on GPT-3, for example. The ability to summarize text is fantastic, and generating new text to help you draft is great.
These technologies just seem like magic. They work really, really well. And I think that has the potential to amplify our ability to understand large bodies of text.
For example, some of my colleagues and friends at AI2 here in Seattle have been working on tools that help one understand whole bodies of knowledge in a specific field. They've done this for COVID recently, for example. I think these are really amazing applications that capture the imagination and have a direct impact right now, and that really gets people more excited about it. I'm not sure if that's what you were asking. No, I think that's it — it's all about showing it. Great.
So that's one of them. The other one — I know that we're still far away from fully autonomous vehicles, but just seeing the kinds of things that are in ever more accessible electric vehicles, from big players like Tesla, for example. A Model 3 can do real-time computer vision and build a 3D model of the world around it. You see the cars and the people crossing the streets, and this is happening all the time.
It's like, oh, this is a model of what the car actually sees. As people get exposed to this, they realize more and more how exciting it is, and they think about the applications it enables.
And then a final one, more academic, that's becoming more top of mind today and that I find particularly exciting — it happens to be related to one of my personal intellectual passions, molecular biology and life sciences. I think that nature is a boundless source of two things: mechanisms and molecules that we can go and use to do useful things, and, second, all sorts of interesting problems where you can use AI and ML to understand how nature works.
It has tremendous impact on understanding life, understanding disease, understanding new therapies and so on. And I think it's fair to say that the strides we've made in understanding gene regulatory networks and a lot of other life sciences processes would not have been possible without machine learning.
Right. So this has an incredible effect today — how we can design a vaccine super fast, how we can test it super fast, how we can do DNA sequencing of different people and understand how it correlates with the things we observe. This all boils down to being enabled by computational processes, largely based on machine learning.
And that's one of the — I don't have the numbers handy, but I know it's a good example to use — as far as the economies of time and scale we've achieved: look at sequencing DNA, both the physical exertion required to do it, the hardware, the time and the cost, 10 or 20 years ago. It doesn't take long to go back and see.
It was thousands of dollars, and an enormous amount of time, versus now it is pennies on the dollar, in effect, relative to what the cost was not too many years ago.
Absolutely, yeah. And I should mention that one of the research areas I'm still active in is essentially using DNA for data storage, which involves writing DNA and reading DNA — sequencing — and this relies on the progress of DNA technology, so I watch these trends very closely. Just to put numbers on it: the first human genome sequence, which was a huge landmark a couple of decades ago, actually cost over a billion dollars.
And today you can do a full genome sequencing for under a thousand dollars, which is literally a million-fold decrease, a million X decrease in cost. And this is all, by the way, enabled not only by better understanding of the biology — of course, the genius idea of next generation sequencing — but from there to today, a lot of it is really advances in computing infrastructure, because it's very compute intensive, and advances in imaging technologies and optics.
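Taking "over a billion" as roughly $10^9$ and "under a thousand" as roughly $10^3$, the arithmetic behind that million-fold figure is simply:

\[
\frac{\$1{,}000{,}000{,}000}{\$1{,}000} = 10^{6}\ \text{, i.e. about a million-fold reduction in cost.}
\]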
And advances in machine learning, decoding very faint signals to read the letters that are in the DNA sequences. Yeah — all riding on the back of Moore's Law plus computing. That's right.
Well, it's interesting to see, as we come through, that a beautiful sort of readiness has arrived across all of these criteria. Right?
Like you said, computational power, the underlying scientific understanding — all of these things move together, effectively like horses racing beside each other.
And when one crosses the line, the rest cross very shortly after, because one effectively carries the other. And there's this merger of things that has to occur to get, from there, an exponential increase in capabilities.
And we've seen so much recently — and we as humans far overuse the phrase exponential.
Right. People like us. And, literally, I talked with Jo Bhakdi, the founder of a company called Quantgene.
And we talked a lot about that. That's their whole thing: they're using quantum computing and genome sequencing to find better ways to detect every kind of cancer. He says that 10 or 20 years ago, you would have a team of scientists and an entire research area focused solely on researching and mapping one type of cancer. And now, because of the ability we have in quantum computing, in hardware, software, people and understanding, they can seek every possible type of cancer collectively through the research they're doing.
And this is really first principles — this is exponential growth in what we can do as an outcome, because of the technology that we've enabled.
So what you've done — what you and the industry and your peer group are doing — is using first principles to set the stage for an unlimited amount of new first principles thinking that's going to do fantastic things. Yeah, it's a great point. And to tie this conversation back to what OctoML does: there are a lot of problems, or opportunities, today, specifically in life sciences. For example, if you're doing deep learning over genomic data, without significant optimization it would be beyond the reach of most people — we're talking about problems that could literally take millions of dollars worth of compute cycles in cloud services.
If you make that 50X faster, a problem that cost millions of dollars now costs in the tens of thousands of dollars, which is something that becomes feasible. That's also something we're very excited about: not only do we make the applications we're doing today more accessible, faster and more responsive, but the degree of optimization we offer could enable things that would be beyond the reach of many today, in application areas that are more custom — life sciences, I think, is one great example.
So, yeah, and I think this is the fantastic opportunity you've got now for your current and future customers: it's no longer about baseline achievement. We can immediately begin to think about optimization, versus before, when that wasn't accessible — it was just a matter of, can we do it? And now it's, can we do this, and are we doing it in the most effective and optimized manner?
Right. Yeah. And that is often necessary to actually make it work. Let me give you an example, without disclosing anything sensitive. We've been working with customers that deploy ML at scale, both at the edge and in the cloud. On the edge side, think of a machine learning model that helps you understand the scene so you can replace objects in real time, say for video chat, for example.
And then you have that app running on all sorts of different devices — different types of laptops, PCs, tablets, phones and so on. Once you have a model like that, what you have to do today to deploy it is, every single time, go and optimize it and make sure it runs fast enough on this device, and on that device, and on every different model. It's just insurmountable without automating all of that, which is what we do with the Octomizer, and that's something that's enabling the evolution of these applications.
And on the cloud side, if you're doing things like computer vision over large collections of images or video at a large scale, this can cost an incredible amount of money if you don't optimize. It means that until you hit a certain cost target, you can't do it — even for companies that have deep pockets. That's how significant what we're talking about here is.
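To picture the per-device chore being automated away, here is a hedged sketch of compiling one model for several example hardware targets with Apache TVM, reusing the `mod` and `params` from the earlier import sketch. The target strings and file names are illustrative only, and a hosted service like the Octomizer is meant to wrap this kind of loop for you rather than you scripting it by hand.

```python
# Sketch of the "optimize for every device class" loop: compile the same module once
# per hardware target and keep one artifact per device class. Targets are examples.
import tvm
from tvm import relay

targets = {
    "x86_laptop": "llvm -mcpu=skylake",
    "arm_phone":  "llvm -mtriple=aarch64-linux-gnu",
    "nvidia_gpu": "cuda",
}

def build_for_all(mod, params):
    libs = {}
    for name, target in targets.items():
        with tvm.transform.PassContext(opt_level=3):
            libs[name] = relay.build(mod, target=target, params=params)
        libs[name].export_library(f"model_{name}.tar")   # one deployable artifact per target
    return libs
```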
And it becomes the interesting conundrum that, in order to test whether your model is effective, how long it's going to take to run, and what the optimization opportunities may be, you run it against your data set. But if you run it repeatedly against the same data set, that actually goes counter to the value — it's risky, because you're not going to get representative results, and it may skew some results if you send exactly the same data through exactly the same model over and over again.
Right — you'd just be doing it again. Yeah, yeah.
So that's why, effectively, people would probably throw up their hands and say, hey, at least we know it works; we don't know that it could run faster. There was sort of an unfortunate acceptance, up until what you're bringing to the market, that it was just the cost of doing business in ML. And that doesn't need to be the case anymore, does it?
Exactly — it doesn't need to be the case. And we want these tools to make that true for as many users as we possibly can. That's why we strive to be really easy to use and to raise the level of abstraction much higher. So instead of having to pair a super talented software engineer with a data scientist to go and do these things, you can have the data scientists themselves just go and use a tool that subsumes the need to work closely with an engineering team to deploy it.
Right. So, yeah.
Yeah. Well, this is the thing: we can now actually get positive business and societal outcomes instead of just technological outcomes.
One of my favorite things — I remember Peter Thiel refers to this — he says we were trying to get Star Trek, but all we got was the Star Trek computer.
We didn't get the tricorder. We didn't get the transporter. We didn't get the other things. All we got was the computer, and in fact, that's a dangerous place to rely on. We need to do things with these things. And this is why we are now at the point where we can really do amazing things.
Absolutely. And especially if you are a scientist. Right.
So I'm actually curious, Luis: what is a data scientist? Because I've started to get different pictures of what that person is today. If I'm an organization looking to hire a data scientist, what does that profile look like?
I’m curious in your experiences, given that you’re obviously very close to the field.
Yeah, that's a great question. And there are just so many possibilities here. I'd say it really depends on what kind of problem you're trying to solve, because data scientists tend to specialize in different kinds of data and different kinds of models. I would say you should approach it by looking at what kind of data you have and what problem you're trying to solve, and go after that.
You don't want a data scientist with zero domain experience, because if you have some domain experience, you tend to get a lot better, more predictive models and a lot better analysis out of the data that you have. What I think you should actually focus on is people who understand the problem domain and understand the core tools in machine learning, data analytics and statistics, to go and work with your data. Now, to go full circle —
what I think is harder is finding a data scientist who can do that and can also do all of the complicated, ugly software tricks you have to do to actually get the model, or the results, to be usable as an end product. It's almost impossible to find somebody like that. This is why, early on in the life of the company, we did some interviews to see what it is we should be going after.
The number one pain point we heard from folks running these things is: well, we have great data scientists, and we've been doing better because the tools for data science are getting better, and there are more of them. But now we have to go pair them with very rare software engineering skills, and that's where the whole magic tears, because you have the data and the data scientists, but they just don't have the rest of the resources to go and make their output useful.
That's where we started. Let's zero in on automating the process between what comes out of the hands of data scientists and what should be the deployable module — take that gap and cover it with very sophisticated automation that uses machine learning. That's really what the Octomizer does.
So first of all, that is my favorite name on Earth for a platform: the Octomizer.
Sounds cool. I’m glad you like it. We love it too. Every time I say it, it makes me smile. I’ve been saying it for over a year now and I still love it, so thank you for that, Eric. I hope I answered your question, but yeah, on what a data scientist is: I’m glad the tools are getting better, but it’s just so dependent on what kind of problem you want to solve.
Yeah, it’s really about people who understand the problem domain.
So it’ll be interesting to see, because I think what we face right now as a society, as businesses, and as governments is the sense that you’ve got to wait for the next generation.
We have to wait for the next batch of students to come up through the education system with access to the tools, so you have an eight-to-ten-year cycle before people are actually able to do this. But things have so fundamentally changed now that we don’t have to wait for that. We can train people in place. We can up-level people where they’re at, through software, through technology, through capabilities.
Yeah, it’s an interesting point. I’m not sure if that’s where you were going, and it’s a complete tangent here, but I think it’s fascinating to think about the role of AI and machine learning in actually educating humans. There are ways of using ML to generate problem sets for kids to learn from, ways of evaluating those kids, and ways of using that to actually train engineers. Right.
So the potential for this stuff is just wondrous.
You know, obviously, I’ve talked with a few folks about some of the challenges around the ethics and the biases.
And definitely, I think it’s super important, extremely important, and a tough one.
I’ll ask you this leaning on your academia side, because, as you put it, with your professor hat on, that’s probably the area where this gets dealt with or questioned the most. Is it through academia that we study what the potential is? In business, it’s more like, how do you broadly get this out into the world?
But we are finding, through think tanks and through universities and academia, that we are now at the study phase, and will continue to be for a long time, of this question:
How do we make sure that we are using these tools and this data as well as possible? It’s a real conundrum, because if the data is a representation of society, how much do we steer it in order to get what we hope to get out of it? Versus, if a machine learning model gives you an output, there’s a reason it came up with that output. We may trust it, understand it, or maybe not like it, but it’s more about looking at how it got there than standing at the output stage and trying to steer it toward a belief or an opinion.
Yeah, well, this is a great question, and a super deep one. Again, it could be the topic of a whole long conversation on its own.
But I’m happy to offer some thoughts here, because I do have colleagues and friends who think about this for a good chunk of their waking hours. First of all, absolutely, we have to be mindful of biases in machine learning, especially because machine learning is so dependent on training data. We need to make sure that the data is representative of a broad set of users and is actually equitable across all of the stakeholders in how the model is deployed, and there are aspects of the model architecture and how it is trained that should be developed with that in the loop.
And I think that comes fundamentally from having a diverse team. If you have a diverse engineering team or a diverse team of data scientists actually doing this work, they will naturally point out deficiencies in the training data and in the architecture of the models. That’s the people aspect here: if we’re talking about machines doing more and more things, you have people designing those machines and those engines, and those people themselves need to be diverse.
This is why I’m a firm believer in extremely diverse teams. I’ve done that in the academic teams that I’ve built, and we pay a lot of attention to that at OctoML as well. That’s one thing.
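To make the point about representative training data a moment ago slightly more concrete, here is a minimal sketch of the kind of audit a team might run before and after training: how each group is represented in the data, and how accuracy breaks down per group. The column names and the toy dataframe are purely illustrative assumptions, not anything from OctoML.

```python
# Sketch: audit group representation and per-group model accuracy.
# Assumes a pandas DataFrame with a demographic column "group", ground-truth
# labels in "label", and model predictions in "pred" (all hypothetical names).
import pandas as pd

def audit(df: pd.DataFrame) -> pd.DataFrame:
    summary = df.groupby("group").apply(
        lambda g: pd.Series({
            "share_of_data": len(g) / len(df),             # representation in the dataset
            "accuracy": (g["label"] == g["pred"]).mean(),  # how well the model serves the group
        })
    )
    return summary.sort_values("accuracy")

# Toy usage: group "c" is under-represented and poorly served, which is the
# kind of deficiency a diverse team would flag early.
df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "c"],
    "label": [1, 0, 1, 1, 0, 1],
    "pred":  [1, 0, 1, 0, 0, 0],
})
print(audit(df))
```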
And then the second thing is just education. We have to keep bringing up these aspects of bias and make sure the technology works for all the stakeholders, and by the way, not just in machine learning, but in any engineering discipline. A friend of mine once gave a talk about bias in machine learning, and he started with a great example you might have heard before, about one of the very first photographic films from Kodak. It was essentially a chemical engineering problem: you design the chemistry, the way the photosensitive material is made.
They realized that the way they were judging whether it was good or not was by checking it against a set of people with one specific skin color. That meant that if you actually used the film with other skin colors, it just didn’t work, it didn’t look right. And that was the case, so it was biased. It’s a great example that bias in how we evaluate whether something is ready or not for all the stakeholders is not something that applies only to machine learning.
It applies to any engineering discipline. I thought this case was a really great one because it’s about something on the order of a century old: the way the tone of the film wasn’t good for all skin colors, and it shows one historical aspect of the problem. The same is true of how you design buildings, how this thinking affects architecture.
A lot of things that humans use should have this thinking applied to them, not just machine learning. It’s just that machine learning gets extra attention on this, because its applications are changing our lives super fast today, and also because it is so sensitive to data, and iteration is so fast, that it can lead to a lot of misfortunes and, let’s say, missed opportunities to make it better early on.
There’s so much positive, but unfortunately what happens is the one negative story becomes the focus, quite often, like with anything. It was interesting. I was at an event a couple of years ago, and it almost feels like it’s been that long since we’ve been at in-person events.
It was a Canadian insurance company that had built their own call center with AI and machine learning, all that stuff. They basically fed it every single customer service call they had ever taken and trained it on that. Then came the moment where they set it up to answer the next call. It took the call, dealt with the person, and went all the way through. Obviously they were listening and monitoring to see how it behaved, and it got all the way to the end and solved the person’s problem in a perfectly human-sounding voice.
And at the closing of the call, the machine says, “Is there anything else I can help you with today?”
And they stopped and looked at each other, like,
that’s never been in a training manual. There’s nothing that tells it to do that. But through all of the different calls, it ascertained that this was the best way. And then what was even funnier was the response. The person says, “No, thank you, but I just want to thank you, especially because it’s so nice to talk to a human for a change.”
I love that. Yeah, that’s it. But this is it: there’s going to be a beautiful, augmented world where we can leverage machine learning capabilities like natural language processing and all these different things.
There are companies using it to detect emotional changes in people’s voices, using it in effect to detect changes in behavior, for example for people who are at risk of suicide. There are so many incredibly positive things.
And this is why, like I said, we have a friend in common, Amber Roland, who helps you with your PR and is just a fantastic human.
She’s done a ton of stuff and has introduced me to great people over time. Every time I talk to her, there’s that human side, and she introduces me to people who are doing big things. So when she said, “I want you to talk to these folks,” I raced to reply and say I’m so glad that you did.
Yeah.
Now, of course, like you mentioned, it’s tough. This is the tough part.
It’s hard to have hero customer stories, because with a lot of the customers you have, obviously there’s going to be sensitivity, and you’re still early on, in the birth of the company.
But what is maybe another quick example of a real human outcome that you’ve been able to see come to life? Well, yeah, great, so we have several of them. Let me just speak to what kind of customers we work with today. We have two categories of customers. One is machine learning end users: companies with products that use machine learning, both on the edge and in the cloud, without getting into specifics.
I think of it as enabling much more natural user interfaces. I’d say this has a human outcome, because if you enable new voice-based interfaces on very cheap, low-end devices, you can bring them into more user scenarios, and therefore both add convenience for people who are able-bodied and add accessibility for people who are potentially disabled.
So that is a really nice outcome of just enabling more intelligence at the edge, which is something we have enabled customers to do. Those customers are machine learning end users, and then we also enable hardware vendors that did not have a solid software stack to make their hardware useful for machine learning. But I’d say that in general, the impact of what we do on human life is, one, enabling applications that weren’t possible before in terms of intelligence at the edge, and two, enabling large-scale compute problems, which could be related to, say, life sciences, that would not be accessible without the level of optimization that we provide.
So we’re really proud of what we do in terms of the impact on human life: enabling applications and things that wouldn’t have been possible before.
Well, the thing I try to remind people, too, is that when we look at phases of adoption in real life, at the hype lifecycle of so many things, we’ve talked about edge computing for a long time and people still sort of struggle with what it means. But in effect, the phone you hold in your hand, while it is a computer stronger than the one that sent the first humans to the moon, is in effect an edge device.
Edge devices aren’t just Raspberry Pis glued to the side of a cell phone tower. They’re distributed computing with different physical capabilities, different memory, different storage, different network, different CPU. And this is where the ability to use decentralization has, again, an exponential effect: rather than collecting the data and streaming it all back to central storage and processing, because the amount of bandwidth needed to stream it back...
It’s untenable. Right. And this is why being able to do processing and machine learning at the edge is an amazing leap, and what we need to do.
And this is what hammers home the value of what you’re doing, because there is no way the model you run centrally is going to run the same way at the edge; the hardware is different.
Everything is different. Yeah, you said it exactly right. And let me add one more, potentially overly dramatic, point here, which is that the speed of light is limited. Light is fast, but you cannot make it faster. The speed of light is a limitation in wireless, in any communication.
So that means some things fundamentally have to happen at a very short physical distance to enable low latency, without having to rely on long-range infrastructure and all of the hops it has to jump through. Being able to compute at the edge has this fundamental enabler, backed by hard laws of physics, that you must run things locally if you want a responsive, continuous application. Right.
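A rough back-of-the-envelope illustration of the physics argument (the distance and camera numbers here are assumptions, not figures from the conversation): even a nearby cloud region has a latency floor that local compute avoids, and raw sensor data quickly exceeds typical uplinks.

```python
# Back-of-the-envelope sketch: why edge compute wins on latency and bandwidth.
# All numbers are illustrative assumptions.

SPEED_IN_FIBER_M_PER_S = 2.0e8        # light in fiber travels at roughly two-thirds of c
distance_to_cloud_m = 1_000e3         # assume a cloud region about 1,000 km away

# Round-trip propagation delay alone, ignoring routing, queuing, and processing:
rtt_ms = 2 * distance_to_cloud_m / SPEED_IN_FIBER_M_PER_S * 1000
print(f"Propagation-only round trip: {rtt_ms:.0f} ms")       # ~10 ms floor

# Bandwidth: one uncompressed 1080p camera at 30 frames per second, 3 bytes/pixel:
raw_mbit_per_s = 1920 * 1080 * 3 * 8 * 30 / 1e6
print(f"Raw 1080p@30 stream: {raw_mbit_per_s:.0f} Mbit/s")    # ~1,500 Mbit/s per camera
```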
So, yeah, it also just enables low power, right? Yeah.
This is the reason why people hate Bitcoin, not just because most of the people who got in early got rich, but because of the physical impact its compute requirements have.
And so there’s always this comparison of, oh, for every bitcoin you mine, you could basically have powered a city for a year, or whatever it happens to be.
But that’s sort of a mythical, historical thing. Beyond Bitcoin, when we look at using blockchain, using machine learning, all of these things, it’s about being able to do them on lower-power, diverse hardware platforms. Yeah.
This is the Gutenberg revolution of machine learning.
Wow. Thank you. All right. That was beautiful. I’ll take that. Agreed. Yeah.
And also to free people from having to even think about how they can deploy models, because at development time, how do you even know how it’s going to be used? Just think about mobile phones: there are literally 200 different Android phones. How are you going to tune for every single one of them right now? It’s a very small example, but a model could run on a phone, on a smart camera, on a smart device, on a smartwatch, all of these things. Just not having to worry about where it runs could enable a whole wave of innovation.
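To give a flavor of what “tuning for every single device” can look like, here is a small sketch that extends the earlier compile example by rebuilding the same model for several targets. The target strings are illustrative assumptions; real Android or GPU builds need the matching toolchains and per-device tuning that this omits.

```python
# Sketch: build the same Relay module for several hardware targets.
# Assumes `mod` and `params` from the earlier from_onnx() step; the target
# strings are illustrative and each build needs its toolchain available.
import tvm
from tvm import relay

targets = {
    "server_cpu":  "llvm",
    "android_arm": "llvm -mtriple=aarch64-linux-android",
    "nvidia_gpu":  "cuda",
}

libraries = {}
for name, target in targets.items():
    with tvm.transform.PassContext(opt_level=3):
        libraries[name] = relay.build(mod, target=target, params=params)
    print(f"built artifact for {name} ({target})")
```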
Right.
So you must be excited to be both in academia, watching this world evolve, and now very literally creating the future through what you’re enabling at OctoML. How good does it feel, now that you’ve begun this journey?
It’s got to be challenging.
And I say this knowing, obviously, there’s no easy path to entrepreneurship.
Yeah, well, thank you for that question, because I can’t express enough how lucky I feel to have the team that we have. One of the reasons I think we have such a fantastic team is our connection to academia, together with the fact that we are a company with a bottom line: we have investors, we have customers, we have employees. Luckily, we are in a very good position, but that means we’re not a research group.
But we are really pushing the state of the art, because we are a deep technology company. We are enabled not just by the fact that we have people who build the product, but by the fact that we have people who think on the frontiers of what’s possible with machine learning, like using machine learning to make machine learning better. The connection to academia, I think, is really important and extremely synergistic, and I would say essential to us, because we are connected to the latest and greatest in machine learning models, to the latest and greatest understanding of where even the hardware industry is going and what’s possible there, and also to a source of talent.
So our company has incredible talent. We have more than a dozen PhDs on a team of 40. Not that it’s only about that, everyone is great, but I’m just saying it shows the level we are operating at in terms of pushing the state of the art. We have a lot of people who operate like software engineers building a product, but they all have a research mentality and a research background, and always think about: how can I do something better than it was done before?
Because that’s how folks who have done research tend to think. Right.
So that’s very fortunate. Yeah.
It’s always a tough metric to talk about, and I believe everyone should be proud to say, you know, we have a number of PhDs. At my own company we have the same thing, and we talk about it sometimes.
It feels odd sometimes to say it, depending on the context. But the truth is, as you just said, these are a group of people who chose to go above and beyond in order to advance something that had been done before and could be done better. And when you bring in a specialty like machine learning, of all the technologies and things we’re doing in the world right now, this is one that needs those kinds of thinkers for sure, people willing to go beyond what was done before, as a group, as a collective. And it’s also important that you don’t have just one PhD, because having multiple
thinkers like that, people who’ve lived that life, gives you the ability to use critical thinking as a group, to aim for the best outcome, not the right answer, the best outcome. And yet as humans, especially as entrepreneurs, we often get stuck with:
“I’ve got the right answer and I’ve just got to teach the world,” versus, let’s as a group work with our customers, the community, the world, and academia and come up with the best outcome, because it will be surpassed in the future.
Absolutely. Yeah, I love that comment. And one thing I wanted to add is that the path to impact, and the time to impact, of machine learning progress in general is extremely short in the grand scheme of things. You’re talking about something in the academic world that people write papers about in January of a year, and by the end of that same year it could be in production with people using it. That is kind of unheard of among scientific disciplines: writing academic papers about something and having it impact people’s lives in new products within months.
We’re not talking about years or decades, which is typical in a lot of disciplines. Think about advances in life sciences: by the time something has an impact on diagnostics, a long time has passed. Same thing in physics and chemistry. Here, something described in a paper can be in production months later. Having that tight loop between what the researchers do and what people get to see is really important.
And I think it’s a beautiful opportunity. I love that people are crossing over, because the dangerous thing is that if it only lives in academia and never makes it out, if the same people who take the concepts to the next level don’t get a chance to actually be a part of the implementation, how do we learn, other than waiting for the next academic to come along and evaluate and analyze?
And like you said, in the past it would be a decade before you would necessarily see the results. Now you can literally, in academia, work toward a goal, build your plan, evaluate, form the hypothesis, and then actually enact that hypothesis.
And as a commercial business, I think this is really, really cool.
Yeah, thank you. I completely agree. I couldn’t agree more.
So, one more thing before we close up, Luis. I’d love to hear your thoughts. Eighteen-year-old Luis Ceze decided he was going to school. Number one:
Did you imagine you were going to go to school for as long as you did? When did you build your plan, and when did today become part of that plan?
Well, you’re giving me goosebumps here. So, a quick personal story: I grew up in Brazil and went to engineering school there. When I was 18, I was an electrical engineering student at the University of São Paulo. At that time I definitely really liked research and was involved in some research, but honestly I never thought, first of all, that I would become a professor. And even though I had thought about starting companies at that time, I never ended up doing it, because I got into the academic world and research, and left Brazil to go to IBM Research to work on a machine that was aimed at life sciences.
And after that I went on from there, so it was very much taking the next opportunity and then the next one. So where did the plan come together? I don’t think there was ever a point where the whole plan came together; I followed the flow.
But I always had the North Star that what gets me up in the morning is intellectual excitement and working with people I can learn from and admire. Academia is great for that, and OctoML has been great for that too, because it’s been a dream to have the kind of team we’ve been able to build here.
So I hope that we find more Luises in this world. You’re too kind. Thank you. Well, thank you for the conversation. It’s been a lot of fun, and I hope to chat with you again.
Absolutely. I’ll be excited to watch the growth of the team, the organization, and your customer base, and to hear some of the stories. We’ll get caught up again in the future.
Obviously, all of the links are down in the show notes for folks who want to find this. They can go to Octoml.ai, but if folks want to reach out to you directly, Luis, what’s the best way to do so?
Yes, you can just write to luis@octoml.ai. That’s L-U-I-S at octoml.ai. You write to me and I’ll come back to you. Looking forward to hearing from your audience.
I also want to congratulate you and thank you for being an amazing intellectual who doesn’t use their university address when they run a company.
I know there’s a beautiful pride in the stanford.edu or University of Washington address, but it has always amazed me to see someone who has been CEO of a company for three years still using their university email as their contact.
And like you, you should be proud of the octoml.ai email; that’s the thing to be proud of. You’ve got a lot to be proud of. But thank you.
Yes, I’m very, very proud of our octoml.ai, for sure. This email address will be the only one now, and it will be there for a long time. So this email address will be valid for a very, very long time. I’m very proud of it.
Judging by you and your team, I very firmly believe it will be. Thank you very much for the time today, Luis.
Thank you. Thank you again, Eric. Wow, that was a lot of fun.
Jo Bhakdi is the founder and CEO of Quantgene, creators of the Griffin Deep Genomics Platform, which combines Deep Genomic sequencing and AI to detect disease down to a single molecule.
Our conversation delves deep into how emerging and innovative work in genomics and cancer detection can profoundly change the face of healthcare, and what it takes to bring a truly disruptive idea and method to the world.
You will be inspired by the mission, and powerful lessons on what it takes to launch and scale a disruptive company. Thank you to Jo for such an amazing conversation!