Sponsored by our friends at Veeam Software! Make sure to click here and get the latest and greatest data protection platform for everything from containers to your cloud!


Sponsored by the Shift Group – Shift Group is turning athletes into sales professionals. Is your company looking to hire driven, competitive former athletes? Shift Group not only offers a large pool of diverse sales candidates from entry level to leadership – they also help early-stage companies develop their hiring strategy and interview process, and build strong sales cultures that attract the best talent.


Sponsored by the 4-Step Guide to Delivering Extraordinary Software Demos that Win Deals. Click here! Because we had such a good response, we have opened it up to make the eBook and Audiobook more accessible by offering it all for only $5.


Sponsored by Diabolical Coffee. Devilishly good coffee and diabolically awesome clothing


Does your startup need strategic technical content? The team at GTM Delta delivers SEO-optimized, compelling content that connects your company with technical users to help grow your credibility, and your pipeline.


Need Podcast gear? We are partnered up with Podcast Gear Pro to share tips, gear ideas and much more. Check it out at PodcastGearPro.com.


Tyler Browder is the CEO and Co-Founder of Kubos, maker of the world’s first cloud-based mission control software.

Kubos’s “Major Tom” software is a cutting-edge mission control platform for low-Earth orbit satellites.

This very fun chat delves into the challenges of creating a true “mission control”, the lessons of a founder, and also lots about how to build both products and a company.  Super fun discussion and thank you to Tyler for sharing time with me!

Check out Kubos here: https://www.kubos.com/ 

Connect with Tyler on LinkedIn here:  https://www.linkedin.com/in/tylerbrowder/ 

More about Techmill here:  https://www.hackntx.com/about-techmill 

Transcript powered by Happy Scribe

Ground control to Major Tom... Oh, hey, sorry. This is Eric Wright of the DiscoPosse podcast.

And the reason why we started in that fun little way is because this is a great conversation with Tyler Browder, who is the CEO and co-founder of Kubos. They are doing really cool stuff around creating cloud-based mission control software.

So this is like the nerd heaven for me as a space fanatic and a startup fanatic and also just, Tyler is such a great human. We talked about Kubos. We talked about the approach to the problem they’re solving.

Why it’s so unique and how they got to this level.

The pivots of the company, their background to some of their open source work, and also TechMill, a really great organization that Tyler worked with around incubation in the area.

So anyways. Let’s just listen. This is a really great conversation.

Tyler is a super cool guy, but in the meantime.

Let’s make sure that you also help to make this podcast grow and continue to bring these amazing conversations. Number one, you can head on over to our YouTube channel, go to youtube.com/DiscoPossePodcast. Click on subscribe and make sure you get signed up. Hit the like button.

Do all those things because we’re now launching, simultaneously, on video and audio. Really fun. Beyond that, of course, head on over to make sure you support your data because your data needs to be protected. And the only way to make sure that that’s going to happen is to get everything you need for your data protection needs. With our fine friends at Veeam Software, Veeam have been huge supporters of the podcast.

And I love it because I actually trust the platform. I trust the product. I’m literally married to the company. So very cool. But if you want to do that, it’s easy. Go to vee.am/DiscoPosse.

They’ve got a really wide array of stuff to cover you from cloud to on-premises to cloud native. AWS re:Invent is around the corner as I’m recording and publishing this. There’s going to be a ton of really great stuff around there. So become an AWS backup hero. Head on over to vee.am/DiscoPosse.

And of course, speaking of protecting yourself in space and in transit. Protect your data in space. Go to tryxpressvpn.com/DiscoPosse. Make sure that you ensure that privacy is a human right. And I believe that it is so do that.

Go to tryxpressvpn.com/DiscoPosse, get signed up. I’m a fan. I’m a user.

Oh, and get Diabolical Coffee, diabolicalcoffee.com.

All right, let’s get to the good stuff. This is Tyler Browder from Kubos.

My name is Tyler Browder. I’m the CEO of Kubos. We build mission control software for spacecraft operations, and you are listening to DiscoPosse podcast.

This is really cool, Tyler. First of all, I want to thank you for doing what you do; as a fan of things that leave the Earth, I really enjoy it. When I saw your name come up, I thought, oh, all right. We’re in a cool space, literally. So for folks that are new to you, Tyler, do you want to give a quick intro, a bit of a bio? We’ll talk about Kubos. We’ll talk about what you’re doing, what the team’s doing.

This is it. And I feel like you have, like, acoustic guitar playing, Major Tom, as we’re going through it. People will get why, what that reference is about in a few minutes.

Yes, there’s a lot to cover there. Let’s start with Kubos. Kubos is a software company, right. We live in a hardware world, though. Space is dominated by hardware, right? People did not get into this business to put little bits and bytes into space. They got in to build a physical thing and launch it and communicate with it. But we decided to come at it from a different angle. And so we built a product called Major Tom, which is a mission control software for spacecraft. So it lives on the ground.

It’s a cloud application that we use to track our satellites to understand the data coming down from the satellites and then tell the satellites what to do. Right.

So it’s the primary tool, once the satellite’s in orbit, that people use to communicate with and understand their satellite. Right. So it’s a pretty critical, not to beat on this, mission critical piece of software; it’s the window that customers use to understand their spacecraft. So it’s a lot of fun. We don’t actually send anything to space because we’re on the ground side. Right. We’re listening back from it. But we’re pretty close. My background, though, since you asked: I’ve got quite a non-traditional background into aerospace.

So most aerospace professionals get into the business because they dreamed of being an astronaut or something along those lines, and it was a passion from early on. No one stumbled into aerospace by accident. Except for me. So my background is primarily in just entrepreneurship, business development. I grew up in an entrepreneurial family, and so I’ve done healthcare. I’ve done the music industry. I’ve done property rental companies. And then I got an opportunity: I became friends with a guy who was a software engineer who had worked in space, and he was looking to start a new company, and he needed someone to handle the business aspects of the new venture.

And he would handle the technology. And, yeah, I said yes. I didn’t know what a satellite was. I was never, like, a big space kid growing up. I didn’t dream of being an astronaut, I dreamed of being a rock star. So, yeah, I was fortunate enough to stumble into the industry.

One would say that these days they’re one and the same. You see the way they do the walkouts now. It’s like on WWE. You just expect someone to be walking out with a flag and people cheering. And it’s amazing to think of just the amount that’s going on with both commercial and public sector stuff that’s happening in space. And then the private sector, there’s an untold number of things going on in this area of development that used to be more hidden. But now, let’s just say it:

Elon Musk made it kind of cool to really sort of push the envelope and make it more of a spectacle to observe and enjoy, that we are doing some incredible development in the world of space. And then we start to see what people are doing with the CubeSat side of the world and all this small commercial stuff. And almost hidden behind that, too, is: that’s amazing, but what we’re doing with the technology that we’re putting there is even more amazing. Right. So this is why mission control, mission critical, is big, because it’s not just about getting it up there.

It’s about, we’re building systems on this technology that require us to now treat it like, this is big. This is really amazing.

Yeah. There’s a lot of different ways you could go with that from the industry standpoint. Historically, space has been a government playground, right. Only governments had the resources and the appetite to go after it. And that’s all obviously changed. Right. And that’s good. But that’s created quite this, like, change in culture in the industry, because government-run programs were very secretive. It was all about national security. And so there was this culture of not talking publicly about what we’re doing, except for a very select few propaganda-type things or big-name things.

Elon has definitely done more than his share to move the industry into the public light. And so we’re seeing this really interesting, when you get down into it and talk with people, there’s still this culture of keeping things quiet, not talking about what we’re doing. And there’s other people who are trying to fall in line with what Elon did, talk about their projects and be very vocal. And so we’ve seen that from a lot of different, really interesting angles. But on the technology side, when it was a government program, everything was really special. Right.

Everything was custom built to achieve one objective, up and down the stack. Everything from the spacecraft all the way down to delivery of the data, including mission control. It was a custom program that was designed just for the operation of that particular spacecraft. It could not be transferred. What the CubeSat has done is give us some standardization and allowed us to build more in bulk. Right.

And build more spacecraft than we ever thought. Instead of really big craft, we got lots of little ones. And so the way we really like to position our product is that we’re an infrastructure play. Every piece of machinery in space has the same core components. They all need power, battery, solar panels. They all need a computer of some sort to control it, and they all need a radio. They need to be able to communicate back to Earth, and then they need some way to do whatever it is they’re wanting to do. Right.

And that’s where all the custom stuff comes out: there’s the camera taking the pictures, or some sort of sensor measuring some sort of data in the atmosphere, or whatever. And so what we focus on is the generic part. So the radio, the computer, the battery power, what they call the telemetry of the spacecraft bus, as opposed to the actual payload. Our platform does not support payload data image processing. We don’t do that. That’s what our customers want to do, that’s their secret sauce. That’s why they built the spacecraft to begin with.

But we handle the satellite operation itself to help assess where it is, where it’s going, communicating to the payload to take a picture over Cairo next Tuesday, whatever the command is. And so we facilitate that whole communication chain to the spacecraft.
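The split Tyler describes, generic bus commanding handled by the platform and payload specifics left to the customer, can be pictured with a toy command object. This is a hypothetical sketch to illustrate the idea; the class, field names, and values are invented and are not Major Tom’s actual schema or API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class Command:
    """Hypothetical model of a spacecraft command in a generic
    mission-control platform (illustrative only)."""
    satellite_id: str                    # which spacecraft in the fleet
    subsystem: str                       # "payload", "comms", "eps", ...
    name: str                            # e.g. "capture_image"
    fields: dict = field(default_factory=dict)  # mission-specific arguments
    execute_at: Optional[datetime] = None       # scheduled execution (UTC)


# The routing, scheduling, and uplink of this command are generic across
# missions; only the payload-specific `fields` dict is the customer's own.
cmd = Command(
    satellite_id="SAT-042",
    subsystem="payload",
    name="capture_image",
    fields={"target": "Cairo"},
    execute_at=datetime(2021, 11, 30, 13, 0, tzinfo=timezone.utc),
)
print(cmd.name, cmd.fields["target"])  # → capture_image Cairo
```

The point of the sketch: everything outside `fields` is the same shape for every mission, which is what lets one platform operate very different spacecraft.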

And you’re doing it. And speaking of public, in the open: the fact that you’ve actually open sourced a lot of the work. There’s a lot of interesting things there. I’d love to get your take on what stuff is very sort of community- and world-driven, and how much is internal special sauce, even in what Major Tom and such is delivering?

Yeah. So it’s a great question. I think, actually, to answer that question, I have to back up a little bit. When we started Kubos, we actually started with a different focus. We were focused on flight software, basically creating the operating system of the actual spacecraft. And that product was called KubOS, and it is open source. And it was very much modeled after the Android operating system. And so we would have a Linux kernel. We have middleware that we built and a bunch of APIs so that customers could build their own custom applications on the spacecraft to do whatever they’re trying to do.

It was hardware-agnostic. We could really shift it around; we went to bus providers or satellite manufacturers and got them to distribute it. And we built that all in the open. We had an open source community. The code was all open source, and we did that for a couple of reasons. One, we believed in it; that was kind of the ethos of where my partner, who was a software engineer, came from. He came from Mozilla and Red Hat, big open source commercial companies. And so that was part of who he was as a person.

But also, the truth is, from an export control standpoint, by making it open source we got around a lot of the export requirements on the software, and we could distribute it without having to verify who was using it or having to keep tight controls around that. And as a small company, that was a really heavy burden, doing the export control. And so open sourcing gave us a way around that. Major Tom, we shifted to that last year heavily, and Major Tom is actually not open source.

It is just a web application whose source code we control. And there were a couple of different reasons for that and why we’ve done all that, and we can get into that if you like. But just for clarification, Major Tom actually is not open source, and our previous product KubOS still exists. It’s still there being used by people today.
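The Android-style layered stack Tyler described a moment ago (a Linux kernel, middleware, and APIs that customers build mission apps on top of) can be pictured with a toy example. To be clear, the service and method names here are invented for illustration; this is not the real KubOS API:

```python
class TelemetryService:
    """Stand-in for a middleware layer: it hides the hardware drivers so
    mission apps stay hardware-agnostic. Here we fake the readings."""

    def read(self, channel: str) -> float:
        fake_bus = {"battery_voltage": 7.9, "board_temp_c": 21.4}
        return fake_bus[channel]


class MissionApp:
    """Stand-in for a customer-written application sitting on top of the
    middleware APIs, the layer where mission-specific logic lives."""

    def __init__(self, telemetry: TelemetryService):
        self.telemetry = telemetry

    def check_power(self) -> str:
        # The app only talks to the middleware API, never to hardware.
        volts = self.telemetry.read("battery_voltage")
        return "nominal" if volts > 7.0 else "low-power mode"


app = MissionApp(TelemetryService())
print(app.check_power())  # → nominal
```

Swapping the fake `TelemetryService` for one backed by real drivers is the whole point of the layering: the mission app doesn’t change when the hardware does.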

Yeah. And that’s what I wanted to show, that interesting split of the line. I do a ton of work in the open source communities, a lot of different ones. And I’m a huge proponent for open source and open communities. But I also recognize the challenge in running a business and commercializing on open source. There’s a lot of real challenge around that. You have to at some point add opinion into software. You have to have an opinionated approach. And it’s really hard to do in a purely 100% open community.

And there’s a lot of great proponents for, well, they call it COSS, commercial open source. And then Open Core is another one. It’s hilarious because you’ve got these little, like, Occupy Open Source, Occupy Open Core movements. There really are, like, hardened, really strong-minded leaders in these specific types of communities. And they’re also arguing over who’s more open, who’s more DevOps-y, like, there are all of these things. And in the end, while that’s going on, we’re trying to run a business, to employ people, to get commercially viable software out there that can then power other companies and deliver this.

This is why inside Major Tom there are probably open tools amongst it, and nothing wrong with that. In my mind, the front end needs to be purely opinionated, pragmatically built, and delivered to solve specific problems.

Yeah. I completely agree with you. Sure. Inside of Major Tom, we do have open source elements. I’ll be honest. I don’t know exactly all of those. I won’t name them but we do use them. Right. And I think most companies, software companies use open source at some level. Right.

Everybody thought they didn’t until there was Heartbleed. That was, like, one of the most hilarious things. People were like, that’ll teach you open source people to use open source stuff. And it’s like, Heartbleed! And then all of a sudden, 12 hours later, Cisco, Microsoft, VMware, every major company was like, you need to patch your stuff. And people are like, why? I thought we were using commercial software. Well, guess what: it’s built on open source software.

Yeah. Right. Exactly. I completely agree with you. We still have to make viable businesses that generate revenue so that we can hire people and employ them and have an economy and all these things. Right.

But the problem we had with open source in our industry is we were selling support contracts. That was our main business model: you would use our software, we would sell you support. And that works really great for Red Hat. But that really was a challenge for us. So we were going after companies that were building large constellations. They wanted to launch a lot of satellites, hundreds of satellites, and they were going to use our software on all their spacecraft. Awesome. Let’s do that.

So for the first satellite, they happily paid us for support, and we supported them through it. We ported to their hardware if we needed to, we did some services in there to generate revenue, and we were successful. We launched some satellites on it. And then they would be ready for their second, third, fourth spacecraft, and they’re going to try to increase the speed, the scale of it, and bulk up a little bit. And we had taught them everything they needed to know about the software, and they really didn’t feel the need to purchase support for anything, because they didn’t change anything.

They weren’t intending to change anything, or anything significant. And if you imagine, once the spacecraft is in orbit, you have some limited options about what you can do to change that. If you have a bug that is in your spacecraft software, how do you fix it? You do a software update. Now it’s more common; when we started, it wasn’t a given that you were even able to do a software update. And it’s very risky, right? If you do have to do a whole new update to the kernel or to the OS, that’s a lot of risk. If you make a mistake, that’s it, the thing’s done.

There is no hold down the reset button.

It’s gone. And so it is not something that companies traditionally have been wanting to do unless under the most dire situations. Now, we’ve gotten better as an industry. We’ve gotten better at testing and our procedures and our backups on the system, so that if there is a failure, we can recover. But especially at the time when we started, that just wasn’t the norm. Very few companies had been building and architecting their systems with the intent of updating the OS. So there were some limitations, right? There were some risks involved, big consequences.

And so anyway, it was a very hard model to make work, and then the sales cycles were long. There were other challenges on the business side. But anyway, open source is still part of us. There’s still that flight software called KubOS, still up on GitHub, and it’s still being used by people even though we’re not actively developing on it. I think the next launch that someone’s using it on is next month.

I guess it really brings up the ultimate question: before Major Tom, what did the stack look like? What was the previous solution that needed this to solve a problem?

Yeah. So there were a couple of different flavors of this, but they were all based around being on a server in a closet, there locally at your station. And they were all focused on particularly one spacecraft. They were not going to handle 100 spacecraft. They were really good at two, three, four spacecraft, maybe. But if you were going to do more, they were really not built for that. And they were expensive. You had to have the hardware, and you had to have at least the skill set to set up the hardware and manage that.

And they were really, particularly focused on, again, the single use case of a single spacecraft. And so that’s really where we as the industry started to move from big and expensive to lots of little. Right. The mission control didn’t keep up, right. We couldn’t scale the way that the industry was needing us to scale. We couldn’t be generic, we couldn’t be spun up quickly, and we couldn’t be updated very well, if it was in the hardware over there in the corner. No, we don’t want to touch it.

So there was a lack of innovation. Satellites, as you deploy more satellites, you continue to tweak and evolve them. There are different generations trying to push it. But your mission control stayed flat. So we needed a way to update and upgrade the software to keep up with the demands and the needs of the ever-changing system. So that’s really what we came in to fix. We built it on the cloud to give it that scale, to build it in a redundant, safe way. We built it with a mind to operating lots and lots of spacecraft.

We’ve gone further than that. So not only operating a lot of the same kind of spacecraft, we designed it so you can operate a lot of very different types of spacecraft. And then the other thing we’ve done is we’ve really gone out and integrated third-party services that you use on the ground. Best example: to talk to a satellite, you need a physical ground station somewhere on the Earth that will collect the radio signal and also beam up the radio signal. There are services you can purchase.

You can basically rent these ground stations by the minute. And it was always on the operator, our customers, to spend the resources to integrate these systems. And they were done poorly. They were done slowly. They were done at cost. And so we integrated these systems out of the box, so there’s just a simple login and then you’re integrated with this. So we’re lowering barriers. We’re going faster. We’re developing new features for our customers for these use cases that we can roll out without having to do a full new reboot of the entire system and lose valuable time on their spacecraft.

So that’s where we’re coming from.

Well, this really becomes the value of centralizing and giving opinionated outcomes to solving a problem, because you can look at five customers and then find the Venn diagram of crossover and then start to merge the diagram a bit more. You start to see more commonalities, but they individually are building a standalone system for each part of the operation. It’s just such a, there was a point where we all had to do it. There’s always the first time someone built a car. You didn’t start by building a factory.

You started by putting a garage in and then building the bloody car. But eventually it goes, hey, the guy down the street is building a car, and I’m supplying parts to him, and it looks like you guys use the same parts. Okay.

One of the things that we bring is the aggregation of all the different data sets. So we’re not looking at actual people’s data per se. What we’re doing is anonymizing it so that we can better understand spacecraft operations. Right. And really where we’re trying to apply this is in the communication optimization. So, example, you have 100 satellites orbiting the Earth. They’re all moving around. Right.

They orbit every 90 minutes. And you have ten ground stations across the globe. Right.

And the connection time between a satellite and a ground station is about ten minutes. Right.

And so you’ve got minimal windows, and they’re always moving. These are walking orbits, right? If it flies over New York at 2:00 p.m., at 3:30 it will be 50 miles east of New York. Right. They’re walking. And so what we are building is the optimization of how to communicate. And so we could tell our constellation: I want a picture of New York tomorrow at 1:00 p.m. Major Tom will say, you need to send the command to this ground station, to the satellite, at this time, and get the data back down, to optimize the network. To get your data, your command, up there to tell the satellite what to do, and get the data down in the appropriate time, really optimizing the network.
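The scheduling problem Tyler is describing (many satellites, few ground stations, roughly ten-minute contact windows that keep drifting) can be sketched with a tiny greedy scheduler. The satellites, stations, times, and window lengths below are made up for illustration, and a real system would propagate orbits to compute the windows and optimize for priority, data volume, and link quality rather than just taking the earliest free pass:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Window:
    """One visibility window between a satellite and a ground station.
    Times are minutes from now (invented numbers for the example)."""
    satellite: str
    station: str
    start: int
    end: int


def schedule(windows, needed):
    """Greedy pass scheduler: for each satellite needing a contact, take
    its earliest window on a station that isn't already booked then."""
    booked = []  # (station, start, end) tuples already assigned
    plan = {}
    for sat in needed:
        for w in sorted(windows, key=lambda w: w.start):
            if w.satellite != sat:
                continue
            # A station can serve only one satellite at a time.
            clash = any(stn == w.station and w.start < e and s < w.end
                        for stn, s, e in booked)
            if not clash:
                plan[sat] = w
                booked.append((w.station, w.start, w.end))
                break
    return plan


windows = [
    Window("SAT-1", "Svalbard", 0, 10),
    Window("SAT-2", "Svalbard", 5, 15),   # overlaps SAT-1's Svalbard pass
    Window("SAT-2", "Alaska", 20, 30),
]
plan = schedule(windows, ["SAT-1", "SAT-2"])
print({s: w.station for s, w in plan.items()})
# → {'SAT-1': 'Svalbard', 'SAT-2': 'Alaska'}
```

SAT-2 is pushed to the later Alaska pass because its Svalbard window overlaps SAT-1’s; that conflict resolution, done across hundreds of satellites and dozens of stations, is the network-optimization job described above.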

So we’re moving away from spacecraft being these pets that we love and are part of our family, to cattle, to herds, to big networks. We’re really more network administrators than we are satellite operators. And that’s the way we’re moving the industry to adopt those practices and apply them to the space environment.

Yeah. I tell you, when you get to the numbers, it’s pretty incredible if you think about what’s up there in the different layers of atmosphere. And I saw something that’s funny to me, because I recognized this is such a, like, get-off-my-lawn, old-person-yelling-at-the-clouds situation. It was these photographers who are like, it’s really bothering me trying to do night star photography, because there are all these darn satellites floating around. You know that the Internet that you’re putting your awful angry tweet on is powered by those very little lights that you’re complaining are crossing your photograph in a time lapse.

Yeah. That’s a really interesting conversation in the industry that we don’t know what to do with yet. Right.

We’re going to launch more satellites. We have to launch more satellites. We have to launch more infrastructure in space, not just satellites but space stations, and we have to build more habitats, and we have to move out there. But there are also some consequences to that, right? Not only with photography and the night sky, that’s one. But there’s also the risk of collision, these things hitting each other and causing damage. Right. There’s that risk. There’s the risk of, I’m a big fan of, I just went blank on the name, the Apple moon show on Apple TV+.

For All Mankind.

For all mankind. Yeah. And the militarization of space, right. This is a thing that is not that far away from us. Right. And then we got to get into governments and we got to get into laws and policies and treaties in space that we’re not well equipped to deal with right now in our current geopolitical environment. And so there’s some fascinating things and some really hard decisions that are going to have to be made in the next ten years to really set up our humanity to expand.

Yeah. The policy side of it is wild. And you think of it because today we think of geography. We’re so just bound in geography. Even just the fact that, as a North American, the raw arrogance that everything most companies do is in English only, and we base it on the Eastern time zone. It is just crazy that that’s the standard of belief, when even just internationally on the Earth we’ve got a broad set of audiences that are so underrepresented and under-acknowledged. And then we can’t even agree over the height of a skyscraper that is considered owned real estate by that developer.

What happens when you go a lot higher? Does it belong to the country because it’s over North America? Does it belong to the country because it’s over El Salvador? That’s my satellite right now.

Right. It’s really hard to say these things can be solved. And then you go to the moon or Mars. How do we break that up? Should we break it up? Should we not break it up, right? Asteroids are the same way. And different countries are making different laws, not doing it as a planet, as an entire group of people, but just individually as our own countries. I think it’s really interesting. I really do. And how do we solve these problems, and who’s going to take leadership on these problems?

Who’s going to stick their neck out and want to talk about space policy? Because right now, it’s not in the mainstream, right? It’s not being talked about at a high level with people who could do anything about it. It’s just professors somewhere arguing about it. And so we need to bring that out. We need to talk about that. Anyway.

Most people’s exposure to this is just, they’re like, are Bruce Willis and Ben Affleck available, and can Aerosmith do the soundtrack? That’s our understanding of space for the most part.

That’s right. That’s right. Well, and America saving the day. Right. And that’s not how this is going to work. More countries have gotten into space over the last five years, becoming space-faring countries, than we’ve ever seen. Everybody can come play. The countries that never had a space program now can have a space program. And it’s not just for the United States. It’s not just for America. It’s all global. Space belongs to all. It’s really interesting, but there needs to be some structure. There needs to be.

If somebody’s doing something with their satellite, how are you going to know what they’re doing, right? Or should you know, right? Do you even have a right to know? But that’s a different thing. And that’s really fascinating to me. We can track where satellites are, but we can’t always tell what they’re doing, and sometimes only by the behavior of the satellite can we guess what it’s doing. I remember when I first got into the industry, there was a story about GEO. So this is where the big communication satellites live, and they’re locked to the spin of the Earth.

So they always are focused on a particular spot over the Earth. Right.

So they’re locked in geostationary orbit. And these are very coveted spots. These are very big spacecraft. This is big spy stuff, encryption, military, but also other types of communications. And I remember there was a story about this Russian satellite just walking around out there, getting in between the communication channels. And you could see it. You knew it was a Russian satellite. But you only know what they said publicly. And we were the same way; what we said publicly wasn’t really saying what our satellite was doing. And it was really interesting.

I think it’s going to happen more and more, whether we hear about it or not. It’s going to happen. It’s going to continue to happen.

Especially just like, it’s hard to imagine. If we go back to the days of “before this decade is out, we commit to getting somebody on the moon,” you’re like, that’s crazy talk. Now, no one would even question it. They’re like, why aren’t we already there? Why are we going back? Why did we stop going?

I think the tough part we also see, with the sort of publicity of space tourism and stuff that’s going on: on the back side is an incredible amount of research, like the work that you’re doing. This enables an incredible amount of real secondary-effect stuff. Going to the moon wasn’t about planting the flag; it was about learning about science beyond our Earth, and that’s enabled an incredible amount of things that we just forgot. We forgot that’s what we did as a result of it. Even with the sort of rich-man space race that we’ve got going on right now.

The result of those advances will mean that as a government organization, especially at least in the United States, they’ll save billions of dollars because of the work that’s going on in commercial and private sector work. And we all personally will feel that benefit because it means that things will come that are advanced as a result of this work.

Yeah. So we do that in a slightly different way, but it’s the same idea. Right. We borrow a lot of technologies and best practices, not from the space world but from the general software world, from what Google and Facebook have developed as standard practices for how to handle large data sets and manage those data sets. So we’re applying those; just like Google had to develop that in order to build theirs, we get to use it in space. That’s all how this works, right. The space race that’s happening with Elon and Branson and the other guy, Bezos, is ultimately going to be, at least to the industry, at least from the economics, beneficial. Right.

They’re creating technologies and they’re training people, right, giving them new ideas. There’s this whole, like, flood of SpaceX employees. Not flood, flood’s not the right word. But there’s a group of SpaceX employees who are spinning out new companies now, right. This is the benefit of what he’s really built. He built a big company to do something really amazing and trained and taught these engineers how to build really amazing things. Those engineers are going to go build amazing things for themselves, and they’re going to create new companies.

Well, every major company has done this. And now we’re going to see it in space; there just hasn’t been anybody, like, break through that, right? We’ve all been government contractors working on classified missions we couldn’t talk about. But now that’s over; that’s ending. And that’s where the real next push is going to come from. Right.

SpaceX has done amazing, great things. It’s very impressive and has pushed the ball forward. But now you’re going to see a different ball being moved. They really focused on solving launch and then getting people into space in bulk, mass movement of people. The SpaceX employees who are spinning up their own companies, we’re not even sure what they’re going to do yet. And it’s going to be really fascinating what they do, right? They already did this once. Think what else they can accomplish, right? When they want to.

That’s it. And it’s like the accessibility of this stuff now is huge. Right. And I always enjoy that everything we have now has, like, the ice cream flavors of one-scoop, two-scoop, three-scoop pricing structures. Could you have imagined, say, ten years ago, being able to say, I’m going to create mission control software that I can offer on the cloud, in a distributed format, API-accessible, and I’m going to be able to offer it at tiered pricing? It couldn’t have been imagined that this was possible. And yet here you are.

Yeah. Well, ten years ago, who knows what I was doing ten years ago. So that’s even crazier, right? I don’t even know what I was doing ten years ago. But, yeah, there’s this push and pull in the industry, right. We’re pushing the industry towards cloud adoption, towards borrowing from the software industry to move space forward, to move innovation forward. There’s still resistance to that, right? The truth of the matter is, we’ve talked a lot about commercial entities and commercial business models in space really taking off, but the largest payer for space services and applications is the US government, right.

That’s the largest payer. And so it’s still driven by requirements in that very waterfall manner. And so that’s what we’re trying to do: educate and move the industry in a different direction so that we can continue to innovate faster and not be put in boxes that were built for 1960s technology and practices. Now we can move it forward. But, yeah, there’s this really interesting pull. The commercial companies want it: go talk to a commercial company about using Major Tom and they get it, they understand what we’re doing, and we move forward with them. When I go talk to the Air Force about it, maybe they do, maybe they don’t.

So there’s an education that’s still happening in the industry, not just about what we’re doing, but about the bigger picture. Microsoft and Amazon have, over the last couple of years, really put their money where their mouth is, gotten into this space, and are building services for the space industry and educating the space industry. So it’s coming. The cloud is coming to the space market, and we’re among the leaders of that movement.

Now, when it comes to that sort of ideal customer, this is a really interesting one, because you have a very unique customer set. What does the profile of your customer look like? Your average person goes and fills out a form for a free ebook, and then you run at them with an SDR, right?

Yeah. Well, we’re not doing TikTok ads. Not yet, anyways.

So there are different ways to group or categorize our customers. Our customers are very educated people who are very passionate, motivated, and technical, which means they’re not really interested in fluff or marketing design. They really want to know what’s underneath, the details and the architecture of our system. We have to provide that; we have to be technically proficient in our software to explain it to our customers. And so that is something that is not unique to our industry, but it’s part of what our industry is, right? Made up of technical engineers who have a lot of say in what technologies get implemented on their missions.

Past that, most of our customers are not software engineers either. Most of our customers are electrical, mechanical, or systems engineers. They’re not software engineers. We have to make sure we’re designing Major Tom in a way that is accessible to non-software engineers. Right.

So we have APIs. We have some customizations you can do with our system. We have to make sure that we’re building that so it’s accessible to someone who doesn’t know how to code, which is just an acknowledgement of who our customers are, right. It’s not unusual to go to a space company and find they don’t have a software engineer on staff. That’s changing; it’s becoming more prevalent in the industry to have software engineers on staff, but it’s not a guarantee. And so we have to build Major Tom that way.

From a different angle, what we’re doing right now from a mission perspective is really going after two buckets. The first one is new companies who want to launch lots of satellites to do some sort of business application. Right. Or even if it’s a government program, they just want to build lots of satellites, go quickly, scale, and be able to update and manipulate and configure and integrate it into their systems. Right.

They don’t have an established architecture for their ground segment yet. So that’s where we fit in really well, and they start building out the architecture around Major Tom’s APIs. The other segment is actually the exact opposite: those who are running long-term missions in space and want to lower their costs. Right.

It’s gotten too expensive to have this server farm or whatever over here. It’s gotten too expensive to maintain this 15-year-old software application that nobody works on anymore. And so there’s a whole lot of risk that it goes down or there’s some sort of issue. And then obviously, COVID is changing the mindset of where we need to work. With a lot of these older programs, you had to be in that one office. Now, that is changing. And so Major Tom really can insert itself in there.

So we are lowering the cost by moving it to a cloud architecture, a pay-as-you-go type thing. They don’t have to deal with the infrastructure from a hardware standpoint; we host it all. And then it gives you that flexibility of remote access to your spacecraft. And so that’s another place we’re going. Using the flexibility of how we designed the back end, we can really insert ourselves into preexisting infrastructure, as opposed to making them build new infrastructure around us. We can be flexible enough to integrate. Those are the two big buckets: new ventures, and those who are actually the exact opposite.

Older ventures who are trying to be more economically efficient, right, or reducing the risk because they have a single point of failure.

Yes. In the world of tech, we often use the phrase legacy, and I always joke: you call it legacy, I call it production. Like, this is stuff that can’t go away. But like you said, it’s wrapped around a traditional architecture littered with single points of failure. And if it can be made asynchronous, we’ve got opportunities, and then you build the right abstraction in front of it. And this is what’s neat. Now, when we talk about abstractions and cloud as an architecture, it’s fantastic, because now we can basically trust that you are going to do more than just run all your services in us-east-1 on AWS.

Like most people do. Whenever people say to me, yeah, we’re using the cloud for resiliency, I ask how many regions they’re using. Sorry, what? Oh, no. When Route 53 goes away, the whole kit goes down. We see these weird little things: I don’t understand what just happened, all of my caching just went away, all these sites went down around the world. Like, what happened? Somebody just typed a bad command in some software update. So you’ve got the ability to architect for scale and resiliency, versus the traditional-architecture people, who really should be focusing on their outcomes, their business, what they want to do with their hardware.

But now they can say, hey, Tyler has a lot of customers who care about this. So if Tyler gets it wrong, a lot of people get angry, versus if I get it wrong, I’m the only one at fault.

Yeah. From our world, we have to take it one step further. We are still governed under export control, so we live in an evolving policy landscape like everything else. We’re governed by the Commerce Department on export control, which is what it is. We’re also, in certain situations, covered by the State Department under the ITAR regulations on arms trafficking, and that’s a whole different level of scrutiny and consequences, to be frank with you. And so we do run an ITAR-secure cloud; we use Microsoft right now, built on top of the Azure Government cloud.

We also live in the public cloud. We also can, and have, done air-gapped deployments. Now, that is a little bit different. And the reason is that push and pull in the industry I described, and I have to live in this industry. So while I’m pushing the industry forward with cloud adoption and best practices, there are still missions that want Major Tom from a pure feature standpoint but need and have to have it in a military-encrypted, sorry, air-gapped environment.

And so we do deploy in those environments. It’s not something we particularly like doing, and you are losing some of the benefits of what we built, but our feature set for the satellite operators in particular, for the actual day-to-day operations, not just the architecture, but the actual features, is valuable enough on its own, and they’re willing to use it without the cloud infrastructure. So we run in lots of environments, for better or for worse. We do have non-US customers who prefer to have their stuff not in the US.

They don’t want the data in the US. So we have to do EU deployments, which works better for us, because the data is not actually coming into and then back out of United States export control. Where things live, and what environments and deployments, it’s a constant challenge trying to make sure that we’re on the same deployments, that they’re all being upgraded at the same time, that we’re maintaining them all. And yeah, that’s one of the challenges that we face on a regular basis.

I’ll say the economies of scale are one thing, and there’s also the economy of innovation at scale. Right.

So, like, every organization that comes to you would otherwise have to do this from the ground up.

That’s right.

You have a vested interest in becoming particularly good at doing stuff at scale, versus them just trying to solve a specific problem and then having to build the architecture and infrastructure to support that problem. You are truly sort of the cloud computing of mission control, because you can say, you don’t need to care about where it is. Obviously, you do, and you have to be transparent about that. But they don’t have to run it. They don’t have to have this network operations center with 25 TVs and people up 24/7 watching screens and listening to bleeps and boops and wondering what’s going wrong.

Yeah, that’s right. Some of our customers still choose to have those 25 TVs and everything going on. They like it, but they also want the benefit of what we’re bringing. So, yeah, we’ve talked about the architecture; in the actual application itself, we rely heavily on automating a lot of these processes so that we don’t have to have a person sitting at a monitor 24/7, because satellites don’t sleep, they don’t take holidays. They’re constantly collecting and transmitting data down to Earth, and there has to be a system in place to collect that, right?

So Major Tom can fly. It can be your autopilot, right, for these satellites. Where you used to have to have teams running 24/7 operations, we can now reduce that human intervention and cost, both for the employer and for the individual. Nobody wants to be up at 2:00 a.m. flying a satellite. Right.

That’s not a sustainable model. So that’s really where we’re moving on the application side: giving you these tools and automation, both internally in Major Tom, but also giving you the APIs to automate your own operational workflows. And I think that’s really another angle that we’re coming at this problem from.
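As an editorial aside: the unattended operations Tyler describes, scripting the routine pass handling that would otherwise keep a person at a console at 2:00 a.m., can be sketched in a few lines. Everything below (the `Pass` structure, the `plan_downlinks` rule, the five-minute threshold) is hypothetical and illustrative, not Major Tom’s actual API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Pass:
    """One upcoming ground-station contact window for a satellite."""
    satellite: str
    aos: datetime  # acquisition of signal
    los: datetime  # loss of signal

def plan_downlinks(passes, min_duration=timedelta(minutes=5)):
    """Keep only windows long enough to be worth a downlink, ordered
    by start time: the sort of rule an operator would otherwise apply
    by hand in the middle of the night."""
    usable = [p for p in passes if p.los - p.aos >= min_duration]
    return sorted(usable, key=lambda p: p.aos)

t0 = datetime(2021, 9, 1, 2, 0)
schedule = plan_downlinks([
    Pass("sat-1", t0, t0 + timedelta(minutes=8)),
    Pass("sat-2", t0 - timedelta(hours=1), t0 - timedelta(minutes=58)),  # 2 min: skipped
])
print([p.satellite for p in schedule])  # → ['sat-1']
```

In practice a script like this would pull real pass predictions from the mission control system’s API and push the resulting command queue back, but the filtering-and-scheduling logic is the part that replaces the 24/7 console shift.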

It’s funny, because you’ve been very focused on this is where you run, this is where you operate. There’s no Edge in any of the nomenclature around what you’re doing, because you are truly sort of the cloud. Like, Mission Control is the cloud for the Edge payloads, the actual workloads that are physically swimming around in orbit. But it’s funny that everybody is kind of like, I call it the edification. Like, it’s really just applied to everything and anything new. And, like, these glasses? They’re Edge glasses.

Now everybody is just, like, latching onto it. First of all, thank you for not just jamming Edge all over your website to try and be exciting to the Edge world. Not to detract from all of my amazing friends who are into the Edge.

Of course.

Where do you see that sort of next layer of compute coming from? And is it something that you’re interested in as a company?

Yeah. So we have thoughts around this and are trying to understand what our role is in this wave that’s happening. Right. One way that we’re looking at it is that as we continue to develop Major Tom and build out new capabilities, we can optimize this network, right, for the communication that we’ve talked about. At a higher level, one thing we’re trying to do, which I don’t know if this completely answers your question, but fine, is erase the differences between space and ground. Right.

It’s all just one network. It doesn’t matter if you’re a satellite, or you have a server down here, or you have an IoT node in the Sahara. It’s all just one network, and we’re erasing the idea that there has to be some sort of division from a network perspective. And so we’re trying to bring the reliability and the communication of space up to what we have on the ground, so that we can run Edge processes anywhere, whether on the ground or in orbit, and be able to shift these things around and manage them from that perspective.

There’s also a lot of push right now for satellites to become smarter, so that they’re not just simple machines, effectively. Right.

Really complicated simple machines. Right. We want them to be intelligent. We want them to make decisions on their own and not be dictated to from the ground. Right.

That’s a movement that’s happening: getting the compute power on the spacecraft to allow it to do the computation, apply AI or machine learning in real time at the data source, and then make decisions and execute on them without ground intervention. So there are really two trains of thought on that we’ve looked at. Because of our experience in flight software, we know how to go play in the satellite world, right. We know how to put stuff on orbit. And there’s an element of that which has long-term potential.

If you control the satellite software and the ground software, it’s a really powerful ecosystem that we’re building. Using containers, we can push a new security profile or a new application to the spacecraft and allow Major Tom to manage that system. So we’re looking at where we fit into this whole thing. It’s still new. We have different constraints with compute power on orbit, with just actual energy. Right.

And so you’re constantly fighting those, plus the heat they create and the distances involved. There are a lot more complications. So it’s not evolving as fast as it is on Earth, but it is there. You’re definitely going to see space companies with Edge computing all over their websites. We’re not one of them, but there are those companies. And so we’re working with our customers to understand their needs and what they’re doing so that we can be a part of their ecosystem moving forward.

Well, the irony is that you’re effectively Edge hipsters. You were there before it was cool. Kubos, in effect, is the Edge OS, right? You could almost say your tagline could be, we’ve been to the Edge and back. Right. It’s because you realized the problem where you could have the most impact was mission control. Right. But you’ve understood the other side. You understood the payload, you understood the Edge requirement, and that allows you to be so focused and very pragmatic and fanatical about solving this problem with Major Tom.

So down the road, when someone says, hey, we want to take this a little bit further and move it to another location: you do air gap, you do all these things. You’ve had to think that stuff out and execute on it. It’s pretty amazing; the company could go interesting places, for sure.

Yeah. We have the technology and the experience to go a lot of different ways right now. In the short term, we’re full steam ahead on Major Tom. Right.

Building this product to really manage the ground infrastructure for your spacecraft operations, that’s where we are. Where we go in the future, we have a lot of different visions that we want to see come to reality, and it’s pretty exciting what we can do. Software is really going to give new life to these missions, to this hardware. Once you launch the hardware, that’s what it is. With software, when we build that infrastructure and do it in a safe way, we can constantly give new life and new missions to old hardware.

And I think that’s going to change things. There’s a case to be made that there will just be new server farms in space. Right.

Amazon is just going to move it all there, and you won’t care whether it’s on Earth or in space, as long as we can increase that communication and make the latency go away. But anyway, there are complicated problems, big problems to solve here, and in terms of where Major Tom fits in the future, we’re focused on communication and bandwidth optimization. That’s always going to be a huge problem, with FCC frequency allocations moving forward, people experimenting with laser communications, and satellite-to-satellite communication now becoming a thing. And so I personally believe that the communication bottleneck that’s coming, one we’re already feeling the squeeze of, is a major place where we want to plant our flag, to be part of the solution.

We want to be part of optimizing, really, the communication channels of this network.

Most people wouldn’t even think about that, or they would get out of the business. You’ve chosen some hard problems to solve. And I don’t just want to say hard or difficult or challenging; making it commercially viable, this is a pretty incredible thing that you and the team have taken on. What made you think, this is a problem I need to solve, and I think we can do it?

Yeah. So KubOS is how we got into the industry. My partner really had the idea for the flight software because he built satellites, and he was trying to integrate different subsystems that were built by different manufacturers to talk to each other, and they weren’t standardized across any sort of platform. He had to build it all from scratch. Right.

There had to be a better way: we could build a system that is already integrated, or that makes it easier to integrate these subsystems. So KubOS came first. Then, as we spent time in the industry understanding the customers and our partners, we realized there was a huge need around how they were doing operations. There was a need for scalability, for new practices, new architectures, new development speeds that we weren’t seeing. And so we saw an opportunity to build Major Tom.

We had the networks, we had the relationships, to present this product to people quickly. And so we did, and we’ve had success doing that so far. I didn’t come from the space industry, so I had to really dive in and learn it from an outsider’s perspective. In operations, you have three major phases of a spacecraft’s life. Right. You have the development phase, where you’re designing, building, and testing the spacecraft. You have launch, which is a big moment in itself. And then you have operations.

Out of the three, the longest time period is operations. Right.

But which one is most costly? What’s the most expensive bucket? It used to be that development and launch were the most expensive buckets. So the industry created CubeSats, and Moore’s Law created cheaper and faster components, so we lowered the cost of development significantly. Obviously, SpaceX has come in and focused on the launch problem and lowered that, and other companies like Rocket Lab have come in and lowered the cost of launch while improving the reliability and speed of launch cadences. But no one’s touched operations, and operations is this long-term expensive bucket.

It now takes disproportionately more time and money than the other two buckets. So that’s really what we’re trying to solve. We do have tools for development and testing, but we’re really looking at lowering that operations cost, so that if we lower the cost of the entire life cycle of the spacecraft, we will make space more accessible. And while that’s kind of a token thing right now, people want to democratize space, it’s almost becoming cliché to say, the truth is, if we can get the price down, this is going to increase development, especially if we use skill sets that already exist in the world.

Software engineering is a huge skill set that has changed our world completely. If we apply it to space and give the industry more access to those skill sets, we’ll see what else we can do. There are more software engineers entering space, more software engineers building software or building software companies in space. So it’s just great. Anyway.

It’s a beautiful, empowering loop. Right. And if you don’t mind, we’ve got a few minutes left. I want to touch on TechMill and the ecosystem and your participation, because, like you said, you weren’t born into the space race, but you’re in it now as an entrepreneur. What are the ways that you see excitement in that startup community, and where can we give back?

Yeah. So TechMill started before Kubos. It was a nonprofit in the town in Texas where I was living. A bunch of technology and entrepreneurship enthusiasts got together and decided we needed to create some sort of nonprofit organization to help other entrepreneurs, to at least give us a sense of community. So we did events. It was actually the first co-working in our town, started in a coffee shop. Then we moved to an actual co-working space, and it has since spun off and done its own thing. That’s actually where I met the partner I started Kubos with.

He was the President of the organization. I was the Treasurer, and we started working together. That’s how we met. Kubos was born out of TechMill to some degree. And it’s a nonprofit that still exists. They do developer evangelism, education, and community building: a community of people who are interested in tech and in startups. When Kubos was taking off and gaining traction, I stepped down from the board of TechMill so I could focus on Kubos, and I’m now no longer in Texas.

I’ve actually moved to Portland, Oregon, at this point. So TechMill is doing great, but I don’t have any involvement in it and haven’t for a couple of years.

But it is amazing if you think of communities of purpose, and there are so many out there; it is a beautiful thing. Ultimately, yours is exactly the success path that any community of purpose should have: you shouldn’t be running it for 30 years like a lifelong member. Contributing and being a part of it is one thing, but you ultimately create something, you sort of parachute out of it into a new thing, and you prove that the value was there. And then somebody else says, hey, check it out, Tyler used to be our guy. That gives them something to aspire to, right?

Yeah. TechMill was a really interesting point in my life. I was coming out of another company that I had just shut down. It wasn’t technology driven; it was a service-based company, and I was looking to get into tech. It wasn’t space at that point; I was just looking to get into tech, and I needed new networks and new people to meet beyond what I had been exposed to. And so, with TechMill, I went to just a community event being put on for people wanting to share big ideas, right?

It didn’t matter the context. I went there, and they talked about creating this conference for technology people, for software engineers, and they were looking for volunteers to help run it. And I volunteered. I think that’s a really great through line in my life: I’m not afraid to do things I don’t know how to do. I didn’t know how to run a conference, but I jumped in anyway. That led me to start a nonprofit, which I didn’t know how to run. And it led me to meet Marshall and build a space company when I didn’t know anything about space.

It’s just a continuation. But you’re right. TechMill has thrived and has done a lot of great things and supported a lot of different startups, Kubos being one of them. So we have a special place in our hearts for TechMill, but that is really what it’s supposed to do: incubate you a little bit, give you some resources and connections, and then kick you out. So that’s what we did; I did it myself. And it has worked out so far.

Yeah. And those things right there, and I think for folks that are listening, too, it’s just a reminder that there are great communities of purpose like that out there, whatever your interest is. And it’s very helpful, at least to find that birds-of-a-feather sort of opportunity. It gives you a chance to share your ideas, to let them out with people. And if nothing else, you just meet amazing people. Obviously, the lack of in-person opportunity has drastically changed how we develop and nurture these communities, because it’s a lot harder.

Like, we’re tired of staring at bloody Zoom screens all day long. The last thing you want to do after a full day of Zoom meetings is go to a three-hour evening Zoom session with people. I hope that we get to the other side of this soon and we can get back to those things. And you’ll see a lot of interesting stuff come out.

Yeah, I agree with you. It’s been a challenge, but yeah.

So I guess, for folks that want to find out more and want to get connected to you, Tyler, obviously we’ll have links. First of all, there’s so much going on, and I didn’t even talk about the super launch sequence you’ve had. August was a huge month for you. You’ve got customers doing incredible stuff. I feel bad that I didn’t open with that, because I was excited on your behalf for all of the stuff that you were involved in, and that’s really cool. But for folks that want to get connected, what’s the best way to do that?

Yeah. Our website is www.kubos, K-U-B-O-S, .com. So that’s a great place. We also have a podcast there that you can listen to, where we interview our customers and our employees and give you insight into pushing cloud adoption in our industry. I’m on Twitter, if that’s a thing, but I don’t talk a lot. I’m there, though. So, yeah, our website is the best place to get hold of us.

And students as well, right? There’s a great opportunity there: you’ve got the academic access path. There are different ways that people can get involved, which is pretty cool. Thank you for doing all that you do.

Yeah. I appreciate it. Thanks for having me and giving me the opportunity to speak to your audience and share my story and what Kubos is doing. I think we’re really in an interesting place right now.

Onward and upward it’s going to be. I’m excited to see the future; you’ve got a lot of good stuff in it. Thanks very much, Tyler.

Yeah. Thank you, Eric.


Rob Hirschfeld is CEO and co-founder of RackN, leaders in physical and hybrid DevOps software. He has been in the cloud and infrastructure space for nearly 15 years.

This is a special episode with Rob returning as the guest for his 4th podcast and for the commemorative 200th episode!  We discuss how to unlock the power of multi-cloud automation, the challenge of human ops, and how we are finally reaching an “overnight success” of true bare-metal provisioning and multi-cloud automation and operations.  

Follow Rob on Twitter here: https://twitter.com/zehicle 

Check out the awesome work by RackN here: https://rackn.com 

Subscribe and listen to the 2030.cloud podcast here: https://soundcloud.com/user-410091210 

Transcript powered by Happy Scribe

Wow, that’s right. 200 episodes. You are listening to the 200th episode of the DiscoPosse Podcast. My name is Eric Wright. I’m your host, and holy moly, this is really kind of crazy and awesome. I really just want to say a big, huge thank you to all of you who’ve listened and to all the amazing folks who make this podcast happen, including the amazing friends over at Veeam Software. So give a shout out to them and drop in for a visit. Go to vee.am/DiscoPosse. They’ve been fantastic supporters of me and my whole community of creators here.

So thank you to the Veeam team again, vee.am/DiscoPosse. Not just because they’re great. They actually have the best data protection platforms in the entire universe. That’s my opinion. So go check it out. And on top of that, if you want to celebrate 200 amazing podcasts, you’re going to need to stay awake. How do you do that? You drink diabolical coffee. That is because it’s the most devilishly good coffee and we’ve got the most diabolically awesome swag, including really cool stuff, which is coming up for the holidays.

So get on in. Some really cool slick mugs are showing up there. So go to diabolicalcoffee.com. And one last amazing thing: not just your data needs to be protected, but your life, your data in transit. The best way to do that is to make sure you use the fine folks at ExpressVPN. I’ve been a fan of VPNs for a long time for a variety of things. First, functionally, to protect your data in flight, in transit, wherever you go, because I travel a lot.

And on top of that, going one step further, you can do cool things like testing from different locations and locales, and testing latency in your network when you’re doing web testing. I’m a big fan of doing that. So do that. Do that thing. Go to tryexpressvpn.com/DiscoPosse. Again, that’s tryexpressvpn.com/DiscoPosse. That’s it for the live reads for this one. And speaking of live reads, this is live and awesome. Well, it was live when I did it. I guess technically every recording is live when you do it.

But this is Rob Hirschfeld. Rob is a good friend. He’s also the founder of RackN, the inventor of Cloud. Oh, yeah. You’re going to hear about that story. So I think this is really worthwhile to jump on in. Thank you to the folks who do this thing and support this podcast. Make sure you share it. Click subscribe. Go to Rob’s site at RackN. Check out the 2030 Cloud podcast. Also fantastic. And with that, actually, the funny thing is, the episode just speaks for itself. There you go. Rob Hirschfeld on the DiscoPosse Podcast.

Hello, this is Rob Hirschfeld and you are listening to the DiscoPosse Podcast.

This is the fun part because I get to do the intro. You’ve actually done your voice for Binger before. I’ve been lucky enough, Rob. Now we’ve talked a few times on this and I wanted to have you on because this is super special for me. First of all, to thank you. You are one of the inspirations to why I do this. I kind of go back to sitting in Austin at OpenStack Summit and me with my crazy weird USB dual mic set up, just trying to put something together, and we got to first sort of meet and spend time there, actually at the summit.

And obviously we’ve run a lot of miles, both in the tech circuit and quite literally on the ground at these events. But this is the 200th. I had you on for my 100th episode, and this is the 200th episode. So that’s why it was perfect that we got a chance to put this together. So thank you for inspiring me both in business and in life. And of course, the podcast is the third piece of that. It’s been a wild ride.

You’ve been a valuable friend, and I’ve been enjoying it. It’s fun, because with podcasting, you get to listen to people talk vicariously. And I love what you’ve been doing with the podcast and sort of where you take it, the conversations you have.

I’ve been lucky enough to spend a lot of time with you. But for folks that are new to you, let’s have you do a reintroduction, and I’ll tell people to go back and catch up. I think we’re at, like, four podcasts we’ve actually recorded together on my side and a couple on your side here and there as well. But let’s give them the full meal deal on Rob Hirschfeld.

It’s interesting because I’m about to celebrate 20 years of inventing the Cloud. That’s one of the claims to fame I sort of keep on the down-low, but Dave McCrory and I need to get out and tell people a little bit more about it. We started a company over 20 years ago now, where we were the first people doing virtualization in the data center at any reasonable scale, and we filed some patents on it that are about to expire. We won’t have to worry, but we never made any money from them.

They got locked up by startups and then the Quack acquisitions and things like that. But yeah, so I’ve been doing the data center automation and virtualization business for a long, long time. So it’s very true to the theme of what it means to do virtualization and data center operations at scale. Like you said, I got really involved. I was at Dell and got really involved in OpenStack at the time when everybody was worried that VMware was going to take over the Cloud and Amazon was a nuisance, not necessarily the Juggernaut that it’s become.

And then, well, believe it or not, seven years ago, RackN is now seven years old. We left Dell with this sort of idea that OpenStack was going to have trouble because there weren’t good operating paths, which is sort of what we’ve seen play out. This was pre-Kubernetes, like, I was involved in Kubernetes early on, and actually, I saw the same thing with Kubernetes and was concerned about the operational patterns, too. And so the theme sort of for me, career wise, and then RackN specifically, is that companies aren’t running infrastructure well.

RackN set out to say, all right, how do we help companies run infrastructure better? This idea that “you’re not smart enough to run a data center” is amazing marketing from Amazon’s perspective. What’s crazy to me is that so many people in our industry just go along with it. The HPs and Dells turn around and say, oh, well, I guess our customers are too stupid to use the gear that we sell them. And that’s always insulted me at this sort of foundational level.

Even the OpenStack stuff that we were doing always sort of got in the way of, like, oh, of course it’s going to be hard to operate. That sort of goes with the territory. And even with Kubernetes now, I was just listening to Brian Gracely with the Cloudcast, and he’s like, Well, Kubernetes is really hard and complex, and we accept that. And so it strikes me as a problem in our industry that we allow infrastructure to be so hard to operate. And we spend a lot of time talking about, like, needful complexity versus inherited complexity versus collaboration cost.

That’s my bad. So we’re at a point now with RackN, sorry for the long intro, but we’re at a point with RackN, after seven years, where we’re doing significant business at global scale in operations, we’re breakeven profitable on the business, which is great for a startup, and we’re sort of seeing things working the right way. And now we actually have to tell people what we’re doing.

Yeah. You’ve got three more years and you’ll be an overnight success. The typical thing is the ten-year mark where people are suddenly like, ‘Why haven’t we seen this before?’ We’ve been here the whole time. You should have seen us. We’ve been at every event, we’ve been contributing in code, we’ve been contributing in community. We’ve been contributing in our voice.

And there’s a perseverance that’s required to do this and a bootstrap on top of that. So that’s a big deal for people to do that.

It’s been crazy. I think some of it comes back to letting people catch up with your vision.

Yeah.

There’s definitely things that I’ve watched us do that make our vision more accessible. But I’ve also watched people catch up to the vision, and that’s, I think, a lot of times with startups, if you’re having trouble communicating the idea, it could be that you’re wrong or it could be that you’re ahead, right? I mean, that’s what my virtualization experience was. We knew VMs were going to be essential for running a data center in 2000, but we spent so much time telling people, hey, these VM things are real, and you should use them, and they’re better than hardware infrastructure for this purpose, that by the time we’d won that battle, we lost the war from a startup perspective.

And talk about another bootstrapped example in the VM world, right? Literally VMware. I hadn’t even realized until not that long ago that VMware was originally bootstrapped. They didn’t go get VC. I was like, what? But we look back on it now, and it’s kind of funny that, with the momentum they have today, everything started with sort of breaking the mold on human belief in technology viability and the trope of “we can’t use virtual machines because we need hardware performance.”

We can’t use the Cloud because we need data center protections and security and controls. We can’t use Kubernetes because our applications can’t live in ephemeral environments. You show me a “can’t” and I’ll show you a startup opportunity. It’s really wild to see this transition, to your point. The vision is there, and the perseverance to maintain that vision and execute against it for long enough for the industry to finally understand that, okay, yeah, this is a thing. And it’s tough to find people. Erica Windisch is one of my favorite examples.

Erica has gotten to the 90-yard line of a 100-yard dash, like, five times in a row and then finally got to the finish line, because for a variety of reasons she had never been able to see something through to fruition. And she was able to do that with IOpipe and went to a successful exit. And I actually haven’t caught up with her in a long time; I should, again, because she’s just such a fantastic person.

Yeah. I remember this at OpenStack Paris, fighting for an early Docker and saying, this is a big deal, you need to pay attention. And the struggle of being able to explain why something is important. And this, to me, is part of my journey from being a technologist to being a CEO: understanding why and how to explain the business value of what you’re doing. Because as technologists, we all want to be like, this is shiny and pretty, and it makes this easier, and that’s enough of a reason. But it’s not enough, and we need to accept that just because something is better or easier or the new thing, it’s not necessarily going to actually become a success.

That’s always a challenge for us. It’s taken us a long time to get better at expressing how much the complexity of what people are building is an actual problem. You run around in tech circles, and it’s like, things are so complex. I’m scared of the complexity. I’m worried about the complexity. I started doing this stuff about a year ago on the Jevons complexity paradox. If you’re not familiar with Jevons’ paradox, it’s an economics-of-technology thing that we need to understand better: when you make something easier or cheaper,

people use more of it. And so about a year ago, I became convinced that we have a complexity paradox going on, where we’ve made it super easy to use cloud services and things like that. There’s no downside, no apparent cost in that. But hiding that complexity has made everything much more complex, and the complexity starts bubbling to the surface. Like the Amazon downtimes, where one service fails and then cascades to their whole infrastructure. We see this pattern over and over and over again.

Or then you offload your services to a third party who uses the underlying services in Amazon. So you’re hosed anyways, right?

We are, like, one step away from Amazon going down because they had a third party that depended on a service that was in Microsoft that depended on a service that was in Google. And the Google service failed because the time got out of sync, or the certificate wasn’t updated when it was supposed to be updated.

Certificate. That’ll be what takes us all down. It won’t be DNS. It’ll be some goofball who didn’t set his calendar to renew an SSL Cert.

We can actually predict this with 100% certainty. It’s going to be an SSL cert that expires, that depends on a DNS entry where the person no longer has control of the DNS record that’s necessary to sort of create and renew the certificate. And so that’s going to be this cascading failure. But it’s totally conceivable that the Clouds actually have interdependencies on each other that they don’t fully anticipate. And that should scare everybody. The challenge is being scared of the complexity of the problem and understanding the actual cost of that complexity, and why somebody would, from a business perspective, pay money.
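The failure mode Rob predicts, an SSL cert quietly running out, is checkable in a few lines. A minimal sketch (hypothetical helper, standard library only) that computes days remaining from a certificate’s notAfter timestamp, the kind of check a renewal alarm would run:

```python
from datetime import datetime, timezone
from typing import Optional

# Certificate "notAfter" values use this OpenSSL text format.
NOT_AFTER_FMT = "%b %d %H:%M:%S %Y %Z"

def days_until_expiry(not_after: str, now: Optional[datetime] = None) -> int:
    """Days remaining before a cert's notAfter date (negative = already expired)."""
    expires = datetime.strptime(not_after, NOT_AFTER_FMT).replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

# In practice you'd pull notAfter from ssl.getpeercert() on a live connection;
# here we just evaluate a parsed value and would alarm well before zero.
remaining = days_until_expiry("Jun 01 00:00:00 2030 GMT",
                              now=datetime(2030, 5, 2, tzinfo=timezone.utc))
assert remaining == 30  # renew long before this counts down
```

The point of the sketch is that the expiry date is fully knowable in advance; the cascading failures Rob describes happen when nobody owns the check, not because the failure is unpredictable.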

But it’s really simpler than that. It’s really: take action on the problem. This is what it always comes back to. If you’ve identified a problem, how do you motivate somebody to take action to fix the problem or to change direction or things like that? Right. And that’s super hard. People are busy.

We need to come up with a sysadmin heuristic, the ability to relate to them that the problem they’re creating with a DIY solution is actually greater than the value and ROI of investing in it. Like, “technical debt” is just such a throwaway phrase that we attach to something, but it gives us a free pass to ignore what’s actually happening rather than identify it. And it’s sad, because you and I talk all the time about this stuff, and we see it in real environments day in, day out, where you just celebrate the heroics of complexity.

And some of it, I’m starting to think about in terms like “complexity budget.” So, you know, I do this thing. We actually have two hours a week where we have people come together and talk about DevOps or the future. It’s this Cloud 2030 discussion group that I started as, like, a pandemic hallway track, and we’ve been going over a year, and then we turn them into podcasts so people can listen to them. But we...

Sorry, my dog is, hold on.

Let’s talk about that after. But like the fact that what 2030 Cloud is now versus how it began, that’s actually quite an interesting path you’ve taken.

It’s stunning, because we have a dedicated core, and then people come in as they want to talk about topics, and we identify topics. And what’s amazing is when you get a group of people talking about the future and infrastructure, week to week to week, these themes emerge out of those discussions that are just stunning. Right. So we talk about complexity or coupling or the legal ramifications of jurisdictional changes that could impact how technology is formed. The threads here are crazy. And there are some things that are super impossible to talk about.

Like, we tried to talk about networking. Networking always double-clicks down into infrastructure or people or technology or jurisdictions. Security is the same way; it’s super hard to sink into a simple security problem. And then the complexity comes back in over and over and over again. And this idea of having a complexity budget and understanding what you’re doing. The point that you were making about the sysadmins and the technical debt, though, is that a lot of this is organizational bias towards siloed behavior, and it’s actually not just the organizations.

It’s actually that the tools play to that, because that’s how you sell into the market. So we are so used to operational silos, and then to sell a tool or a platform or product into an operational silo, you build tools that work for operational silos. One of the things that RackN’s done, and I didn’t even realize we were walking into this trap, is that we built tools that crossed operational silos, right. Because our goal, our customers’ goal, was end-to-end operations. And I see this in conferences all the time.

You get people, the CEO or whoever is in charge of the conference, the big speaker, standing up and saying, I must have an end-to-end, single-pane-of-glass, one-ring solution. Right. And, you know, the Eye of Sauron flashes in the background, and everybody sort of watches and they’re like, yes, that’s what we want. And then they leave that session and they go talk about their siloed tools, how they’re not going to interact, how the network team is the enemy, and we have to fix it without them.

And so we’ve created this interesting situation where it’s very clear that you want an end-to-end solution. You need zero-touch operations. For us, somebody’s wheeling a rack into a data center, right. We do this for banks a lot, and we’re software, so the banks are doing it; we’re just making it possible. But you wheel a rack in, in-country somewhere, and they turn on that rack, and they want that event to turn into working, productive equipment inside of an hour, and then they want it to be completely the same process that they use in every data center.

Right. Or if they need to reset the data center because they’re worried about ransomware or something like that, they can push a button, go get a coffee, and then come back and have the system all set. Which sounds simple, but to do that, you’re actually talking about crossing 15 or 20 different organizational silos in the bank to get all that stuff to work. Right. And it’s a super hard problem, not because you can’t do all those things. It’s a super hard problem because each silo resists integrating with the other silos. It’s one of the things that made Cloud a big deal.

It’s like, oh, my developer can set up a network because the Amazon APIs have networking. My developer can set up a compute system. Yay. It doesn’t mean they’re doing it in ways the networking team wanted.

Right. Yeah.

The thing that you think about from all those perspectives, though, is that we’ve incented the industry to build silo, silo, silo, silo and tools to do silos, and then we haven’t created the incentives to connect the dots. Right? I mean, DevOps conferences are full of people crying on each other’s shoulders about how misunderstood they are.

I’m sorry to be pejorative. I’m not trying to be pejorative about DevOps conferences; it’s actually the way it goes. It’s like, we need to talk about the culture that would allow me to work with another team. And they all say that, and then they go in the next room and they’re like, these are all the reasons why I can’t work with the other team.

Right. You tell them that you’re an ops-focused person. I pulled this thread the other day, and it had the precise effect that I thought it would: I actually said that your GitHub heatmap is actually a meritocracy, right. I meant it in the way it’s often presented to me by people all the time, that if I’m doing infrastructure as code and I’m dabbling, the moment I go to a DevOps conference I feel it when I pull that thread. And again, I’m not talking negatively about the DevOps conference, but for the audience there, the community that’s there, a GitHub heatmap is sort of like a great vendor T-shirt to them.

It’s a thing they wear proudly and a thing that they show off. And so when you get there and you don’t have that, you don’t necessarily have the skills to walk into the room that screams about inclusivity, and then you get shoved out the back because you didn’t write a Perl script and you don’t know who somebody else was at one point in time. I feel that sort of battle. Like Gartner, at their recent event, they talk now about XOps, and I rarely see something that I find kind of cool in some of the Gartner stuff, because they have to be careful and generic with a lot of things.

They’re talking about making predictions, which is a really tough thing at the level they’re working at. So they talk about XOps: DevOps, AIOps, MLOps, ITOps, NetOps. Each of these silo-breaking methodologies has created its own silo, and we need a cross-silo breaker. Like, we need an abstraction layer for the silos that were really meant as abstraction layers to silos.

And this is actually a hat tip to Gartner, because they’ve really been doing something that we think is a good description of this, and Thoughtworks has done it too. They call it infrastructure pipelines or continuous infrastructure automation pipelines; we consider them automation pipelines. They’re actually showing all of these things fitting together, and it’s different than value stream mapping, which is similar. That’s, like, I need all my teams to work together and understand how I generate value. It’s important, but they’re actually elevating it to say all these silos need to be connected in a pipeline, like a CI/CD pipeline.

But for infrastructure. And we found that nomenclature incredibly helpful for this. The difference being that what we’ve been doing with RackN and Digital Rebar, our product, is we’ve actually built the infrastructure pipeline as a platform, whereas the...

There’s thunder going on in the background, you can probably see the lightning in the window.

You’re in the midst of a good Texas storm.

I got my UPS and I should be set, but definitely much needed rain.

But the idea here, that I can run a workflow all the way across all these pieces as a platform, is actually a critical thing. When Gartner shows that, they’re like, and I’ve got 20 different tools I have to use to connect all these dots together. And the lift on that organization is super high, and the complexity that you create is super high. So we’re excited to see a name for it: the infrastructure pipelines concept, which people seem to sort of get intuitively.

Like, okay, I’ve got CI/CD pipelines for code. They don’t really work that well for infrastructure. We can talk about GitOps and how that’s sort of this very narrow band of things, but it doesn’t really work for infrastructure. So I need a pipelining system that connects all these tools I’ve got for infrastructure.

It’s like Jenkins for your hardware. When you can give it a name and a relative example. I’ve totally stolen your infrastructure pipelines when I talk about the stuff my team is doing at work, because we’ve got the app pipeline, which people totally get; it makes sense. There’s both application and infrastructure pipelines, and when it comes to doing things around decision automation and infrastructure automation, that’s where we’re seeing more of it come into play. Originally it was like, just do the thing: the hypervisor manager will be the layer that people work with, and so we’ll attack it there.

But what we’re finding more and more is that, no, they’re using some kind of a pipeline to manage that abstraction layer, and they’ve moved away and realized the true control plane is the human control plane, which lives in the pipeline. The pipeline is the manifestation of the physical human runbooks that we’ve played out all this time, and now we can actually relate it into product. And this is why I’m on team RackN; I have been for a long time on this.

Thank you. It’s interesting to us, and it’s useful to bring up the human runbook piece of this, because we do want this end-to-end component. And one of the things about the pipelines for us, because we’re a product company: us building a platform that gave somebody a pipeline would be a pat on the back, but it’s not our objective. And actually, this is worth explaining. What we try to do is we want the pipelines we build to be reusable and standard. And I watched this, and this goes back to RackN formation history: we used to do it with Chef, then switched over to Ansible. Right.

And all those tools are great, really good, actually, but they aren’t designed for reuse. What we see in the industry, and Terraform has the same thing in spades, is that it’s really a challenge. We see people using the tool in similar ways, but not with shareable components. Like, you get a Terraform provider, but when people build a plan to talk to a piece of infrastructure, those plans are not typically reusable. They’re not decomposable. Right. So you might have three teams using Terraform to interface with the same cloud, but doing it in different ways, and nobody can audit it, nobody can check it.

It becomes really a problem. And that’s where the pipelines break down. You can’t build a pipeline easily if the things that you’re building the pipeline on top of don’t have a degree of standardized interconnect between them.

This is the one thing, just to stick there and pull on the Terraform piece: even in their own docs, they’re very clear to tell you this is a bad idea. If you are doing data interplay between external systems, it’s not going to go well. You’re creating rigidity, and things can change, and then your runbook will no longer be valid. I respected that they put it in there, but like any good stuff you put in documentation, it’ll never be read, and people are still going to try it anyway.

And you and I have talked about this before, right? The pattern in Terraform, and Terraform is easy to pick on in this case, is that it is a single source of truth. They designed a tool that has a single source of truth embedded in it that assumes it can actually control the environment, which is handy if you have to build an environment. But infrastructure changes outside of your tool, before and after it runs, and even in between the runs of your tool the infrastructure changes. The idea that the state is controlled by Terraform is a failure at the pipeline level, because pipelines are part of a flow, and so things happen before your tool operates and things happen after your tool operates.

And so in building a pipeline, you have to have this idea of an incremental state, and your state has to be adaptable. So if you’re messing with the infrastructure, you have to expect that something might change outside, and you can take that information in and say, oh, look, I just learned this. And there’s a ton of cases, especially in configuration, where, like, you build a cluster and the keys for that cluster aren’t known until the cluster is built, right?

You might get a token or a secret or generate a certificate. That’s what makes Kubernetes so hard to install. It’s not Kubernetes; Kubernetes is a simple Go binary that could run under systemd with a ten-line install command. What makes Kubernetes hard is the fact that you have to generate certificates, if you do it right, for every service that interacts with it. Distributing the TLS infrastructure is actually what made the whole “Kubernetes the Hard Way” hard: the TLS infrastructure you had to build, not the binaries.

The binaries are the least of your concerns.

Yeah, communication between nodes is, like, the simplest possible thing. The scheduler out of the box does what it’s supposed to do. What’s hard is actually creating a proper, secured, operational infrastructure that’s resilient, too. Right.

That was the one thing. I’m probably the only person who talks about Nomad who doesn’t have a hashicorp.com email address, and I’ve even got two Pluralsight courses on it, which are lightly attended just because it’s still early days with a lot of that stuff. But I’m banking that more and more people are going to dig in. I like that it has stuff that solves a lot of these problems. However, it just moves the problem goalposts a little bit to a different area.

At the end of the day, for something like that, your development team or whoever is going to use a tool that should abstract out how the containers are operated. And so we see this. Like, we use Terraform in our pipelines to do cloud provisioning because people are used to it. The cloud interfaces are actually pretty good, even though they’re heterogeneous. We deal with heterogeneous stuff pretty well, because that’s what infrastructure is. But at the same time, when we do it, we designed it in a way that doesn’t require Terraform to be the interface.

So if somebody says, oh, wait, I don’t want to use Terraform anymore, or HashiCorp becomes hostile and Terraform isn’t a good utility, we could switch. Because at the end of the day, it’s not about whether you want to use Terraform or not. Just like Nomad versus Kubernetes: nobody cares, as long as your containers are running and schedulable. So the idea is you want to break it back into what unit of work needs to be done at that phase in the pipeline. And then you can start substituting, which is exactly what CI/CD pipelines do.

It’s like, yeah, look, I started with code, I needed to deploy it, whatever you’ve got. And then over time, you keep adding new things into the middle of the pipeline, or you switch tools and you’re like, oh, here’s a better security scanner, I’m going to swap it in. And nobody notices. The pipeline keeps going; you just swapped out a segment that does the job better. And that abstraction becomes a really useful thing to building all these systems. You have to have that connective tissue. You have to have a way to move state across a pipeline.
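The swap-a-segment idea Rob describes can be sketched as stages that share one contract: each takes the pipeline’s state and returns an updated state. A toy sketch in Python (all names hypothetical, not RackN’s or any CI system’s actual API):

```python
from typing import Callable, Dict, List

# Every segment honors the same contract: state dict in, state dict out.
Stage = Callable[[Dict], Dict]

def run_pipeline(stages: List[Stage], state: Dict) -> Dict:
    for stage in stages:
        state = stage(state)  # state moves across the pipeline
    return state

# Two interchangeable "security scanner" segments.
def basic_scanner(state: Dict) -> Dict:
    return {**state, "scanned_by": "basic", "issues": 3}

def better_scanner(state: Dict) -> Dict:
    return {**state, "scanned_by": "better", "issues": 1}

pipeline: List[Stage] = [lambda s: {**s, "provisioned": True}, basic_scanner]
out = run_pipeline(pipeline, {})

# Swap the scanner segment; nothing else in the pipeline changes.
pipeline[1] = better_scanner
out2 = run_pipeline(pipeline, {})
assert out["scanned_by"] == "basic" and out2["scanned_by"] == "better"
```

The shared state contract is the “connective tissue”: as long as a replacement segment reads and writes the same state, the rest of the pipeline never notices the swap.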

It’s been fascinating for us. Yeah.

The thing that I really want to pull out of this is, you mentioned HashiCorp as the example, right? What if HashiCorp becomes hostile? And we always have this thing, like, even with Kubernetes, people are like, oh, there’s such a vast group of people worldwide who are supporting Kubernetes, how can it go sideways? One word: Docker, right. To the point now where we’re questioning whether it’s even viable to maintain, now that Docker Desktop is licensed. And it is entirely possible. Look, Mirantis was a good example, like the largest-ever funding round in open source history, $100 million.

And I have not actually heard Mirantis mentioned, except in historical reference, for quite a while. They’re doing stuff now. They were the Kubernetes company, and they were originally the OpenStack company. They’ve had to pivot and adjust, and the world has not necessarily been friendly for them as a result. It’s tough. So Docker went through the same thing: when you wrap a business around an open source product, and then there’s a divergence of belief systems in where it goes. We see it played out now, and now they have to make it commercially viable.

And so all of a sudden we have to detach. Like, this is the AWS risk factor in open source. So Kubernetes, no matter how large it is, I have to think about what’s the risk pattern. This is sort of the lock-in myth in a way, but as a methodology I need to think about preparedness.

If 2020 hasn’t taught us anything about supply chains, then you’re not paying attention, right. We have learned about physical supply chains. We’ve learned, going back to SolarWinds, about software and virtual supply chains. These are absolutely critical things that companies should be considering in how they look at building their software. And innovation is part of that supply chain. One of the things that we talk about with the cost of complexity is that when you build systems that are very complex, they end up being tightly coupled or having unseen coupling.

And that coupling actually makes it harder to innovate. Right. We just literally talked about the CI/CD pipeline, where you swap out something that works better. I could easily see it; actually, it’s very pragmatic. So say, I’ll stick with Terraform, you use us to provision with Terraform. We build a template; you like our templates, or you use whatever Terraform you want. But you could come back and say, you know what, I’m not using the provider that you’re using. The version I have is further back because the newer one hasn’t been tested.

There’s a new feature that I have to use in a cloud that isn’t exposed in the provider yet, because they lag. And so it is essential that your automation, for us the pipeline, has an extension point that says, oh, wait a second, if I need to make a call to an Amazon API or a cloud API or another tool that’s not factored in, I can add that into my pipelines without breaking other things. Right. And it’s subtle, but it’s so important. This took us a long time to realize, and longer to get right: even though I’m using a completely standard process, all of our cloud interfaces use the exact same pipeline, but all of them have extension points.

I actually just gave this talk at ADDO, and I wish I had more time to show it, but each cloud has its own layer of, oh, these are the things that I have to do to service that cloud through Terraform. Same actions that I run in Terraform, but the way you do the work differs, not just plan differences. Like, for Linode, you have to open a firewall port. For Google Cloud, it doesn’t work right, so you have to SSH in and use Ansible to join the machine.

Each one has some wrinkle, and you can easily imagine: my company makes this additional call in Amazon that isn’t in a Terraform plan, or I can’t put it in a plan because the sequencing is wrong. And so you’re like, how do I add my unique wrinkle into that work? Normally you would fork it, you would have your own version of it, or you’d write a Bash script. What we worked out with the pipelines that has been game-changing for us is that there are extension points in how pipelines are built.

It allows you to extend the pipeline, infrastructure-as-code-wise. And then, from that perspective, you have a very narrowly defined, oh, here is where I have to open up network ports in Linode, because they don’t have a firewall in place like Amazon does. Same inputs, different actions or slightly different paths. But I can go back and see exactly how it was different than the standard path. And then we do that for, like, Linux installs or VMware installs. That pattern of standard-with-known-extensions plays out in incredible ways.
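The standard-pipeline-with-known-extension-points pattern can be sketched as named hooks that a per-cloud layer registers into. This is a hypothetical illustration, not RackN’s actual design; the Linode firewall step is just the example from the conversation:

```python
from collections import defaultdict
from typing import Callable, Dict

class Pipeline:
    """Standard steps with named extension points between them."""

    def __init__(self) -> None:
        self.hooks: Dict[str, list] = defaultdict(list)

    def extend(self, point: str, fn: Callable[[Dict], Dict]) -> None:
        # A cloud-specific wrinkle: same inputs, slightly different path.
        self.hooks[point].append(fn)

    def run(self, state: Dict) -> Dict:
        state = {**state, "provisioned": True}        # standard step
        for fn in self.hooks["post-provision"]:       # extension point
            state = fn(state)
        state = {**state, "configured": True}         # standard step
        return state

linode = Pipeline()
# Linode's wrinkle: open firewall ports right after provisioning.
linode.extend("post-provision", lambda s: {**s, "firewall_open": True})

aws = Pipeline()  # no extension registered; pure standard path

assert linode.run({})["firewall_open"] is True
assert "firewall_open" not in aws.run({})
```

Because the standard steps never change, you can diff any cloud’s behavior against the baseline just by looking at what it registered, which is the auditability Rob is after.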

This is about protecting innovation.

Yeah. When it comes to drift management, and this is the other thing that we have to deal with, right? There’s stuff that’s particularly good at provisioning, and there’s stuff that’s particularly good at continuous configuration management, and never the twain shall meet. This is part of the problem that we bump into. Now, where does drift management come into play in how you’re approaching this problem?

Drift management is tricky, and there’s a couple of ways that you can slice it. Are you thinking that the system is drifting out from under the configuration, or are you thinking the actual?

The first is that the infrastructure itself moves, with the right level of abstraction, the right level of change that can occur. I used to bump into this with just Terraform, like just a simple Cloud, a persistent Cloud workload. And all of a sudden, for no particularly good reason, 22 days into me running my infrastructure, it gets reprovisioned because there is some drift. Terraform sees it, says no, and reprovisions my workload because it saw underlying drift in AWS. But I wouldn’t even have noticed; the workload was exactly the same.

But somewhere a host, an identifier, something changed. That was enough of a drift that it triggered a Terraform run.
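
The fail-safe alternative being described, detect the drift and surface it rather than silently reprovisioning, can be sketched like this. The function and field names are hypothetical; this is not how Terraform or Digital Rebar actually implement drift handling.

```python
# Sketch: compare desired state against observed state and report every
# drifted field, instead of acting on it automatically.

def detect_drift(desired: dict, observed: dict) -> list:
    """Return (key, desired, observed) tuples for every drifted field."""
    drift = []
    for key, want in desired.items():
        have = observed.get(key)
        if have != want:
            drift.append((key, want, have))
    return drift

desired = {"instance_type": "m5.large", "ami": "ami-123", "tags": "prod"}
observed = {"instance_type": "m5.large", "ami": "ami-456", "tags": "prod"}

drift = detect_drift(desired, observed)
if drift:
    # Stop and tell the operator, rather than reprovisioning the workload.
    for key, want, have in drift:
        print(f"drift on {key!r}: expected {want!r}, saw {have!r}")
```

Here only the `ami` field drifted, so only that one difference is reported; nothing is torn down or rebuilt on the operator’s behalf.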

It could actually be a change in the provider that you’re using. That’s one of the reasons you can now lock the provider, so you don’t get an updated provider that then interprets a value in a different way. The way we deal with that is that our state information is designed to be incrementally extended and incrementally updated. In very practical terms, we embrace PATCH as an API verb, as opposed to PUT, which means that we expect people to make changes to individual parameters or individual values in objects rather than expecting somebody to replace the whole value.
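
The PATCH-versus-PUT distinction is easy to show with two small functions over dictionaries standing in for stored state. This is a generic sketch of the HTTP semantics being referenced, not any specific tool’s state API.

```python
# Sketch: PUT replaces the whole object; PATCH changes only named fields.

def put(state: dict, new_state: dict) -> dict:
    """PUT semantics: caller supplies the WHOLE object; omissions are lost."""
    return dict(new_state)

def patch(state: dict, changes: dict) -> dict:
    """PATCH semantics: change only named parameters; the rest survives."""
    merged = dict(state)
    merged.update(changes)
    return merged

state = {"cpus": 4, "memory_gb": 16, "zone": "us-east-1a"}

# PATCH touches one value with tweezers...
print(patch(state, {"memory_gb": 32}))
# ...while PUT of the same change silently drops the fields you forgot to resend.
print(put(state, {"memory_gb": 32}))
```

That difference is exactly why incremental state updates are safer: a PATCH caller can never accidentally erase parameters it didn’t know about.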

Anybody making changes to a Terraform state file, they’re doing it with tweezers, and they know they’re doing something dangerous and crazy, right? It’s a bomb defusal. Sometimes you have to do it, but you’re going to wear as much padding as you can. And so for us, we know state changes all the time. So from a drift perspective, we work toward idempotency and not doing bad things, and telling you, hey, this value isn’t what I expected. I’m going to stop and not try to fix it.

Rule number one with infrastructure: stop if something isn’t what you expect. Don’t just keep going.

Works the same with fiber cables when you’re racking a server. If you feel resistance when you’re shoving the server back into the rack, you should probably stop and think about why there’s resistance.

We have this fight all the time, and actually we ended up adding retries as a programmable option, which is nice. So I can be like, hey, this thing always fails; one retry and it fixes it. But by default, we don’t do retries, because if something didn’t go the way you planned, then it’s wrong. Stop, figure out what happened, and fix it. And sometimes people are like, I don’t like that. We’re like, look, it’s much better to realize that it wasn’t what you expected. Fix it.
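
A minimal sketch of that "no retries by default" stance: fail fast and loudly unless the caller has explicitly opted a step into a retry. `run_step` and the flaky example are made up for illustration, not any particular tool’s API.

```python
# Sketch: a step runner that stops on the first unexpected failure
# unless retries are explicitly enabled for that step.

def run_step(action, retries: int = 0):
    """Run an action; retry only when the caller has explicitly opted in."""
    attempts = retries + 1
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except Exception as exc:
            if attempt == attempts:
                # Default path: stop and surface the failure for a human.
                raise RuntimeError(f"step failed after {attempt} attempt(s): {exc}")

calls = {"n": 0}

def flaky():
    # Fails on the first call, succeeds on the second.
    calls["n"] += 1
    if calls["n"] < 2:
        raise IOError("transient glitch")
    return "ok"

# This step is known to be flaky, so one retry is enabled deliberately.
print(run_step(flaky, retries=1))  # prints: ok
```

With the default `retries=0`, the same failure would raise immediately, which is the point: an unexpected result stops the run rather than being papered over.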

One further thing on that, if you don’t mind, Rob: timeouts are also one of the biggest areas of issues I’ve seen, with people just manually blowing out the timeouts in their, well, Terraform is a great example. I’ll run exactly the same build. I fully automated an EKS cluster, and everybody said, why would you do that? It’s the simplest thing, just use CloudFormation. But assume that I’m going to do it on Azure too with AKS, so I want to have a separate way. Maybe I’m self-annihilating my belief in the world by doing this stuff all the time, but I do it and I build it and it runs.

It takes like 17 minutes to have a complete EKS cluster. Fantastic. And then I go on a webinar and I go to do it, and it takes 42 minutes, because some weirdness inside Amazon just takes longer. And then if one thing slips beyond five minutes or ten minutes or whatever the default timeout is in Terraform, the whole thing just fails. And now I can’t just pick up where I was. I have to basically unwind it. But now there are timeouts on the unwind, because there are these weird interdependencies.

So you end up with this weird sort of ladder of dependencies, where time can change the ability for a dependency to exist or not exist. That’s the one where I’ve just rerun and retried. But even within that, the infrastructure could just take longer for some unknown reason, something won’t reply back in time, and then a perfectly working manifest will not work the next time.
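
The timeout problem generalizes: the same operation can take 17 minutes one day and 42 the next, so the deadline needs to be a per-step parameter rather than a baked-in default. This is an illustrative Python sketch of that idea, not Terraform’s actual timeout mechanism.

```python
# Sketch: poll a readiness check with a caller-supplied deadline, so a
# generous deadline and a too-short "default" behave visibly differently.

import time

def wait_for(check, timeout_s: float, poll_s: float = 0.01):
    """Poll check() until it returns truthy or the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        value = check()
        if value:
            return value
        time.sleep(poll_s)
    raise TimeoutError(f"gave up after {timeout_s}s")

start = time.monotonic()
slow = lambda: time.monotonic() - start > 0.05 and "cluster ready"

print(wait_for(slow, timeout_s=1.0))   # generous deadline: succeeds
try:
    start = time.monotonic()
    wait_for(slow, timeout_s=0.02)     # "default" deadline too short: fails
except TimeoutError as exc:
    print(exc)
```

The same check succeeds or fails purely based on the deadline, which is exactly how a perfectly working manifest can fail the next time it runs.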

Yeah. And it could be a dependency chain that you don’t actually have a real dependency on, or something that was misconfigured that’s never going to recover. What we did with infrastructure pipelines is we saw patterns like that, where you’re using a tool to do a whole bunch of stuff, and because the tool is biased towards a single source of truth or very atomic actions, Ansible’s like this, you build these immense playbooks and you run them, and then they either work or they don’t.

I’m wondering if that’s impossible. What we have done is go the opposite direction. So when we build a pipeline, it actually decomposes into very small units. And a lot of times we’ll leave units in and just say this is a no-op, because we know that in a different circumstance you might want that in, and you can turn it on later, or you can just make sure that it doesn’t impact the type of infrastructure you’re working with. That could be a whole hour conversation about how subtly and powerfully that standardization works. But because we end up running each component in what you described as a pipeline, the system would actually go in and say, oh, I’m running a cluster with 100 things in it.

Yeah, the cluster, or even multiple clusters, are going to have their own management thread that you can track and see. And it’s a pipeline that’s doing its work, but it’s coordinating actions on separate pipelines running on the different pieces of infrastructure you pulled in. And then that actually, and this is one of the big things that’s coming in the next release, pulls in this concept of resource brokers, where instead of the cluster running the plan, the cluster actually talks to a system that is responsible for providing resources in a generic way.

So that becomes a generic abstraction point, and that is actually what runs Terraform. Contrast that with what you’ve been doing: you’re running a Terraform plan, and then it has to go to Amazon and build a whole bunch of resources and do all this stuff. And if something gets stuck, you’re locked there with that plan. And the state for that plan is all of your infrastructure, and untangling that becomes like, all right, I’ve got to unwind it and try the whole thing again.

What we’ve been doing is actually decomposing that into all the units, and then letting each unit be its own pipeline. And that means that you could actually say, oh, I’m building a cluster, and here are all the resources I got spun up. That’s great. And now here’s all the downstream work I have to do. And if something breaks in that one task, you might actually be able to fix that one task, reassert it, and then continue. And then the other things waiting for that to happen would get triggered when they’re supposed to trigger. Which sounds more complex.
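
A hypothetical sketch of that decomposition: a plan broken into small dependent units, so a single failed unit can be fixed and re-asserted while finished work stays finished. The unit names and the runner are invented for illustration, not Digital Rebar’s actual mechanism.

```python
# Sketch: run dependent units; a failure halts only its own branch, and a
# later re-run picks up from the already-completed units.

def run(units, done=None, results=None):
    """units: {name: (deps, action)}. Returns (set of done names, results)."""
    done = set(done or ())
    results = dict(results or {})
    progressed = True
    while progressed:
        progressed = False
        for name, (deps, action) in units.items():
            if name in done or not set(deps) <= done:
                continue
            try:
                results[name] = action()
                done.add(name)
                progressed = True
            except Exception as exc:
                results[name] = f"failed: {exc}"  # halt this branch only
    return done, results

broken = {"flag": True}

def join_nodes():
    if broken["flag"]:
        raise IOError("cert mismatch")
    return "joined"

units = {
    "vpc":     ((),         lambda: "vpc-1"),
    "nodes":   (("vpc",),   lambda: "3 nodes up"),
    "join":    (("nodes",), join_nodes),
    "ingress": (("join",),  lambda: "ingress up"),
}

done, results = run(units)           # vpc and nodes succeed; join fails; ingress waits
broken["flag"] = False               # operator fixes the one broken task...
done, results = run(units, done, results)  # ...re-asserts; only join and ingress run
print(results["join"], "/", results["ingress"])  # prints: joined / ingress up
```

On the second run, `vpc` and `nodes` are never redone; only the fixed task and the things waiting on it execute, which is the resume-instead-of-unwind behavior described above.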

This is why complexity is so hard to describe, pulling us a little bit full circle. Complexity is not bad. Everybody’s like, oh, I have too much complexity, I have to get rid of my complexity. I’m going to move everything to Amazon and just use their tools, or I’m going to only buy from this one vendor, or I’m going to use Terraform for all the provisioning. But Terraform doesn’t do some types of provisioning very well, and so they end up looking at it. And so what we’ve done is we’ve stepped back from that. And we started as a bare-metal automation company.

Complexity is not avoidable in bare metal. You can’t say, hey, I don’t think I like RAID controllers anymore, I’m just going to buy giant SSDs and be done with all that. The idea here is that you need to manage complexity. So there are times when you decompose stuff into small units of work, because once the unit is a small unit, it’s reusable and you can track it. And if something changes, your blast radius for that change is small, so you’ve decoupled the actions.

You might have more moving parts, but they’re easier to manage as a unit. And this is the frame that we’ve really been helping people see. It’s not about eliminating complexity, it’s about managing it with structure, as code. Go ahead.

I’m saying you’re introducing the problem that we fail to talk about. I see it because I maybe decided to spend way too much time in business continuity design and stuff, so I have a very systems-thinking approach, always thinking about dependencies and interdependencies and lifecycle, including duration. Right. So what you’re creating effectively is long-running ephemeral infrastructure. It’s the idea that you could rip and replace. However, we also know the pattern of consumption is not to use the stuff like ephemeral, seconds-long containers.

We do not, despite the ability to do so, design applications and infrastructure to be treated like a bunch of cattle that we gun down in the field, apparently, or whatever reference we want to choose. Right. The reality is that I’ve got containers, I’ve got VMs, I’ve got hardware that has to live much longer than what was originally anticipated, to the point where, for the things inside it, we’re looking for clean deprecation options. You are creating the ability to have that long-running yet ephemeral pattern so that you can ultimately get the best of both worlds.

So that when the time does come, when there is some kind of underlying drift or deprecation that needs to occur, you can look at it from the pipeline perspective, which is the right abstraction. The human abstraction is to treat it as a pipeline, and then lifecycle and duration become variables that you apply to that pipeline.

And that’s what’s been powerful for us. Once we started thinking about things as these pipeline segments, it took me some mental lift, because our CTO would be like, no, you’re not thinking about pipelines. And I’m like, what do you mean? I get it, I get it. He kept taking me down the path further and further. And it is about the human understanding of how the pipelines work and the intent. The pipelines have intent, and that’s what constitutes a pipeline. When we talk about a pipeline, it really is like, oh, I need to build a cluster.

Okay, great. That cluster is composed of pipelines that need to build a Kubernetes worker or a Kubernetes leader. And then the cluster’s job is to connect all those things together. So you end up with an intent, and then the intent gets pieced together out of other pieces. And then one of the things that’s fun is you actually end up with standard units in that process. So when you build the pipeline, the difference between the hardware and the virtual pipeline might be a whole bunch of stuff in the middle, but all the stuff at the end is the same, which is amazing.

So now you’re just like, okay, I’ve got the standard, I’m just dropping it in and it’s going to work. And that solves what we have been trying to solve for a long time, which is how do we stop reinventing the wheel every time we have to provision a server? Right?

Yeah.

For us, it matters because we want our customers to be able to repeat success, across every one of our customers. It’s a big deal. Right now, we have a ton of VMware deployment stuff for banks, media and hosting companies, telcos, and so on. So we’re doing a ton of this. But we’ve gotten to a point now where they’re all using the same pipeline. It doesn’t mean they’re using the same hardware or the same network or even the same version of VMware. All those things are extensible, but they’re using the same pipeline.

And so when VMware changes something or we improve something, that pipeline can be shipped to them as a new code unit. Their extensions are against known points, so they can reuse that. And we’re seeing the same thing coming up in the way we’re doing Terraform work and the way we’re doing cloud interfaces. So for us, it’s a customer-to-customer thing. But inside our customers, it’s a team-to-team thing, or a data center to data center, or a cloud-to-cloud fix.

So you can be like, wait a second, I’m going to build a pipeline and use that on Amazon. Right. And then you can say, well, I need to use that same pipeline on Google. We know where the deltas are; that reusability is really important. But then two teams can actually share the components that they can share. That’s the thinking that’s so hard in this, right? The tools aren’t designed for it. We were talking about the Terraform ones. Terraform isn’t designed for people to share their plans. Even if you use Terraform Cloud or Terraform Enterprise, it’s managing the stuff better and letting a team work together.

But the idea of everybody in your company using the same plan, that’s where things get more interesting from our perspective.

You’ve actually created a pipeline marketplace, in effect, where innovation in one area allows you to feed it back and then share it with the rest of the community. Which brings us back to perseverance, the seven-year-and-beyond period. Right. Your vision is being realized now because you had this. What you needed to do is get people to come along for the ride, and then the network effect sort of begins to come in. It’s a really difficult thing with customers one through ten to get them to see that down the road.

And so there’s some stuff you don’t know, right? As you said.

This is a matter of laser focus, because it’s been super hard from the start. My co-founder and I wanted to build a software company, not a consulting or services company. Because what we heard really clearly is that nobody feels like they’re improving their business by installing RAID and BIOS configuration and laying down operating systems. Like I said, this is something that the industry should just have working. It shouldn’t be a creative exercise at any company, and there’s no business value created by doing it in a creative way.

But that’s the way it’s been for the whole time I’ve been in the industry. And we could have taken our expertise in those areas, because we know more about RAID and BIOS configuration and PXE booting servers than really anybody; I’d stand up my team against anybody. But selling those hours would have done no good. And we walked away, and it made our journey as a company harder. We walked away from, hey, can you just build something for me in my data center so that I can do this better? And we would come back and say, no, that’s not what we do.

We have a software platform and a product, and it does it this way, and it will benefit you if you adopt it. And we had plenty of customers. There was a $1 million account where we were basically like, we’re not going to patch your Cobbler infrastructure for you. You can’t pull the plug on it, it runs 100,000 servers, and we’ll help you migrate it. But we’re not going to fix it for you, because fixing it would have entrenched you in this bad pattern. And, yeah, from a startup perspective, that was being true to: we’re doing software, with repeatable patterns that can become a marketplace, and sharing what we usually talk about as curated content.

That’s the value, rather than going in with people in parachutes into your data center and fixing it so that your 20-year-old infrastructure designs can live another five years.

Something Cloud.

For you only. We saw this with the application development pattern with the team at Cloud Foundry. They said, let’s go in as a pattern development and coaching program, and so it’s far more consulting heavy. And as a result, how many times have you seen a BOSH implementation lately? Because they didn’t lead with product and then use consulting as a secondary revenue stream. In fact, the best thing you’ve done is said no. You could genuinely make money by putting consulting hours in and pulling together a SWAT team of people and growing this whole stable of consultants.

But what you’re doing is delaying the inevitable, and you’re empowering them to do things that are counter to the vision that you have. End result, you survive, you persevere. And on the other side of it, people are like, this is it. It actually works, and it’s always worked. It’s just that now they’ve got social proof and customer proof, right? The NASCAR slide is now something people can point to: okay, well, if Company X is doing it, then I’d better get on this train. On business value, there’s almost something I want to do for any super technical startup founder.

I’m like, you almost want to do a spoof of, like, a B of A quarterly investor call. It’s never Jamie Dimon getting on saying, yes, this week we updated the RAID firmware on all of our servers on our private Cloud, and it’s gone very well; we’ve got a strong group of folks working on it. No, they talk about the business outcomes. And then for the stuff that has to happen underneath, you’ve got a choice of how you’re going to let it happen.

Are you going to let the Cloud drive you, or are you going to create the Cloud that you’re delivering? This is what Alex Polvi talked about with GIFEE, right? You’ve done it.

Yeah, that’s right. It’s one of those slow, methodical things, focusing, for us, on customer autonomy at the end of the day. But, yeah, it’s hard. It is definitely a journey. It’s fun to watch customers pick it up, by the way, and then see it spread virally inside of an organization, which we typically see. Or we had a customer like, all your stuff was working great, we usually don’t have any trouble with any of your stuff, but we’re seeing something. And a couple of hours later, they’re like, oh, yeah, we had some configuration on our end. But you help them through that.

And the fun thing is when they’re autonomous in that perspective. But it’s the opposite of what a lot of people are doing right now. They’re all telling you to outsource. They’re all telling you to use a managed service. We’ll take over. We’ll run your data center for you. The pitch of, hey, if Kubernetes is too hard for you to understand, let us do that for you. It’s a good business model for people, right? Yay. But we saw this with OpenStack, and it was really bad. The idea that our software is too complex for somebody to learn how to use.

So just let us take it over. That’s our new business model: we’re going to keep it complex so that you don’t have to worry about it. That’s not a growth model; the industry isn’t going to grow that way, especially with edge and things like that coming in. Right. We could spend a whole hour on this, thinking through what it would look like if we had small data centers in everybody’s house or in a municipality, and what it would take to make that stuff go.

That’s game changing. All this cloud stuff, it’s great. It’s amazing. It’s powerful, and people should use the heck out of it. But at the end of the day, be careful about the autonomy that you’re losing, in a lot of cases without even realizing it.

True that. I’ll tell you my one closing thought on complexity, and I don’t mean to make fun of the folks at Microsoft, because Microsoft Ignite, of course, is happening as we’re recording this, which is actually going to go live fairly rapidly. I saw the Tweet, and it had this thing. It was like Azure Arc deploying Kubernetes on vSphere, and I was like, wow, it’s just a list of things that I would love to do as a science experiment, but nothing I would want to run in production. However, there’s a thing, so bless them for gluing together a lot of bits, but there’s a reason the patterns are out there.

In the end, one thing that we need to do is treat Cloud as a practice and treat infrastructure as a commodity. And like I said, it’s beautiful to see it realized in what you’re doing. And the treat is that as we close up this part of the podcast, I get a real live demo of this stuff. But we should definitely get you out more and more. You’ve got such a fantastic audience as well; Cloud 2030 is amazing. It’s really wild to see how that’s continued to gain momentum.

At first I remember telling people that I know Rob Hirschfeld. It didn’t take long, because of your reputation and the respect you’ve gained in the industry for asking the right questions, when sometimes people are a little afraid to hear the answers. The fact is you’ve done it, and people realize you’re there for the solution, not just the guy that asks the questions.

You’ve just defined what Cloud 2030 is all about in very succinct terms. It’s asking questions when we’re sometimes afraid what the answers will be.

And it’s great to see that more and more. As I bump into folks and say, yeah, you need this stuff from RackN, they’re like, oh, Rob Hirschfeld, right. Yeah. All right. The association is there, and the respect is earned in what you’re doing, which is cool. So I’m glad, and one day we’ll do some more work together; it would be neat to pair up on more stuff like this. It’s been great. So with that, Rob, what’s the best way, if people do want to find out more, of course, about RackN, Rebar, all of the things, and Cloud 2030? We’ll have links for folks that want to get signed up, but how do they reach you?

I am very consistently Zehicle, Z-E-H-I-C-L-E. It goes back to my electric car days. For some reason people don’t like Zs in handles, but I’ve been very happy with it. So you can find me pretty much everywhere; I’m very active on Twitter, and that’s a great place to interact. RackN is rackn.com, and at this point that’s the best linkage point to get to everything Digital Rebar if you’re interested. And for Cloud 2030, the2030.cloud is the website, so you can catch up on episodes or see what the schedule is.

We stay about four weeks ahead if you want to help pick topics, but just drop in and it’s a discussion. It’s a hallway track. They’re just amazing.

Yeah.

That’s what we desperately need.

And the funny thing is the people that you meet in that hallway. I’ve met them in other commercial opportunities now, and it’s hilarious to see that it really and truly is a small world. And this is why you see repeated voices come up; then you see them on Twitter, and then you see them in other engagements. This is community, the real, true community. This is not about patting ourselves on the back because we built one thing well. It is really about finding people that are in a community of practice.

We are practitioners of things. I’m not team OpenStack or team Kubernetes or team VMware. I am team people doing fantastic things with infrastructure and applications. And as a result, community truly transcends the ecosystem that we maybe were born in or lived in at the time. It’s kind of cool to see it all.

Yeah. After 20 years, I’ve seen these products come and go and come back again. Patterns and the people. And sadly, some of the problems that we solve haven’t changed too much.

It’s the old joke, right? Build a better mousetrap; at least that used to be the way to build a company. And there are, like, more patents for mousetraps than for almost anything else in the world. And in the end, you go to Home Depot or Lowe’s, or Home Hardware if you’re Canadian, and what do you find? A slab of wood with a spring on it and a place to put cheese.

The most simple possible thing is really the best thing for it. But, hey, we’re going to create disaggregated hyperconverged mousetrap infrastructure somewhere. And in the end, just grab a piece of wood.

With blockchain.

Exactly. Awesome. All right. There you go. Rob Hirschfeld, episode 200. Thank you for celebrating 200 amazing and fun conversations; I hope to have many more. So I’m going to have you on for the 300th. Just giving you the heads up right now, so mark your calendar, however long it takes to get 300 more of these. We’re going to do this again.

I’ll be in my walker, and we’ll make it happen.

Right on.