Voices of Video

From Broadcast Beginnings to 8K Streaming Prowess

NETINT Technologies Season 2 Episode 3

What if you were among the first to pioneer live streaming, shaping the future of digital content delivery? Join us as we chat with Stef van der Ziel, the visionary behind Jet-Stream, who takes us on an incredible journey from the early days of internet broadcasting in 1994 to becoming a powerhouse in the streaming industry today. Stef shares the remarkable transformation of Jet-Stream, recounting how they achieved a 40% TCO reduction for clients and 100% uptime while integrating with nine different CDNs and supporting 8K streaming. Discover the secrets behind Jet-Stream's technology, which claims to be 430% faster, and how they revolutionized customer provisioning with their proprietary software, Video Exchange.

Stef's story is not just one of technological advancement but also of strategic growth and adaptation. From building temporary CDNs for large-scale events to launching their own Jet-Stream Cloud, this episode highlights Jet-Stream's bold transition to a SaaS model, delivering unparalleled speed and reliability for high-definition streams. If you're fascinated by cutting-edge video streaming technology and the visionary minds driving it forward, this is an episode you won't want to miss. Stef's insights and experiences provide a captivating glimpse into the ever-evolving world of streaming, showcasing how innovative solutions can redefine an entire industry.

Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.

Speaker 1:

Voices of Video.

Speaker 1:

Voices of Video. Today I am joined by Stef van der Ziel, the founder of Jet-Stream. And let me say this about Jet-Stream: if you haven't heard of them or you're not familiar with the company, they have some really interesting numbers that will probably pique your interest, and some of these I know Stef is going to talk to us about today. Imagine this: a 40% TCO reduction is what they're able to give their clients, the users of their platform. They have 100% uptime. They have been GDPR audited, which is very significant, especially if you're streaming and delivering content in Europe. They're 430% faster. Now, this one, even I had to raise my eyebrow a little bit and go, hmm, let's see how they do that. So I know Stef is going to get into the details about how the system is built and what that actually means. They support 8K, which is pretty unique. And they have integrated with nine CDNs. So that's a little overview of Jet-Stream. Stef, welcome to Voices of Video.

Speaker 2:

Mark, thanks for having me on your show. Really cool.

Speaker 1:

So I have to ask you a question: did you really invent streaming, as you claim? Tell us the story.

Speaker 2:

That's the claim. Yeah, we produced our first live stream in 1994, so that's quite early in the internet days. Most people did not even know about the internet, and we were doing live streaming. It wasn't full HD or 4K or 8K; it was a postage-stamp-sized picture at one frame per second, but it was live. And we don't know of anyone who was before us. So yes, we were the first, as far as we can claim.

Speaker 1:

Give us a short overview of the company. How did you jump from 1994 to 2023? And then we'll get into what I know is a very interesting use case.

Speaker 2:

Of course, in 1994 it wasn't a company. We started the company, I think, in 2003, after pioneering for many years with live streaming. We broke down the internet here in the Netherlands by having so many viewers; we overloaded it. And so in 1997 we built our first CDN, a temporary CDN, to do a large-scale live event, and we tore it down after the event was over; then for the next event we built up a new CDN. That's how we did it back then. At first we were in production: we were really going with cameras and encoders to locations, doing the entire streaming.

Speaker 2:

But at one point I said: that's not really scalable. Let's build a streaming platform. We have so much knowledge and experience now; let's build it as a permanent platform. And that's what we did, and I think the company grew 1,800% in the first five years. So it was a boom.

Speaker 2:

Streaming became hot when people got broadband; basically, that's how it started. And then we started to develop our own software, because we were running Windows Media, Real, Icecast and QuickTime streaming servers and manually helping customers. Because of our size, we needed to develop something that automated provisioning: customer provisioning, live stream provisioning. So that's what we developed, a piece of software we called Video Exchange, because we could exchange videos with customers. It did workflow orchestration and edge server deployment and all that stuff. It was really cool technology, and we started to license it out to telcos so they could build their own CDNs.

Speaker 2:

But after a few years we decided to go SaaS all the way: just host our software and scale it up. Where we are today is that we run our own cloud, called Jet-Stream Cloud, of course, and yes, it's 430% faster. We tested customer streams, full HD streams, over multiple clouds and CDNs against our own cloud. For the video chunks of one, two or four seconds, for instance, the download time from our cloud is more than 400% faster compared to regular clouds, because we're not a generic cloud; we are an optimized streaming cloud. We don't use virtualization, the network stack is optimized, and we have hardware-software integration with the caches, so everything is tuned to burst out those video chunks as fast as possible. At one point you can ask: what's the point of delivering video chunks at 50x speed instead of 25x or 5x? But once you go into 8K territory, that's going to make sense.

Speaker 2:

You want to be able to burst out very large pieces of video to those users. Another benefit is that when you can burst out those video chunks to users faster, the video players, which analyze how fast they get the chunks in, will see higher throughput. So the chances are higher that you go to the highest bitrate in the ladder compared to other clouds and CDNs.
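The player behavior Stef describes can be sketched in a few lines: standard HLS/DASH players estimate throughput from recent chunk downloads and pick the highest rung they can sustain. The ladder bitrates, chunk sizes and safety factor below are illustrative assumptions, not Jet-Stream's actual values.

```python
# Sketch of throughput-based ABR rung selection, as many HLS/DASH players do it.
# All numbers are illustrative, not measured values from Jet-Stream Cloud.

def measured_throughput_bps(chunk_bytes: int, download_seconds: float) -> float:
    """Estimate network throughput from the last chunk download."""
    return chunk_bytes * 8 / download_seconds

def pick_rung(ladder_bps: list[int], throughput_bps: float, safety: float = 0.8) -> int:
    """Pick the highest bitrate the player can sustain, with some headroom."""
    usable = throughput_bps * safety
    candidates = [b for b in sorted(ladder_bps) if b <= usable]
    return candidates[-1] if candidates else min(ladder_bps)

ladder = [800_000, 2_500_000, 5_000_000, 12_000_000]  # bps, illustrative ladder

# A 6 MB, 4-second chunk burst out in 0.5 s looks like ~96 Mbps to the player.
fast = pick_rung(ladder, measured_throughput_bps(6_000_000, 0.5))
# The same chunk trickling in over 3.5 s looks like ~13.7 Mbps.
slow = pick_rung(ladder, measured_throughput_bps(6_000_000, 3.5))
```

With the bursted download the player settles on the top rung; with the slow one it drops two rungs, which is exactly the effect faster chunk delivery has on the ladder.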

Speaker 1:

I want to get into a use case that I know that you're building solutions around and deploying. That, I think, is pretty fascinating. And it's this whole idea of a localized OTT network. You know, when I first heard the phrase from you and of course we're collaborating on a couple projects I drew my own idea of what it actually meant. But it turns out there's quite an interesting application. So you know, first of all, tell us what is a localized OTT network and you know, why does it exist?

Speaker 2:

It's funny, because when you talk about OTT, you talk about world domination, right? You have Netflix and HBO conquering the world and reaching every piece of every continent, and that's what we try to help our customers with as well, because we have all these CDNs on board, so we can deliver into every continent, and if one CDN doesn't perform, we can switch over. But there are some gaps there. That is global streaming, but there's another market, and that's localized markets. What about really local, remote areas, like compounds in the desert? You can have a thousand or two thousand households in a location, but the backhaul connection can be really poor, and still those people want to be able to watch a decent OTT stream. And it's not just that use case; it's also offices, hospitals, hotels, holiday parks, university dorms. People want to watch television on their cell phone or iPad. If ten people are doing that, that's fine for the local network, but what if thousands are doing this? And what if the backhaul isn't strong enough to even get, say, 80 high-quality full HD streams in? Then you need local encoding, local transcoding and local serving, and that's what we developed. We call this deep edge OTT. It's an appliance, and we built it together with your technology, with NETINT, and it's a one-rack-unit server, so you don't have to buy five huge servers with CPU encoding anymore. It's hardware accelerators with our software, and it's got three components: it's a live encoder, it's a transmuxer to create HLS and DASH, and it's a deep edge cache. So it's a local cache, which you can actually use to serve out the streams to the local users; you don't need an extra cache or CDN there.

Speaker 2:

And the challenge in those locations is that it's not just a remote location with a poor internet connection; most of the time there is not much local physical capacity and power either. So we needed this solution. We actually had a customer come to us who said: we have these compounds and gated communities in our countries, and we need local IPTV there, hundreds of channels per location. And if you talk about converting hundreds of streams on a CPU basis...

Speaker 2:

You're talking about deep investments in hardware and CPU power. That's right, but also in energy consumption. The solution we created can convert 80 live HD channels with just this one chassis, which is pretty cool, some 20 times more efficient compared to CPU encoding, and it can serve up to 10,000 concurrent viewers with just this machine. I think one of the most important USPs is savings. We've been talking to these customers, and they said: yeah, but if we need to buy this hardware, all these servers with CPUs, there is no business case. We will spend like 80K per location, and the energy bill will be even higher after so many years.

Speaker 1:

Yeah, that's right.

Speaker 2:

I mean, with current energy prices, that's really a challenge. I think after three or five years you're spending more on energy than on the physical encoder, and that's a problem. So the cool thing about this solution is, and I'm just reading it: 72% lower CapEx. So per stream, you spend 72% less on investment for encoding that stream, and that's just the hardware; we did not even take software licensing into account. And it's using 89% less power compared to CPU-based encoding, which is also really impressive. And the physical footprint is just a one-rack-unit server compared to a stack.
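The energy argument is easy to sanity-check with back-of-the-envelope arithmetic. The wattages, electricity price and server counts below are illustrative assumptions; only the 72% CapEx and 89% power figures come from the conversation.

```python
# Rough cost model for the CPU-stack vs. ASIC-appliance comparison above.
# Assumed inputs: 5 CPU servers at ~400 W, 80k EUR CapEx, 0.40 EUR/kWh.

def energy_cost_eur(watts: float, years: float, eur_per_kwh: float) -> float:
    """Cost of running a device 24/7 for the given number of years."""
    hours = years * 365 * 24
    return watts / 1000 * hours * eur_per_kwh

cpu_capex = 80_000                                  # EUR, per the episode
cpu_energy_5y = energy_cost_eur(5 * 400, 5, 0.40)   # five ~400 W servers, 5 years

# Apply the quoted relative savings: 72% lower CapEx, 89% less power.
asic_capex = cpu_capex * (1 - 0.72)
asic_energy_5y = cpu_energy_5y * (1 - 0.89)
```

Under these assumptions the five-year energy bill of the CPU stack is tens of thousands of euros, the same order of magnitude as the hardware itself, which is the point Stef is making about energy prices.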

Speaker 2:

This is what the interface looks like; I'm not sure if you can see it. On the left side you have a config page where you can create those channels. You can get MPEG transport streams in, or SRT or RTMP, or you have a config file, a JSON, which you can remotely deploy on the server to update the channel configurations. And on the right side it's got a built-in video player for HLS and DASH, so you can preview whether the stream is OK, and there's a URL which you can publish in a middleware service or a portal or wherever you'd like. Normally, if you start trading off quality for power consumption or cost, you go down in quality. But we checked it against the CPU encoders we already have, and the hardware encoding quality was up to par with the software encoding, which is really cool. And it can go up to 4K encoding, so you can get ultra-high-definition quality out of it. As for standards: we will use H.264 because that's widely adopted, but it can also do H.265, which is even more efficient. And the latency is really small: we've seen sub-three-frames latency in the transcoding, while with CPU encoding you can talk about seconds. Of course, with software encoding you can do some tuning, like lookahead, but then the latency will just explode. So yeah, I think this is a really cool solution for live OTT. There are actually three components in there. The first is that everything we do, we build in Kubernetes and Docker, in container environments, because we run our own cloud and everything we build on our cloud is containerized with Docker. So you have isolated, self-restarting services, which means low maintenance and high uptime, and we have our software encoding services running on top of that, within the Kubernetes environment. And then we use the NETINT acceleration cards, the ASICs, to do the real high-performance encoding, and that's a great combination.
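To make the remotely deployable channel JSON concrete, here is a minimal sketch of what loading and validating such a file could look like. The field names and schema are purely illustrative assumptions; Jet-Stream's actual config format is not public in this episode.

```python
import json

# Hypothetical shape of a deep-edge channel config JSON; illustrative only.
CHANNELS_JSON = """
{
  "channels": [
    {"name": "news-hd",   "input": "srt://10.0.0.5:9000",  "ladder": [5000, 2500, 800]},
    {"name": "sports-4k", "input": "rtmp://10.0.0.6/live", "ladder": [16000, 8000, 3000]}
  ]
}
"""

def load_channels(raw: str) -> list[dict]:
    """Parse and minimally validate a channel config before deploying it."""
    cfg = json.loads(raw)
    channels = cfg["channels"]
    for ch in channels:
        # Only ingest protocols the appliance is said to accept.
        assert ch["input"].split("://")[0] in {"srt", "rtmp", "udp"}, ch["name"]
        # Ladder must be listed highest-first so rung 0 is the top quality.
        assert ch["ladder"] == sorted(ch["ladder"], reverse=True), ch["name"]
    return channels

channels = load_channels(CHANNELS_JSON)
```

Validating before pushing matters here precisely because the file is deployed to unattended remote sites: a malformed channel entry should fail centrally, not on the appliance.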

Speaker 2:

These are the schematics; I can put them on full screen for you so you can see them better. On the bottom there's just an open x86 chassis with ten of those ASIC cards, and I believe they're using just around 7 watts per card, so the energy use is really low. On top of that we run the operating system and Kubernetes, and then every transcoding process is an isolated container. So if one crashes, it doesn't bring down all the other channels, and if it crashes, the Kubernetes system automatically restarts it within the buffer window of the user. So you don't even see that there was a restart or a crash or whatever. It's really rock solid.
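The per-channel self-restarting behavior Kubernetes provides can be sketched as a small supervisor loop: run the channel process, and if it exits abnormally, restart it with backoff. The command shown is a placeholder, not the real transcoder binary.

```python
import subprocess
import sys
import time

# Minimal sketch of a per-channel restart policy, like Kubernetes applies to
# each isolated transcoding container. The supervised command is hypothetical.

def supervise(cmd: list[str], max_restarts: int = 3, backoff_s: float = 1.0) -> int:
    """Run cmd, restarting it on failure; return how many restarts were needed."""
    restarts = 0
    while True:
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return restarts
        restarts += 1
        if restarts > max_restarts:
            raise RuntimeError(f"channel gave up after {max_restarts} restarts")
        time.sleep(backoff_s)  # crash-loop backoff before retrying

# Demo with a trivially succeeding "channel" process:
restarts_ok = supervise([sys.executable, "-c", "pass"])
```

The key property from the episode is that a restart completes within the viewer's player buffer, so the crash is invisible; the isolation means one crashing channel never takes down the other 79.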

Speaker 2:

It's very solid, very stable, yeah. And there's an edge cache running on top of it, and then of course we have some API and GUI stuff running on top of that. So it's like a hybrid solution between hardware accelerators and this software stack. And this is basically the process; I can put it on full screen as well. On the left side you have transport streams, MPEG-2 or MPEG-4, or SRT or RTMP, coming in. We scale in software, then we decode and encode in hardware, and the packetizing is done in software again. The chunks and the manifest files are stored on local storage, with a DVR window if you need it, and then there's an edge cache. Actually, that's two, for redundancy reasons, with a lot of streaming optimizations for live, like thundering-herd protection and smart caching.
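The ingest-to-cache flow can be modeled as a tiny dataflow sketch: the packetizing stage turns each encoded segment interval into one chunk per ladder rung, and the manifest lists them. The data structures and naming scheme below are illustrative stand-ins, not the real implementation.

```python
from dataclasses import dataclass

# Toy model of the software packetizing stage: ingest -> (hardware transcode,
# not modeled here) -> one chunk per ladder rung -> manifest on local storage.

@dataclass
class Chunk:
    channel: str
    sequence: int
    rung_kbps: int
    duration_s: float

def packetize(channel: str, sequence: int, ladder_kbps: list[int],
              chunk_s: float = 4.0) -> list[Chunk]:
    """Produce one chunk per ladder rung for a segment interval."""
    return [Chunk(channel, sequence, kbps, chunk_s) for kbps in ladder_kbps]

def manifest(chunks: list[Chunk]) -> str:
    """Tiny stand-in for an HLS/DASH manifest listing the available chunks."""
    lines = [f"{c.channel}/{c.rung_kbps}k/{c.sequence}.ts" for c in chunks]
    return "\n".join(lines)

chunks = packetize("news-hd", sequence=42, ladder_kbps=[5000, 2500, 800])
m = manifest(chunks)
```

The edge cache then serves these chunk paths directly to local viewers, which is why no extra CDN is needed on site.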

Speaker 2:

Yeah. So it's one solution that can just do this, and what I like about it is that you can cluster it. You can have multiple of those servers in one site, because if you go from 80 to, say, 160 channels, you can just add a server and it will automatically balance the load. That's the beauty of Kubernetes. It will just say: oh, there's a new server, let's share the load of the encoding, but also share the load of the viewers over the machines.

Speaker 2:

And if you put in three, you have a high-availability service, so one machine can entirely break down and everything will just automatically keep working, and that's nice. I mean, you don't want to need an operator in every hotel facility who has to restart the machine.

Speaker 2:

Yeah, that's right. So it's got scalability and high availability. And then we also thought about orchestration, because if you have one location, that's fine, but what about two, or five, or 20, or 100? We can monitor it from a central facility, from our cloud, so you can have all these satellite encoding locations, and then centrally you can deploy the JSON file for the configuration. So if you need to add new channels, you don't have to call the hotel and say: hey guys, just add this channel and put in this RTP link. No, you just update the JSON file, push it, and boom, there is another channel.
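The central push of an updated channel JSON to each satellite site can be sketched as building one deploy request per appliance. The endpoint path, hostnames and payload shape are all assumptions for illustration; only the "push a JSON to many sites" idea comes from the conversation.

```python
import json
import urllib.request

# Sketch of central orchestration: build a config-deploy request per deep-edge
# site. Hostnames and the /api/config endpoint are hypothetical.

def push_config(site_url: str, channels: list[dict]) -> urllib.request.Request:
    """Build the deploy request that updates one site's channel lineup."""
    body = json.dumps({"channels": channels}).encode()
    return urllib.request.Request(
        f"{site_url}/api/config",
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

sites = ["https://hotel-1.example", "https://compound-7.example"]
new_channel = {"name": "movies-hd", "input": "srt://10.0.0.9:9000", "ladder": [5000]}
deploys = [push_config(s, [new_channel]) for s in sites]
# urllib.request.urlopen(deploys[0]) would send the first one; not executed here.
```

One fan-out loop from the central cloud replaces a phone call to every hotel, which is the operational win being described.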

Speaker 1:

That's the idea, there it is.

Speaker 2:

Wow. And then we thought: okay, we can go even further, because the machine is logging, creating log files, and we can actually push those log files as a stream of data to our cloud and process them into statistics. Of course, we have an Elastic system running in this cloud, processing and crunching, going through all the log files and making sense of the sessions. So then you can have reports like: how many views do I have per location, what's the most popular channel, what's the average viewing time, and from which cities or countries are people watching those streams? All the data is there as well. So you can think of new business models: okay, which channels do I not want to pay for anymore because they're not watched, and which channels should get more. You can build those kinds of cool things.
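The reports mentioned here reduce to simple aggregations over per-session log records. The log format below is an illustrative assumption; in practice this is the kind of rollup the Elastic pipeline would produce.

```python
from collections import Counter

# Illustrative per-session records, as might be shipped from an appliance.
LOGS = [
    {"site": "hotel-1",    "channel": "news-hd",   "seconds": 600},
    {"site": "hotel-1",    "channel": "sports-4k", "seconds": 5400},
    {"site": "compound-7", "channel": "news-hd",   "seconds": 300},
]

def views_per_site(logs: list[dict]) -> Counter:
    """Report: number of viewing sessions per location."""
    return Counter(entry["site"] for entry in logs)

def top_channel(logs: list[dict]) -> str:
    """Report: the most popular channel by session count."""
    return Counter(entry["channel"] for entry in logs).most_common(1)[0][0]

def avg_viewing_s(logs: list[dict]) -> float:
    """Report: average viewing time per session, in seconds."""
    return sum(entry["seconds"] for entry in logs) / len(logs)
```

The "which channels do I stop paying for" decision is then just the tail of the channel counter: anything with near-zero sessions over a billing period.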

Speaker 2:

And finally, last but not least, we can actually put it into a mix. Let's say you have two or three CDNs serving out capacity. We can add those machines at those locations to have extra local capacity, so deep edge capacity in certain locations. If traffic comes from a certain hotel, or from a compound, or from a location like an office, we can recognize down to the IP address that the traffic is coming from that location and then prioritize traffic through the deep edge. So it's not just the streams from the local machine that it can serve out; it can also become an edge for the cloud and for the CDNs. We can have a localized edge server running there, and we can also offload internet streams and on-demand videos to it. So yeah, that's the idea.
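Recognizing a request's site by its source address and preferring the local appliance is a straightforward longest-prefix style lookup. The site prefixes and edge URLs below are illustrative assumptions (documentation address ranges, hypothetical hostnames).

```python
import ipaddress

# Sketch of IP-based deep-edge routing: if the client is inside a known site's
# address range, serve from that site's appliance; otherwise fall back to CDN.
SITES = {
    "hotel-1":    (ipaddress.ip_network("203.0.113.0/28"),  "https://edge.hotel-1.example"),
    "compound-7": (ipaddress.ip_network("198.51.100.0/24"), "https://edge.compound-7.example"),
}
CDN_URL = "https://cdn.example"

def route(client_ip: str) -> str:
    """Return the deep-edge URL for a known site, else the CDN URL."""
    addr = ipaddress.ip_address(client_ip)
    for _, (net, url) in SITES.items():
        if addr in net:
            return url
    return CDN_URL
```

This is also what lets the appliance double as an edge for the cloud and CDNs: the routing decision is per request, so local-only channels and offloaded internet streams can share the same box.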

Speaker 1:

Super powerful, and there's a lot to unpack in what you just presented. A fabulous overview, by the way. One thing to point out, and I think everybody got this, is that we're talking about 24/7 live channels, so this is linear streaming. This would be impressive even if it were file-based VOD. But can you support both VOD and linear live on the same system, or do you handle that differently? How do you handle file-based delivery?

Speaker 2:

This deep edge OTT solution was built for live, so the use case is: we have so many live channels from satellite or cable, and we need to turn them into OTT streams like HLS and DASH. But of course, for the transcoding part, we also do VOD transcoding in our cloud.

Speaker 1:

That's right, because of the cloud DVR functionality?

Speaker 2:

It's not just DVR; it's also live-to-VOD. We have customers recording live streams and then offering them as on-demand videos. We also have a lot of customers in our cloud who are not doing live streaming at all. They just have an on-demand video library, like an OTT service or a marketing video library for an enterprise, and they want to upload those videos, which have to be transcoded as well, and we use the same technology stack for this.

Speaker 2:

So we started with software. We have our own cloud with CPUs, and we thought: okay, if we build this transcoding software, we can utilize the CPU power in the cloud. But we will switch to hardware transcoding, or at least a lot of the transcoding will be done by hardware in the future, because it's so much more efficient and scalable compared to CPU-based encoding.

Speaker 1:

I was going to ask you about your journey from CPU and software-based encoding to hardware. Maybe you can give some more insight into what really drove that. In our talks with the market, with the ecosystem, we find the usual: a lot of people just say it's purely cost, purely our operational costs. But there are some other factors as well, and I'm curious: for you, was it purely an economic decision, or were there other factors that drove you to explore hardware and ultimately land on ASICs?

Speaker 2:

As a software house, of course, hardware is... we don't like appliances.

Speaker 1:

Exactly. You're virtualized, everything's virtualized. So it's like, well, what are we going to do with hardware?

Speaker 2:

Actually, about ten years ago, we went to a show in London where a lot of vendors were trying to sell their appliances to the telco industry to build CDNs. With a stack of Cisco appliances, it would cost you like a million euros to build out just one PoP. And I went on stage and said: hey guys, I have news for you. The appliance is dead, because now we have software controlling everything, and software edges, so forget about this. One half of the audience turned white, because those were the vendors, and the other half was like: okay, let's have a meeting, let's talk. But things flow back and forth, and now we're like: okay, there are some limitations in software, if you talk about scalability and cost. So our arguments were not just the cost factor but also scale, which in the end, of course, is cost again. But it's about being able to do more VOD transcoding and, especially, more live. I mean, VOD transcoding is not even that hard, because it's not linear; it doesn't have to be real time. It can be faster than real time, but not necessarily. But for live streaming, to have capacity for customers who say, "I need to start a live stream right now and it has to be transcoded to four or five bitrates," you need to have a lot of headroom in your cloud.

Speaker 2:

And we were like: okay, if we do that in software, we will have to buy a lot of equipment which will just be eating dust 90% of the time, and it will use a lot of energy. We want to be as green as possible, not just with green energy but also with a small footprint. So we thought: if we use these cards, we get much better scale, and we don't have to overinvest in infrastructure for those few days a week when customers peak with a lot of concurrent live encoding streams. Today, basically all internet streaming is H.264. As soon as you need to go to H.265, which is not that popular, or to AV1, for instance, which we think will be a dominant encoding format in the future, you have to throw a lot of CPU power at it, which basically breaks the business case for AV1. With hardware accelerators like the ASICs, it instantly becomes a feasible business case.

Speaker 1:

So, a very interesting point that you make. We talk to many of our customers, and to companies who are seriously evaluating hardware, particularly ASICs, and behind closed doors a lot of them do reflect that three to five years ago, if we'd approached them, they would have flat-out said: go away, hardware, good luck. But now they're very engaged and leaning forward. And what's really fascinating is that it's not just a temporary situation of, for example, energy costs, or the CEO mandating "we need to trim 20% from the budget, so how are we going to do that?" Yes, those things sometimes start the conversation, but there are so many more benefits. And like you point out about next-gen codec support: the industry has gotten to a point where the added complexity of the next generation of codecs, say VVC over HEVC, or AV1 over VP8 and VP9, the VPx codecs, is so significant. On day one you never get the promised 50% savings in bitrate; it's more like 20 or 30%, and in time the codecs get optimized. But even in the best-case scenario, where you could get 50% savings on day one, the added compute cost from the complexity of the codec negates all those savings, and in some cases then some.

Speaker 1:

And so it's this real conundrum that the industry is in. On one hand, we always want to be moving to more efficient codecs and pushing our bitrates down; resolutions are only going up, and customer quality expectations are increasing. And yet many services and platforms are kind of stuck. But hardware, in particular ASICs, is the breakthrough for that. And so, yeah, we are seeing it: I know of two file-based, very high-profile premium streaming services right now that are on the cusp of deploying hardware. And it's in a use case where I think a lot of traditional streaming engineers might say: well, no, you can use CPU for that. And guess what their conclusion is: no, we can't.
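The "compute negates the bitrate savings" conundrum is easy to put into numbers. Every figure below is an illustrative assumption (delivery price, traffic volume, per-channel encode cost, complexity multiplier); the point is the shape of the trade-off, not the specific values.

```python
# Toy economics of moving a live service to a heavier next-gen codec on CPU.

def monthly_delivery_saving(eur_per_gb: float, tb_per_month: float,
                            bitrate_saving: float) -> float:
    """Delivery cost avoided thanks to the more efficient codec."""
    return eur_per_gb * tb_per_month * 1000 * bitrate_saving

def monthly_extra_compute(eur_per_channel: float, channels: int,
                          complexity_multiplier: float) -> float:
    """Extra encode cost when the new codec needs N times the CPU."""
    return eur_per_channel * channels * (complexity_multiplier - 1)

# Assumptions: 0.02 EUR/GB delivered, 500 TB/month, a realistic 25% day-one
# bitrate saving, 100 channels at 200 EUR/month each, and ~5x CPU complexity.
saved = monthly_delivery_saving(0.02, 500, 0.25)
extra = monthly_extra_compute(200, 100, 5)
net = saved - extra
```

With these assumptions the extra CPU bill dwarfs the delivery savings, which is exactly why fixed-function ASICs, whose per-channel cost barely grows with codec complexity, flip the business case.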

Speaker 2:

You know, if you have to encode a film once and then you can stream it to millions for years, then of course the business case for high-cost encoding is better. We have a lot of customers in niches who need very high quality but don't have those large audiences, so there's no business case for the encoding if you do it in software. That's right, there will still be a market for software encoding; the tuning you can do in software will be better. When we first tested those cards, the first thing we did was look at the quality. Is it really good? If the quality is average or poor, then it's substandard, and we wanted high-quality encoding. So we were very pleased with the quality. I mean, GPU encoding is bad compared to CPU encoding, but with the ASICs it's in the realm of CPU-based encoding, which is cool.

Speaker 1:

Just to reinforce that point, because you have seen this data, but perhaps not all of our listeners have. What's also so important when we're talking about software versus hardware is, like you said: if you have the benefit of encoding a file once and then serving it literally hundreds of millions of times, well, there are a few platforms in the world that have that benefit. But pay close attention to what I just said: a few platforms in the whole world. Netflix, of course, being at the top, and the other ones we all know.

Speaker 1:

But what's super interesting is that when you go to live, when you're encoding at slow, or medium, or fast, or faster, or fastest, the various preset levels, then by quality metrics our live performance is on par with x265 medium. And that's live.

Speaker 1:

And there are very few services that could afford to run x265 on CPU at medium, just because of the compute; you're going to need a lot of cores to do that, so you're not going to get the density. True, if you're only running a single channel, I suppose you could say: okay, fine, I don't care, I'm just going to put a big AMD 64-core machine there and everything's good. But what's super interesting is that the ASIC is able to achieve x265 medium-preset quality, but live, and provide a 20% bitrate reduction on top of that. So not only can it reach the quality, but it delivers 20% fewer bits at the same time.

Speaker 2:

And is that H.265 encoding as well in the ASICs?

Speaker 1:

That's right, I'm talking about HEVC. So I also think that's an important reference point. But let's get back to talking about what you built. What is next? This is a very interesting use case; I know about the types of projects you're deploying it into and the regions of the world, and it absolutely makes sense, and I know you're going to do very well with this solution. But how are you going to grow it? How are you going to expand? What's on the horizon? Are there other use cases, other applications, either well-known or maybe novel, that you're targeting?

Speaker 2:

We never foresaw that we would actually develop an appliance, because that's not our business; our business is SaaS, right? But this customer came to us, and we said: this is an interesting use case, and we had so much technology on the shelf which we were already using in our cloud. So basically, it's a micro cloud running in an appliance; that's what it is. It didn't take us that much time to develop this, technically and as a product. So we now have this customer starting to test it. They started testing the software solution; now they want to test the hardware solution.

Speaker 2:

I don't want to go full-blown selling it yet; I want customer feedback. I want them to operate it for quite some time, and then see how they like it and how we should tune it to be a good fit for them. Then we will make lists of potential customers and markets, and maybe we will sell it ourselves, or we will find a partner, someone who already has a sales force in certain markets. Some joint marketing with you guys would be interesting. Those are the things we'd look at to start selling this. We're not interested in selling hardware, right? We have the software stack, so we will sell it as a turnkey solution, but we will primarily focus on the software license for it.

Speaker 1:

What parts of the world are you primarily focused on, or are you literally selling globally?

Speaker 2:

Jet-Stream is a European company with a European focus, so our customer base is in Europe. But our audiences, of course, are around the world, because we have the CDNs; we can deliver throughout the world. Our primary customer base, though, is in Europe.

Speaker 1:

You know, I think it might be interesting to talk about, from your perspective, the challenge for the video distributors, the platforms that are, let's say, a notch or two or even a few more notches below a Netflix of the world, or the big pay TV platforms, the large satellite, cable or even IPTV operators. It's interesting because there's this tension between content licensing costs, the technology, and the business model, and sometimes it's hard to get those proportions correct; I've seen overinvestment in all of those areas. They're like levers: you need to get them into the right position, or it's going to be really difficult to make money. You have been in the business for a long time; you've seen the technology expansion and growth, and you've seen the shifts and the changes. So do you have any insights about where the state of streaming and the technology stack is today, and where it's going? Is there anything you'd like to share?

Speaker 2:

Technology-wise or business-wise, or both.

Speaker 1:

Both are fair game. More and more, my encouragement to engineers and people who primarily live on the technical or product side is to start getting an understanding of the business, because I think it makes us better engineers; it allows us to build better products. And of course there's a benefit for the business folks in having some understanding of the technology. But our audience is primarily going to be engineers, so if you have insights on the business side, go for it.

Speaker 2:

It's interesting. If you look at the business trends in the OTT market, like the Netflixes and HBOs, hardly any of them are making money, and that's a challenge, of course. There's a discussion going on between the telcos and the OTT companies. At Mobile World Congress, companies raised the question: why shouldn't the OTT providers pay the telcos a little money, because the telcos have to keep scaling up their infrastructure? I posted something on LinkedIn a few days ago and said: yes, but if you look at this industry, you have a lot of loss-making companies, not just in the OTT space but also among the vendors. Name me a vendor that's really profitable right now. Not that many are; actually most are losing money and burning through a lot of cash. Both the OTT providers and the technology vendors are burning through a lot of money. So basically they're subsidized by investors who hope that one of those companies will eventually get better rates or better revenues, or be bought by another company. That's not really healthy; it's not sustainable. I think there are too many vendors in this market space, and expectations have been too high. Some of those companies cannot meet their investors' expectations, so they're getting into trouble. That's bad, of course.

Speaker 2:

And another interesting observation I made: you have all these telcos, and the telcos are in a crowded, saturated market, but they're making money. And the OTT providers, the Netflixes and HBOs, are, I think, basically saturated too. There are so many offerings, and they already have the audiences; I don't think they can gain as many more customers as they hope. It's getting saturated, but none of them are really making money. So why, then, would the OTT companies start paying the telcos, who are actually profitable, when the OTT companies are not?

Speaker 1:

So that's an interesting perspective, and I'm sure it was not that popular at Mobile World Congress.

Speaker 2:

I tend to agree with the basic premise, though. If you look at these hyperscale OTT vendors like Netflix and all those guys, they have the budget to put edge caches inside the telcos and to negotiate great private peering deals, and that's great for them. So for them I don't think there's a real problem. The challenge is for newcomers in the industry who don't have the deep pockets and don't have the scale. They won't be able to negotiate the same deals with the telcos, and they won't be able to get their edge servers in there. That means you get two sides of the industry: some really large players with great performance, great capacity, and low cost, and challengers facing all kinds of thresholds to get into the market. And I think it touches on the discussion about net neutrality: how far should the telcos be forced to open up their networks for those guys as well?

Speaker 1:

What about insights on the technology side? It doesn't necessarily have to be codecs or encoding, but is there anything you're seeing: any trends, any requests coming in from these telcos and operators, even if they're tier two or tier three, or working in smaller markets, that might be interesting?

Speaker 2:

The markets we work for are typically medium-sized customers who need very complicated workflows, and that's what we solved with what we call our mix solution. You can start in an easy way, and then go under the hood and start tuning and tweaking features, which you cannot do with regular video platforms. And if you go to the expert level, you can plug in your own transcoders and your own players and build your own workflow yourself, using our stack of workflow orchestration tools and streaming features. We get a lot of positive feedback on this, because a lot of the people who enter the streaming industry are not necessarily OTT providers. They can also be an enterprise that needs to do something with online video, or an e-learning platform that wants to do something with video or live streaming. Traditionally there are two choices in the market. Either you go to a video platform, and that's easy: you sign up for not that much money, you can upload your videos, and they do everything for you. You get a video player and you can publish it on your website.

Speaker 2:

But a lot of those companies get stuck there, because they cannot optimize their encoding quality, they cannot optimize the player, they cannot optimize the multi-CDN distribution, and they don't get access to the data that they need.

Speaker 2:

So they get stuck with those platforms, and the step up to building your own stack of technologies is extreme, because then you have to start hiring cloud experts and streaming experts who go to Azure or AWS and start configuring all these modules and sticking everything together. It takes a lot of time and a lot of money to build that, and then you have your own home-grown streaming platform which does what you need today. But what about tomorrow? What if you want to go from H.264 to AV1, or whatever? You're stuck again. That's the problem. So that's why we also claim that 40% cost reduction: it's not just traffic or transcoding costs, it's also operational costs. People underestimate how much time it takes to build and maintain a streaming platform, and how much expertise you actually need in-house to do that.
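For readers who want a concrete picture of that do-it-yourself starting point, a first iteration often amounts to a hand-built FFmpeg ABR ladder. The sketch below is illustrative only: the rung heights, bitrates, and file names are assumptions, not anything Jet-Stream ships. It just shows how quickly a "simple" encoding script accumulates knobs that someone has to maintain.

```python
# Sketch: assemble an FFmpeg command line for a three-rung H.264 ABR ladder.
# Rung heights and bitrates are illustrative, not a recommendation.

LADDER = [  # (height, video_bitrate_kbps)
    (1080, 5000),
    (720, 3000),
    (360, 800),
]

def ffmpeg_ladder_cmd(src: str) -> list[str]:
    cmd = ["ffmpeg", "-i", src]
    for height, kbps in LADDER:
        cmd += [
            "-map", "0:v:0", "-map", "0:a:0?",   # video plus optional audio
            "-c:v", "libx264", "-b:v", f"{kbps}k",
            "-maxrate", f"{kbps * 1.1:.0f}k",    # cap bitrate peaks near target
            "-bufsize", f"{kbps * 2}k",          # VBV buffer, roughly 2x target
            "-vf", f"scale=-2:{height}",         # keep aspect ratio, even width
            "-c:a", "aac", "-b:a", "128k",
            f"out_{height}p.mp4",                # one output file per rung
        ]
    return cmd

print(" ".join(ffmpeg_ladder_cmd("master.mov")))
```

Every flag here (the rate caps, the VBV buffer, the scale filter) is a decision someone has to revisit whenever codecs, resolutions, or players change, which is exactly the hidden operational cost being described.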

Speaker 1:

Just because you can grab an open-source library, which is super powerful, right, and I think it's a wonderful thing. It's great that we have FFmpeg developed as it is. And x264 is, let's face it, an amazing encoder, it really is. And x265, great encoder. So this is all wonderful. These companies do have smart engineers, and they do have the talent to be able to build it. So it's not that they can't; it's that maintaining it for the life of the service is the part they miss. It's one thing to build it and have it work today, tomorrow, next month, and even for the rest of this year. It's another thing to have that same service rock solid in 2025, when maybe your user base has scaled 10x, 20x, 40x, 100x over what it is today. So if you look at quality…

Speaker 2:

I mean, most of the content on the web is HD or full HD, but what about 4K, and what does it mean for your encode?

Speaker 1:

You support 8K.

Speaker 2:

You know, that's coming one day. There are also two worlds coming together here: traditional broadcasting and the internet. On the internet we are used to very dynamic solutions. We have to change protocols, we have to change infrastructures, and what is working today may not be working tomorrow. In broadcasting, people are used to building systems for life: you build it and you don't touch it. And especially the people who have this more traditional broadcast attitude toward engineering infrastructure have a really hard time understanding the dynamics of the internet. Because, as I said, things can change tomorrow. I saw that the latest Safari beta from Apple would actually introduce AV1, so overnight this industry can change. And then how will you change your encoding? Have you thought about the effects on your CDN and origins if you migrate from H.264 to AV1? Probably not, and then it will break, or your logging will break, or your monitoring. So you have to think about more future-proof things.

Speaker 2:

And, by the way, I also saw that Apple pulled AV1 out of the latest beta release, so I'm really curious what's going to happen there.
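One concrete way to hedge against that kind of overnight codec shift is to publish both codecs side by side and let each player choose. In HLS, for example, each variant advertises its codec through the CODECS attribute of EXT-X-STREAM-INF. The snippet below is a hypothetical sketch: the codec strings, bandwidths, and URIs are illustrative examples, not Jet-Stream's actual configuration.

```python
# Sketch: an HLS master playlist offering AV1 and H.264 variants of the same
# rendition, so each player can pick a codec it supports.
# Codec strings and bandwidths are illustrative.

VARIANTS = [
    # (bandwidth_bps, codecs, uri)
    (4_500_000, "av01.0.08M.08,mp4a.40.2", "av1_1080p/index.m3u8"),
    (6_000_000, "avc1.640028,mp4a.40.2", "h264_1080p/index.m3u8"),
]

def master_playlist(variants) -> str:
    lines = ["#EXTM3U", "#EXT-X-VERSION:6"]
    for bandwidth, codecs, uri in variants:
        # Each variant declares its codec so players can skip unsupported ones.
        lines.append(
            f'#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},CODECS="{codecs}"'
        )
        lines.append(uri)
    return "\n".join(lines) + "\n"

print(master_playlist(VARIANTS))
```

A player that doesn't recognize the AV1 variant simply skips it and falls back to the H.264 one, so a codec migration can roll out gradually instead of breaking overnight.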

Speaker 1:

Well, Stef, this has been a really amazing discussion, and I want to thank you for joining Voices of Video and sharing all of your insights and what you've built with us and with the audience. So thank you. Thank you, Mark. And why don't you tell everybody where they can go to learn more about

Speaker 2:

Jetstream. Oh, it's simple: it's jet-stream.com. Well, thank you.

Speaker 1:

This episode of Voices of Video is brought to you by NETINT Technologies. If you are looking for cutting-edge video encoding solutions, check out NETINT's products at netint.com.
