Voices of Video

Unveiling the Future: RealSprint's Innovations in Low Latency Streaming and the Rise of AV1 Codec

NETINT Technologies Season 1 Episode 1

Transcoding with ASICs has gotten a lot of attention lately, from YouTube’s Argos VPU to Meta’s new ASIC, which prompted noted compressionist David Ronca to comment that there are only two types of companies: “those that are using Video Processing ASICs in their workflows, and those that will.”

Based in Sweden, RealSprint provides live-streaming solutions to companies around the globe. When choosing a transcoder for the AV1 capabilities of its Vindral Live CDN, RealSprint considered CPU-based software transcoding as well as GPU and ASIC acceleration. Not surprisingly, RealSprint made the same decision as YouTube and Meta and chose ASICs.

Listen to Jan Ozer from the Streaming Learning Center and Daniel Alinder from RealSprint as they discuss the role of hardware encoding and ASICs in enabling ultra-low latency 4K streaming with the AV1 codec.

Daniel takes us on a journey through the creative forces driving RealSprint and introduces Vindral, their cutting-edge CDN-type product designed for low latency and high-quality streaming. You'll learn about the innovative ecosystem RealSprint thrives in and how it leads to groundbreaking solutions in video and broadcasting.

We unravel the technical complexities of video encoding, focusing on the critical choice between ASICs and GPUs, especially in scenarios demanding real-time performance. Daniel shares how these decisions are shaped by factors like quality, density, cost, and the global chip shortage. Tune in to understand why AV1 is emerging as the codec of choice for modern streaming solutions, offering a perfect blend of efficiency and performance, particularly with the support of technologies like NETINT ASIC.

Join us as we delve into the debate on low latency streaming in the video industry. Daniel and I explore the evolving audience expectations and the potential of next-generation technologies to transform the landscape. We navigate through the complexities of synchronization in broadcasts and the economic considerations of adopting low-latency solutions across different regions. By the end, you’ll see why the dismissal of low latency might soon become a relic of the past as the industry gears up for inevitable change.

Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.

Speaker 1:

Voices of Video. Voices of Video. Voices of Video. Voices of Video.

Speaker 2:

Welcome everyone. I'm Jan Ozer from NETINT. I'm here today with Daniel Alinder from RealSprint, and we're going to talk about his new CDN-type product, which is called Vindral. Hey, Daniel, thanks for coming. Why don't you tell us a little bit about RealSprint?

Speaker 1:

Hey, thanks for having me. I would love to tell you about RealSprint. You're going to have to tell me when to stop because I might go on for too long. You just let me know. I mean RealSprint. We're a Swedish company. We are based in northern Sweden, which is kind of a great place to be running a tech company. Honestly, we're in a university town. Any time after September it gets really dark outside for most parts of the day, so people generally try to find things to do inside. So it's a good place to have a tech business because you'll have people spending a lot of time in front of their screens and creating things.

Speaker 1:

So it's a heavily culture-focused team of around 30 people, mostly here in northern Sweden. We have some in Stockholm and also in the US. The company itself started as a really small team that did not have the end game figured out yet, around 10 years ago or something, but they wanted to do something around video, around broadcasting and streaming. So yeah, from there it's grown, and today we're 30 people.

Speaker 2:

Well, let's talk about Vindral. At a high level, what is Vindral? And I feel like I'm mangling the name. How would you pronounce it?

Speaker 1:

No, you're doing it great. Vindral, that's correct. It's actually a product family: there is a live CDN, as you mentioned, and there's also a video compositing software. What we're going to be talking about today is the Live CDN, which has been live in earlier generations and now in its current form; I think it's around five or six years that it's been running 24-7. And the product was born because we got questions from our clients around latency and quality. Basically: why do I have to choose if I want low latency or if I want high quality? Because there are solutions on both ends of that spectrum, but when we got introduced to the problem, there wasn't really a good solution. We started looking into real-time technologies, like WebRTC, for example, and quickly found that the state of it, then and still now, is that it's not really suitable if you want high quality. It's amazing in terms of latency, but in the client's reality, you can't always go all in on only one aspect of a solution. You need something that's balanced.

Speaker 2:

Draw us a block diagram. So you've got your encoder, you've got your CDN, you've got software. Draw that for the people listening to us.

Speaker 1:

Yeah, definitely. So we can take a typical client; maybe they're in entertainment or they're in gaming. They have their content and they want to broadcast it to a global audience. Generally, the most standard way of using our CDN is they ingest one signal to our endpoint. There are several ways of ingesting, several transfer protocols. And the first thing that happens on our end is we create the ABR ladder.

Speaker 1:

We transcode to all the qualities that are needed, naturally, because network conditions are different in different markets, and even in places that are well connected, just the home Wi-Fi can be so bad at times. Honestly, there's a lot of jitter and latency going on there. So after the ABR ladder is created, the next box fans out to the places in the world where there are potential viewers. From there we also have Edge software as one component of this. And lastly, the signal is received by the player that's instanced on the device. So that's basically what you were asking, right?
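To make the ABR ladder idea concrete, here is a minimal sketch in Python. The rungs and bitrates below are hypothetical illustration values, not Vindral's actual profiles: the logic simply picks the highest rung whose bitrate fits the viewer's measured bandwidth, which is how poorly connected last-mile viewers still get a watchable stream.

```python
# Illustrative ABR ladder as (name, width, height, bitrate_kbps).
# These rungs are hypothetical examples, not Vindral's actual profiles.
LADDER = [
    ("2160p", 3840, 2160, 12000),
    ("1080p", 1920, 1080, 4500),
    ("720p", 1280, 720, 2500),
    ("480p", 854, 480, 1200),
    ("360p", 640, 360, 600),
]

def pick_rendition(measured_kbps: float, headroom: float = 0.8):
    """Return the highest rung whose bitrate fits within the measured
    bandwidth, leaving some headroom for jitter."""
    budget = measured_kbps * headroom
    for rung in LADDER:  # ordered highest quality first
        if rung[3] <= budget:
            return rung
    return LADDER[-1]  # worst case: fall back to the lowest rung

print(pick_rendition(6000))  # decent connection -> the 1080p rung
print(pick_rendition(600))   # constrained last mile -> the 360p rung
```

In a real deployment the transcoder produces every rung continuously and the player switches between them; the selection rule above is the part that varies per vendor.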

Speaker 2:

Yeah. So you've got an encoder in the middle of things creating the encoding ladder. Then you've got the CDN distributing. What about the software that you've contributed? How does that work? So I log into some kind of portal, I would guess, and then administrate through there.

Speaker 1:

Exactly. Take a typical client; we have clients in iGaming, for example. They're running 50 or 100 channels, and they log in and see their usage and all of the channel information that they would need, because it's a very important part, of course, of any mature system that the client understands what's going on.

Speaker 1:

One of the topics for today, the encoding, is particularly important for us to solve because we have loads of channels running 24-7. If your typical client is broadcasting for 20 minutes a month or something like that, then of course the encoding load is much lower. In our case, yes, we do have those types, but many of our clients are heavy users and they own a lot of content rights, and therefore the encoding part is several hundreds of terabytes ingested monthly, counting only one quality for each stream, on the ingest side.

Speaker 2:

Okay, so you're encoding ABR. Which codecs are you supporting and which endpoints are you supporting?

Speaker 1:

Yeah, so codec-wise, I mean, everybody does H.264, of course; that's the standard when it comes to live streaming with low latency. And we have recently added AV1 as well, which was something we announced as a world first. I mean, we weren't world first with AV1, but we were world first with AV1 at what many would call real time; we call it low latency. We chose to add that because there are a lot of pointers in the market pointing to AV1, in short.

Speaker 2:

Okay, and which devices are you targeting? Is it TVs, smart TVs, mobile, the whole gamut.

Speaker 1:

Yeah, I would say the whole gamut. Actually, that list of devices is steadily growing. I'm trying to think if there are any devices that we don't support, because as long as it's using the internet, we're delivering to it. So, any desktop browser, any mobile browser, including iOS as well, which is basically the hardest one. If you're delivering to iOS browsers, they're all running iOS Safari, and we're getting the same performance on iOS Safari. And then Apple TV, Google Chromecast, even Samsung and LG TVs and Android TVs. There's a plethora of different devices that our clients require us to support.

Speaker 2:

4K, 1080p, HDR, SDR?

Speaker 1:

Yes, all of these things; the answer is yes. One very important thing for us is, of course, to prove the point that you get quality while getting low latency. Take sports: their viewers are used to watching on their television, maybe a 77-inch or 85-inch TV. You don't want that user to get a 720p stream, so you need it to be high quality. And it is kind of a point that we want to make, because we can do it as well. This type of client would typically pick a configurable latency, maybe a second or 800 milliseconds, and they want 4K to be maintained at that latency. That's one of the use cases where we shine, practically. There's also a huge market where lower qualities are important; you mentioned ABR ladders, and there are markets where you get 600 kilobits per second on the last mile. You have to solve for that as well.

Speaker 2:

So your system is the delivery side and the encoding side. Which types of encoders did you consider when you, you know, chose the encoder to fit into Vindral?

Speaker 1:

There are actually two steps to consider there, just to avoid any misconceptions. The client, depending on whether we're doing an on-prem or a cloud solution for them, often has their own encoders. Many of our clients are using, like, Elemental or something just to push the material to us. But on the transcoding, where we generate the ladder, unless we're passing all qualities through, which is also a possibility, there are different directions to go, and they all fit different scenarios. For example, an Intel CPU-based server using software to encode is a viable option in some scenarios, not all, and there are NVIDIA GPUs, for example, which you can use in some scenarios, because there are many factors coming into play when making that decision.

Speaker 1:

I would say the highest priority of all is something that our business generally does badly, that is, maintaining business viability. You want to make sure that any client that is using your system can pay and can make their business work. Now, if we have channels that are running 24-7, as we do, and it's in a region where it's not impossible to allocate bare metal or co-location space, then that is a fantastic option in many ways. So those are the three different options we've looked into: CPU-based software encoding, GPU-based encoding, and ASICs.

Speaker 2:

So how do you differentiate? I mean you talked about software being a good option in some instances. When is it not a good option?

Speaker 1:

I mean, no option is good or bad in itself, but if you compare them, both the GPU and the ASIC outperform software encoding. Where software shines is when you need to spin it up, spin it down, and move things around; you need it to be flexible, which is, quite honestly, the lower-revenue part of the market. When it comes to the big broadcasters, the large rights holders, the use case is heavier; you get many channels and a lot of usage over time, and then the GPU and especially the ASIC make a lot of sense.

Speaker 2:

Okay, and you're talking there about density. What is the quality picture? A lot of people think that software quality is going to be better than ASICs and GPUs. How do they compare?

Speaker 1:

Well, it might be. In some instances we've found that quality when using ASICs is fantastic; it all depends on what you want to do. Because we need to understand, we're talking about low latency here. We don't have the option of two-pass encoding or anything like that; everything needs to work in real time. So our requirement on encoding is that it takes one frame's time to encode, and that's all the time that you get.

Speaker 1:

But you mentioned density, and there are a lot of other things coming into play. Quality, definitely. Also, even if you're looking at ASICs and comparing them to GPUs, the past two years have been like, okay, there's a chip shortage, what can I get my hands on? That's even been a deciding factor in some cases, where we've had a client banging on the door and they want this to go live.

Speaker 1:

But going back to the density part, that is a game changer, because the ASIC is unmatched in terms of number of streams per rack unit, if you just measure that KPI and you're willing to do the job of building your CDN in co-location spaces, which not everybody is. You have to ask yourself who's going to manage this; you don't want bloat when you're managing this type of solution. If you have thousands of channels running, cost is one thing when it comes to not taking up a lot of rack space, but you also don't want it to bloat too much.

Speaker 2:

I mean, how formal an analysis did you make in choosing, say, between the two hardware alternatives? Did you bring it down to cost per stream and power per stream, and did you do any of that math? Or how did you make that decision between those two options?

Speaker 1:

Well, in a way, yes. But on that particular metric, we could just look at the two options and say, well, this one is at a tenth of the cost, so I don't have to give you the exact number, because I know it's so much smaller. Of course, we're well aware of what costs are involved, but the cost per stream all depends on profiles, et cetera.

Speaker 1:

But just comparing them, yes, naturally we've looked at it. We started encoding streams, especially in AV1, and looked at what the actual performance is, how much load there is, what's happening on the cards, and how much work you can put on them before they start giving in. But then again, there's such a big difference. And there's one thing to mention here. Take, for example, a GPU: a great piece of hardware, but buying one for encoding is kind of like buying a car for the sound system, because if I'm buying an NVIDIA GPU to encode video, I'm not even using the actual rendering capabilities, which is the biggest job that the GPU is typically built for. So that's one of the comparisons to make, of course.

Speaker 2:

What about the power side? How important is power consumption to you and your customers?

Speaker 1:

It is very important. I remember we had a conversation, when was that, IBC? It was September. And even since then, if you look at the energy crisis and how things are evolving: the typical offer you'll be getting from the data center is, we're going to charge you 2x the electrical bill, and that's never even been something that's been charged before, because they didn't even bother. Now we're seeing the first invoices coming in where the electrical bill is actually a part of it.

Speaker 1:

I mean, if you look at Germany, the energy price just peaked in August at 0.7 euros per kilowatt hour. That's amazing. And in Germany you have Frankfurt, which is one of the major exchanges; that is extremely important. If you want performance streaming, you're going to have to have something in Frankfurt.
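To put that peak price in perspective, here is a quick back-of-the-envelope calculation for a server running 24-7. The wattages are hypothetical round numbers chosen for illustration, not measured figures for any particular hardware:

```python
# Back-of-the-envelope power cost at the quoted peak price of
# EUR 0.70 per kWh. Wattages below are hypothetical illustration
# values, not measurements of any specific encoder hardware.
PRICE_EUR_PER_KWH = 0.70
HOURS_PER_MONTH = 24 * 30  # a channel running 24-7

def monthly_energy_cost(watts: float) -> float:
    """Monthly electricity cost in EUR for a constant load."""
    kwh = watts / 1000 * HOURS_PER_MONTH
    return kwh * PRICE_EUR_PER_KWH

# e.g. a hypothetical 500 W GPU server vs a 150 W ASIC-based server
print(round(monthly_energy_cost(500), 2))  # -> 252.0
print(round(monthly_energy_cost(150), 2))  # -> 75.6
```

At crisis-level prices, the per-watt difference between hardware options compounds quickly across a rack, which is why the density and power discussion above matters commercially, not just environmentally.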

Speaker 1:

So that's one part of it, if you don't mind me just going on here, because there's another part of it as well, which is, of course, the environmental aspect. One thing is the bill that you're getting; the other thing is the bill we're leaving to our children. And it's kind of two-sided, because many of our clients make travel unnecessary. There's a Norwegian company that we're working with that does remote inspections of ship hulls; they were the first company in the world to do that. Instead of flying in an inspector, the ship owner, and two divers to the location, there's only one operator of a little underwater drone on location, and everybody else is just connected. That's obviously a good thing for the environment. But what about our own footprint? That's also something we need to consider. So, besides the price, there's also, I mean, winter is coming, and there's an energy crisis.

Speaker 2:

So let's switch gears. Let's talk about AV1. Why did you decide to lead with AV1?

Speaker 1:

That's a really good question. There are several reasons why we decided to lead with AV1. It is very compelling, as soon as you can do it in real time, because we had to wait for somebody to really make it viable, which we found with the NETINT ASIC: viable at high quality and with a latency and reliability that we could use, and also, of course, with throughput, so we don't have to buy too much hardware to get it working. But what we're seeing are markers that our clients are going to want AV1, and there are several reasons why that is the case, one of which is, of course, that it's license-free.

Speaker 1:

So if you're a content owner, especially a content owner with a large crowd, with many subscribers to your content, that's a game changer. The cost that you have for licensing a codec can grow to become a not insignificant part of your business. Even look at what's happening with FAST, for example: free ad-supported television. There you're trying to get even more viewers and you have lower margins, and what you're doing is actually creating eyeball minutes. If you have a codec that carries license costs, that's a bit of an issue. It's better if it's free.

Speaker 2:

Is this what you're hearing from your customers, or is this what you're assuming they're thinking about?

Speaker 1:

That's what we're hearing from our customers, yes; that's why we started implementing it. For us there's also the bandwidth-to-quality aspect, which is great, but why we believe in it is because that's what we're hearing. And my belief is that it will explode in 2023, because, for example, if you look at what happened just one month ago, Google made hardware decoding mandatory for Android 14 devices. That's both phones and tablets. When even my little Samsung S22, the EU version here, supports it for decoding, it opens up so many possibilities. We were actually maybe not expecting to get business on it quite yet, but we are, which, I mean, I'm happy about. There are already clients reaching out because of the licensing aspect, and some of them are transmitting petabytes a month; if you can bring down the bandwidth and retain the quality, that's a good deal.

Speaker 2:

You mentioned before that your systems allow the user to dial in the latency and the quality. Could you explain how that works?

Speaker 1:

We're going to have to make a distinction between the user and the broadcaster.

Speaker 1:

So our client is the broadcaster that owns the content, and they can pick the latency. The Vindral Live CDN is not on a fetch-your-file basis. The way it works is: we're going to push the file to you, you're going to play it out, and this is how much you're going to buffer. Once you have that set up, and of course there are a lot of sync algorithms and things like that at work, the stream is not really allowed to drift.

Speaker 1:

A typical use case: take live auctions, for example. The typical setup for live auctions is 1080p, and you want below one second of latency because people are bidding, and there are also people bidding in the actual auction house, so there's the fairness aspect of it as well. What we typically see is they configure maybe a 700 millisecond buffer, and even that small buffer makes such a huge difference. What we see in our metrics is that basically 99% of the viewers are getting the highest quality stream across all markets. So that's a huge deal.
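A configurable buffer like the 700 millisecond one described here is commonly held in place by nudging the playback rate: play slightly faster when the buffer grows, slightly slower when it drains, so the stream cannot drift from its target latency. Here is a generic sketch of that textbook technique; it is not Vindral's actual sync algorithm:

```python
# Minimal sketch of buffer-based latency control: nudge the playback
# rate to hold the buffer at a configured target (e.g. 700 ms).
# Generic proportional-control technique, not Vindral's actual algorithm.
def playback_rate(buffer_ms: float, target_ms: float = 700.0,
                  max_adjust: float = 0.05) -> float:
    """Return a playback-rate multiplier clamped to
    [1 - max_adjust, 1 + max_adjust]."""
    error = (buffer_ms - target_ms) / target_ms
    adjust = max(-max_adjust, min(max_adjust, error * max_adjust))
    return 1.0 + adjust

print(playback_rate(700))   # on target -> 1.0 (normal speed)
print(playback_rate(1400))  # buffer too full -> 1.05 (catch up)
print(playback_rate(350))   # buffer draining -> 0.975 (ease off)
```

Because every player converges on the same target buffer, viewers also stay synchronized with each other, which is the fairness property the auction use case depends on.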

Speaker 2:

How much does the quality drop off? I mean, what's the lowest latency you support, and how much does the quality drop off at that latency as compared to one or two seconds?

Speaker 1:

I would say the lowest we would recommend somebody to use our system for is 500 milliseconds. That would be about 250 milliseconds slower than a WebRTC-based, real-time solution. And why I say that is because, below that, I see no reason to use our approach. If you don't want a buffer, then maybe it's better to use, I mean, not better, but you might just as well use something else. So we don't have many clients trying that out; I think 500 milliseconds is the lowest somebody has set, and they've been like, this is so quick, we don't need anything more. And it retains 4K at that latency.

Speaker 2:

How does the pitch work against WebRTC? If I'm a potential customer of yours and you come in and talk about your system compared to WebRTC? What are the pros and cons of each?

Speaker 1:

I'm going to do this as non-salesy as I can. It's a webinar; I don't want to be sitting here just selling the things that we do.

Speaker 2:

Yeah, it's an interesting technology decision, and I'm not looking for you to sell the platform so much as, you know, I know that WebRTC is going to be potentially lower latency, but it might only be one stream, it may not come with captioning, it may not be ABR. So talk about that type of stuff, because you're playing in their space and it's interesting to hear, technology-wise, how you differentiate.

Speaker 1:

I would say, from the perspective of when you should be using which: if you need to have a two-way voice conversation, you should use WebRTC. That I'm sure of, because there are actually even studies that have been made: if you bring the latency above, I think it's 200 milliseconds, a conversation starts feeling awkward. At half a second, it is possible, but it's not good. So if that's an ultimate requirement, then WebRTC all day long. Now, where it differs: they're actually very similar. The main difference I would point out is that we have added this buffer that the platform owner can set, so that when the player is instanced it's at that buffer level, and WebRTC currently does not support that.

Speaker 1:

And I will say, even if it did, we might implement that as an option; it might go that way at some point, but today it's not there. So, on the topic of differences, I would say: if 700 or 600 milliseconds of latency is good for you and quality is still important, then you should be using a buffer, and our solution. And, as you mentioned, when you're considering different vendors, there are huge differences in the feature set and what you're actually getting in the package. Some vendors, maybe on their lower-tier products, don't even include ABR, things like that, where, yeah, you should definitely be using ABR.
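The rule of thumb Daniel describes can be condensed into a few lines. This is my own summary of the discussion, not a formal decision procedure, and the threshold values are the approximate ones mentioned in the conversation:

```python
# Decision sketch distilled from the discussion: two-way conversation
# (or sub-500 ms needs) -> WebRTC; otherwise, if roughly 0.5-1 s of
# latency is acceptable and quality matters, a buffered low-latency
# CDN. An editorial summary, not a formal rule from either speaker.
def suggest_approach(two_way_audio: bool, max_latency_ms: int) -> str:
    if two_way_audio or max_latency_ms < 500:
        return "WebRTC (real-time, conversational)"
    return "buffered low-latency CDN (quality-stable)"

print(suggest_approach(True, 200))   # conversation -> WebRTC
print(suggest_approach(False, 700))  # auction-style use -> buffered CDN
```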

Speaker 2:

What's the longest latency you see people dialing in?

Speaker 1:

You know, you talked about the shortest. Well, we've actually had one use case in Hong Kong where they chose to set the latency at 3.7 seconds, if I remember it correctly. That was because the television broadcast was at 3.7 seconds. Because that's the other thing: we talk a lot about latency, latency is a hot topic, but honestly, many of our clients value the synchronization even above latency. Not all clients, but some of them.

Speaker 1:

If you have a game show where you want to react to the chat and have some sort of interactivity, maybe you have 1.5 seconds. That's not a big issue, and you will get a little bit more stability, naturally, since you're increasing the buffer. So some of our clients have chosen to do that. But around three and a half, that's actually the only client we've had that has done that. I think there could be more in the future, especially in sports, because if the satellite broadcast is at seven seconds of latency, the thing is, we can match that. We can match it down to the hundreds of milliseconds.

Speaker 2:

And the advantage of higher latency is going to be stream stability and quality. What's the quality differential going to be?

Speaker 1:

Definitely. But I would say, as soon as you're above even one second, there are diminishing returns; it's not like it unlocks a whole new level. In extreme markets it might, but if you're going above two seconds, you're kind of done; you don't have to go higher than that. At least our clients have not found that they need to, and their markets run basically from East Asia to South America and South Africa, because we've expanded our CDN into those parts.

Speaker 2:

So you've spoken a couple of times about where you install your equipment, co-locating and things like that. What does your typical server look like? How many encoders are you putting in it? What type of density are you expecting from that?

Speaker 1:

Well, that's going to be different based on what we use. ASIC cards, for example: we can install several of those into each server, and every ASIC card handles a number of streams depending on profile. I'm not going to do the guessing; I'd have to get back to you if I'm sharing numbers.

Speaker 2:

Well, just tell us in general.

Speaker 1:

Yes, so in general it would be something like: one server can do 10 times as many streams if you're using the ASIC than if you're using GPUs, like NVIDIA, for example. Maybe on one server you can do 20 streams, maybe something like that, and I'm factoring the ABR ladders into that.

Speaker 2:

So 20 ladders or 20 streams?

Speaker 1:

20 ladders is what I mean. I wasn't going to do this, because my tech guys are going to tell me that I was wrong.
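Taking the rough figures from this exchange at face value, about 20 ABR ladders per ASIC-equipped server and roughly a tenth of that per GPU server, the deployment-level arithmetic looks like this. All numbers are the approximate ones mentioned in the conversation plus a hypothetical ladder depth, not vendor specs:

```python
# Rough density comparison using the approximate figures from the
# conversation (not vendor specs): ~20 ABR ladders per ASIC server,
# about 10x what a comparable GPU server delivers.
LADDERS_PER_ASIC_SERVER = 20
LADDERS_PER_GPU_SERVER = 2   # implied by the ~10x figure
RUNGS_PER_LADDER = 5         # hypothetical ladder depth

servers = 10  # e.g. one small co-location deployment
asic_ladders = servers * LADDERS_PER_ASIC_SERVER
gpu_ladders = servers * LADDERS_PER_GPU_SERVER

print(asic_ladders, gpu_ladders)        # 200 vs 20 channels
print(asic_ladders * RUNGS_PER_LADDER)  # 1000 output renditions
```

Per-rack-unit density like this is the KPI Daniel cites earlier when arguing that ASICs change the economics of co-located CDNs.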

Speaker 2:

Let me look for a couple of questions from the audience. We've got one, and this is a question about your system. Can you create an ABR ladder with StatMux? Is that something you would know offhand?

Speaker 1:

We don't have a StatMux integration; I'm actually not very well versed in StatMux. It's something that we should be able to do. If I'm not mistaken, it's actually using HEVC. Is that correct?

Speaker 2:

I don't know.

Speaker 1:

Which is, I mean, if it is, it's definitely something that we could do. We have not implemented it right now, but if there's a market demand for it: we say that on our egress side we are codec agnostic, so in a sense, if there is a demand, yes, we'll solve it. That's no problem for us.

Speaker 2:

Another question is what is the cost of low latency?

Speaker 1:

Okay, sorry, that's a very general question.

Speaker 2:

If I decide to go with the smallest setting, what is that going to cost me? I guess there's going to be a quality answer and a stability answer. Is there a hard economic answer?

Speaker 1:

My hope is that there shouldn't be a cost difference, but I'll be honest here: it depends on the region. The way we operate, as I mentioned before, it's about the design paradigm of the product that you've created. We have competitors that are going with one partner: they've picked cloud vendor X and they're running everything in that cloud, and then what they can do is what they can do. They've made a deal with their cloud vendor and that's the limit.

Speaker 1:

Now, we get requests. For example, we had an AV1 request from Greece, a huge egress that I was blown away by, that that big an internet TV channel actually existed. And they mentioned their pricing, because we asked them: okay, so what's your HLS pricing? We want this to make sense for you. They were asking us because they wanted to save costs by cutting their traffic using AV1. So what we actually did with that request is we went out to our partners and our vendors and asked: can you help us match this? And we did.

Speaker 1:

So I'm going to say, from a business perspective, yes, it might in some cases cost more. But there is also an image of high cost that plagues the low latency business, and that is because many of these companies have not considered their power consumption, their form factors, or actually being willing to make CapEx investments instead of just running in the cloud and paying as you go. Those are the things that we've chosen to put the time into, so that there will not be that big a difference.

Speaker 1:

Take, for example, one of our bigger partners, Tata Communications. They're running our software stack in their environments to run their VDN, and it's at cost parity. That's something that should always be the aim. I'm not going to say it's always going to be like that, but that's the short answer on the business implications. We often get requests where the potential client thinks it's going to be a very high cost, and then they find that, well, this makes sense, we can build a business on it.

Speaker 2:

Interesting. Are you seeing companies moving away from the cloud towards creating their own co-located servers with encoders and producing that way as opposed to paying cents per minute to different cloud providers?

Speaker 1:

I would say I'm seeing the opposite, and too much of it, because we're doing both, just to be clear. I think the way to go is to do hybrid, because some clients are going to be broadcasting 20 minutes a month. Cloud is awesome for that. You just spin it up when you need it and you kill it when it's done. But that's not always going to cut it. If you're asking me what motion I'm seeing in the market, it's more and more of these companies deploying across one cloud, and that's where it resides. There are also the types of offerings that you can instance yourself in third-party clouds, which is also an option, but again, it's the design choice that it's a cloud service that uses underlying cloud functions. It's a shame that it's not more of both. It creates an opportunity for us, though. Maybe I shouldn't be saying this.

Speaker 2:

Finishing up. What are the big trends that you're chasing for 2023 and beyond? What are you seeing? What are the forces that are going to impact your business, the new features you're going to be picking up? What are the big technology directions you're seeing?

Speaker 1:

I mean, for us, on our roadmap, we have been working hard on our partner strategy, and we've been seeing a higher demand for white-label solutions, which is what we're working on with some partners. We've done a few of those installs, and that's where we are putting a lot of effort, because we're running our own CDN, but we can also enable others to do it, even as a managed service. Then you have these telcos, some of whom have maybe had an HLS offering from before, and they're sitting on tons of equipment and fiber. So that's one thing. But if we're making predictions, I would say two things are worth a mention. I would expect the sports betting market, especially in the US, to explode, and that's something that we're definitely keeping our eyes on. And maybe live shopping becomes a thing outside of China. It already is, I mean, many of the big players, for example the big retailers and even financial companies, are working on their own offerings in live shopping. But if I can add something to the wish list for the coming few years... I don't know if I've told you about the dinosaurs' agreement? No? Okay, so it's comparable to a gentleman's agreement.

Speaker 1:

There is, and this might be provocative to some, and I get that it's complicated in many cases, but there is, among some of the bigger players and also among independent consultants that have different stakes, this question being asked: do we really, really need low latency, or do we really, really need synchronization?

Speaker 1:

And while it's a valid question, I get it, it's valid, but it's also kind of a self-fulfilling thing, because as long as the bigger brands are not creating the experience that the audience is waiting for them to create, nobody's going to have to move. So what I'm calling the dinosaurs here are those holding on to the thing they've always been doing and optimizing that, but not moving on to the next generation. And the problem they're going to be facing, hopefully, is that when it reaches critical mass, the viewers are going to start expecting it, and that's when things might start changing. I totally understand that there are many workflow considerations. Of course, there are tech legacy considerations, cost considerations, and different aspects when it comes to scaling. But saying that you don't need low latency, that's a bit of an excuse, I'd say.

Speaker 2:

We're out of time. I appreciate you coming on board and sharing all this information with us, and good luck with the service going forward.

Speaker 1:

Thank you. Thank you. This episode of Voices of Video is brought to you by NETINT Technologies. If you are looking for cutting-edge video encoding solutions, check out NETINT's products at netint.com.
