Voices of Video

The Engine Isn’t Enough: Building Robust Media Frameworks Around the VPU

NETINT Technologies Season 3 Episode 21

The beating heart of every video streaming service is its encoding technology, but raw power alone isn't enough to deliver exceptional viewer experiences. In this eye-opening conversation, Mark Donnigan explores what happens when you combine the incredible performance of Video Processing Units (VPUs) with thoughtfully designed software frameworks.

Mark Donnigan compares the VPU to a high-performance engine – essential and powerful, but ultimately useless without the surrounding vehicle.

Dominique Vosters explains: “Initially performance was the key differentiator, but going beyond that, you can make the system even better with the whole software layer around it.” He details how Scalstrm has been building resilience, redundancy, and flexibility into complete media processing systems that transform raw encoding capability into production-ready solutions.

Alexander Leschinsky draws an analogy to networking hardware: VPUs are like the ASICs inside routers, immensely powerful but only useful when paired with robust frameworks and tested workflows. He stresses that integrators must combine VPUs with CPUs or GPUs when unusual requirements (like deinterlacing or MPEG-2) arise, and that customers ultimately want battle-tested reliability rather than raw interfaces.

Together, the guests reveal:

  • VPUs can provide 10x efficiency improvements, but need software frameworks to create complete solutions.
  • Format diversity remains challenging — from deinterlacing to supporting 32 audio channels per stream, as in the European Parliament project mentioned by Alexander Leschinsky from G&L.
  • Some formats must be handled outside the VPU, either on CPUs or other workflow stages.
  • Dominique Vosters notes that open-source tools like FFmpeg can be useful for proofs of concept but fall short for live production due to resilience gaps.
  • Alexander Leschinsky highlights the distinction: FFmpeg is great for controlled VOD environments, while commercial solutions deliver better results in demanding live workflows.
  • Total cost of ownership is a top driver for adoption: both guests stress that VPU acceleration reduces hardware requirements, lowers power use, and brings sustainability benefits.
  • Alexander Leschinsky even showcases a Raspberry Pi with an M.2 VPU card powered over Ethernet, demonstrating extreme edge efficiency in action.

As Dominique Vosters emphasizes, understanding business requirements must come before technical decisions when migrating to new encoding solutions. The software frameworks around VPUs are just as important as the VPUs themselves.

Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.

Mark Donnigan:

Welcome back to this edition of Voices of Video. We are in the buildup to IBC; it's about T-minus five, five and a half weeks. Hard to believe. That also means that summer is over and we are coming into the fall, which for some of us in warmer climates is probably very welcome. But in all seriousness, this is a really exciting time, because I get to be joined by our VPU ecosystem partners, and we're holding a whole series of conversations for you, the listeners, both about what is happening in the ecosystem and about what you will see, and the conversations you will have, at IBC. So with that, I am joined again by Dominique from Scalstrm. Thank you for being on. Welcome to Voices of Video.

Dominique Vosters:

I'm glad to be here again, hi.

Mark Donnigan:

And Alexander, I think we've done five or six episodes now together, across our various shows and channels; I've lost count.

Alexander Leschinsky:

It's good to be here again.

Mark Donnigan:

Well, it's really great to have you, it's great to have both of you. Dominique, we've also talked multiple times, so as you know, we were preparing and brainstorming about what we wanted to talk about in this particular interview segment, and I'm really excited about this discussion.

Mark Donnigan:

Often we characterize the VPU as the engine, the engine of the video encoding workflow, which is, of course, essential, paramount to what we all do in terms of operating services and delivering content. But there is a whole software framework around the engine. Just like a car: you could have an engine sitting in your garage, but it's not going to be very useful by itself, and yet the car is useless without that engine. And the better the engine is, the faster the car can go, the less fuel it uses, all those good things. So I think it's a great analogy. Today we're going to talk about the software around the engine, the software around the VPU. Who wants to start? Because I know both of you have examples of discussions and projects where the software is a really central component.

Dominique Vosters:

I don't mind starting first. I think it's a good topic to discuss, because often we forget what's around the engine. It's important to have a very powerful engine, and this is how we came into contact, what is it, more than a year ago now, at IBC last year, I think, for the first time. With the powerful engine you can be sustainable and do much more on one server.

Dominique Vosters:

But if you start digging into the details and go into live production systems, there's a lot more around it than the engine itself. Mainly from the customer point of view: there are different formats of signals, there may be interruptions in the network, so there's a whole layer behind it, or beyond it. This is what we figured out during the year. You can use standard tools to drive the cards, but it's important to build a whole layer around them, and this is what we've done in the last few months, where we built a rock-solid system together with the engine: mainly resilience, redundancy features around it, and also other flexibility, like spinning it up with APIs and so on. Initially the performance was the key differentiator, but if you go beyond that, then even with the powerful engine you can make the system better with the whole software layer around it, and this is what we've been focusing on in the last few months.

Mark Donnigan:

Yeah. Alexander, I know this is what you do as an integrator.

Alexander Leschinsky:

Yeah, of course we interface with all kinds of hardware cards in very different ways, and when we first learned about the VPUs, it was great to see that they support both the low-level interface, which we would not touch because that's one level below what we usually do, and also FFmpeg and GStreamer, so we would have two mainstream open-source tools with which we could access these cards. However, as we do most of our media processing based on the Norsk SDK from id3as, that was our main interface to the card. But we knew all the time we could interface with it using different tools in different ways, so that gave us some leeway in how to approach this. In the end, for live processing we usually just use the Norsk SDK, which has a lot of advantages for us here. What we see is that the engine itself is very important, it's the core of what you can do with these cards, and it's amazing what kind of density we can achieve with that, how powerful they are. But in a way, it's an ASIC under the hood, and if you look at other fields where you have an ASIC, it's most prominently, for instance, in switches and routers.

Alexander Leschinsky:

They also have ASICs that do the heavy work, but the ASIC there, as well, is not accessible to you in its raw form. You need something that interfaces with it, and there you have different operating systems on these switches and different software tools that all come with their pros and cons. So it's important that you have that independent piece of hardware that you can access in different ways. But then, for a specific use case, for a specific customer group, you boil it down to something that fulfills their needs completely, where you completely adapt to what they want to do with the system. That is where you leave that universal FFmpeg interface and rather build something that is more battle-tested, more reliable. And part of that is also the testing.

Alexander Leschinsky:

This is what the card can do in principle, but this is the subset of use cases that I support and have thoroughly tested, so that I can testify to the customers: if you want to do that workload, we've tested exactly that workload, it works, and we can put our guarantee on it. That's one part of how to work with such a physical card system. The other part is that there is a lot of functionality the card covers, but it does not cover everything. It's a specialist, and what it does, it does very well, but it does not do everything. There is still a part of our day-to-day business where we need functionality that is not available on the card, so we have to combine it with software-based or GPU-based transcoding. So it's good to have these options.

Mark Donnigan:

Yeah. So this is a very good point, because one of the challenges that the industry, or an engineer, has to grapple with when adopting a VPU is that, fundamentally, as we've just said, what the VPU is, is the engine. And the analogy holds: I happen to have a friend who actually builds cars, hot rods, and maybe everyone has that one friend who would love to just take an engine and build a car around it. But for me, I'm going to have to go out and buy a car. I need one that's already built, done for me, both because I don't have the ability to build one and because I don't want to build one, even if I could.

Mark Donnigan:

I think a lot of the market is in that same position, and, more importantly, most workflows that I see, and I'm curious to get your reactions and see if it's similar, are using what you could generically call complete solutions today. So then the challenge is: okay, I want to improve my efficiency and my throughput by 10x. How do I do that? Well, I can do it with VPUs. However, I still need all of the software around that.

Mark Donnigan:

I often call it the media processing framework, which you could say is the software transcoder. What I'm interested in, and I'm going to start with you, Alexander, and then let Dominique share their perspective, is the projects you get involved in. I'm not going to steal your thunder, so I'll let you give some insights. But you have to deal with an incredible diversity of standards and input types and formats, sometimes very unusual requirements, and people may ask why that is. Well, it doesn't really matter.

Alexander Leschinsky:

That's the requirement of the project. Yeah, exactly. And sometimes it feels as if most of us in the industry are part of a very aligned bubble of engineers who talk about a very specific, well-rounded set of tools that you can work with.

Mark Donnigan:

But if you go outside of that bubble, in the real world, I would say…

Alexander Leschinsky:

In the real world, yes, there is a lot of other stuff that people still want to have, want to see, and have to work with. We work a lot in tenders, as I think Dominique does as well. What we saw in a tender that we won for the European Parliament, we talked about it before, is something where we have to do deinterlacing. That's the first thing: you wouldn't think deinterlacing would be necessary in 2025, but it happens for one reason or another. At least it's only in the input, not in the output; that's a good thing. Then we have 32 audio channels, which also scales up to a lot: if we have 10 channels, that's 320 audio channels. That's a lot of audio transcoding that we have to do.

Alexander Leschinsky:

In another project, another tender that we have been working on, we got a whole other bunch of combinations that the customer asked for in the pricing sheet, and this included some weird options where we asked him: do you really think this is necessary?

Alexander Leschinsky:

And he said yes: we have specific customers all over the world from whom we have gathered these requirements. For instance, one item where the customer needs a price for transcoding an input format which is UHD 4K with AVC H.264 encoding, interlaced, and he wants to get UHD, interlaced, out of that. That's something you would only expect, I think, at some '80s retro party, some Back to the Future party, something weird like that. But it happens in real life; we see that. And at the same time we see requests for the MPEG-2 video codec, which is also not supported by the VPU card but is still in practice, and there are still use cases where we have to cover it. So there is some functionality that we have to add, either on the CPU or on the GPU, on top of what we do with the VPU card.

Mark Donnigan:

So something, and Dominique, I'm going to get to you, but something the listeners are probably saying is: okay, fine, great, you're using VPUs, you're able to deal with this matrix of unusual format requests, standards, et cetera, but how are you doing that? Are you running this all on the VPU? I don't think the VPU supports interlaced decoding. So why don't you, and actually I'd like both of you, explain your approaches to solving this? Because it's important to understand that you can still be using a VPU and yet support standards and formats that are not natively supported on the card.

Dominique Vosters:

Correct. Indeed, the engine as such is a powerful engine and supports most of the formats. But take deinterlacing, for example: with the current card we do this on the CPU. We build a layer around it to deinterlace and then handle it from there. There are also other formats, and our transcoding solution is integrated with our origin and packager behind it. So in that sense it's a good thing that we have the flexibility to see where it makes most sense to fix certain issues.

Dominique Vosters:

Some parts we need to fix on the transcoding side, but since we feed the signal to our origin and packager anyhow, sometimes it makes sense to handle it on the origin as such, because currently our transcoding solution is integrated with the origin and, for now, not intended as a standalone live transcoder. For offline it's of course a separate case, but for live transcoding it's an integrated solution. And this gives us a real benefit, in my opinion, because then you have one end-to-end system and we can analyze from the source until we deliver it to the CDN, and we will make sure that everything is redundant and resilient and that we handle all the formats. Preferably we do as much as possible on the VPU, but some parts we just take out and handle on the CPU, either on the transcoding side or on the origin side. I think this is, in the end, a really good ecosystem and a really good architecture as well.
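The routing logic both guests describe, keep the heavy encoding on the VPU and divert only what the card cannot do to a CPU stage or the origin, can be sketched roughly as follows. This is an editorial illustration only; all names, capability sets, and stage labels are hypothetical and do not reflect Scalstrm's or anyone's actual API.

```python
# Illustrative sketch: route each input to the stage that can handle it.
# Capability sets below are assumptions for illustration, not a vendor spec.
VPU_DECODERS = {"h264", "h265", "vp9", "av1"}   # assumed VPU-decodable codecs
CPU_ONLY_DECODERS = {"mpeg2"}                   # legacy formats handled off-card

def plan_pipeline(codec: str, interlaced: bool) -> list[str]:
    """Return the ordered processing stages for one input stream."""
    stages = []
    if codec in CPU_ONLY_DECODERS:
        stages.append("cpu-decode")             # format the VPU does not cover
    elif codec in VPU_DECODERS:
        stages.append("vpu-decode")
    else:
        raise ValueError(f"unsupported input codec: {codec}")
    if interlaced:
        stages.append("cpu-deinterlace")        # deinterlacing handled off-card
    stages.append("vpu-encode")                 # heavy lifting stays on the VPU
    stages.append("origin-package")             # hand off to origin/packager
    return stages
```

For example, an interlaced H.264 contribution feed would plan out as `["vpu-decode", "cpu-deinterlace", "vpu-encode", "origin-package"]`: only the deinterlace step leaves the card.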

Alexander Leschinsky:

Yeah, and we love to work with your products, Dominique. I think it's important also to see that you can operate on a very small footprint. You don't need so much capacity on these machines, and you integrate a lot of these services into small footprints, either virtual machines at a cloud provider or on-prem. In contrast, look at how many people are doing naive cloud installations: they have hundreds of services distributed, and it's so easy to lose track and to underestimate the complexity of keeping up with all of that. Doing a lot of things on a very small footprint is an amazing counterpoint to some of the overbloated cloud setups that we've seen with customers as well.

Mark Donnigan:

I love your use of the word "naive," Alexander, because, and you guys must be seeing this in your RFPs and the requests coming in, operational cost is under pressure across the board. It's super fascinating: in the past, operational cost was driven, I'll say very generically, by the reality that the business was under pressure, revenues were down. In other words, just like in our personal lives, if I'm not making as much money, I've got to cut how much I'm spending; it was very practical. Now, what's really fascinating to me, and it's terrible news when you read about someone like a Microsoft or a Meta laying off sometimes 10,000 employees, is that these companies are doing it sometimes in the middle of record profits, and it's a head-scratcher. I can't even imagine the emotion I would feel if I were working at one of those companies and facing it: hang on, you're eliminating my role and thousands of others, and yet we're posting record profits and our stock has never been higher. But that's a whole other issue.

Mark Donnigan:

The point that I'm making, though, is that companies are now mandating that they get fit. This isn't "hey, we're running out of money, so we literally have to cut our operational budgets." This is "why are we spending money just because that's how we've always done it?" In the context of what I just covered, that is bad news if you happen to be one of those affected, but for those of us providing services, products, solutions, and technologies to these organizations, it's actually opening up tremendous opportunities, provided the teams are aware of what's going on. And so your description of the naive cloud, where you just have these big, bloated systems: well, that certainly drove AWS profits for many, many years, but people are waking up, right? Is that what you're seeing?

Alexander Leschinsky:

I don't want to say that all cloud setups are naive. Not at all.

Alexander Leschinsky:

Yeah, that's not what we're saying, and thank you for pointing that out. There are really sophisticated cloud architectures, but I've also seen a lot of bloated setups that simply try to use all the features that are there, without keeping in mind that in the end you have to manage it. Things can get pretty complicated pretty quickly, and then they get expensive as well. That's something you should avoid, and many people don't, and that's where the cloud setup often is not as ideal as it could be. The other thing is that what you say about companies having to watch their profit and their margins is absolutely right, and that is one of the main drivers that makes our customers often ask for on-prem solutions now. There's a trend back to CapEx instead of OpEx with cloud providers. And the interesting thing is, I think we're similar to Scalstrm in a way, in that the core of the application can operate on software only. That's good; that makes it possible to move to a cloud provider and work wherever you want.

Alexander Leschinsky:

But if you have access to infrastructure, or can influence how it is built, or build it yourself, then you can use things like the VPU. And if you go beyond the CPU and operate on a VPU for a certain specific task, video encoding in this case, you can also do the same with other accelerators and cards. That's how we use it in our day-to-day business, where we build these boxes ourselves. We use NVIDIA GPU cards where necessary. So we have several setups where we just use a VPU for video encoding, not the GPU, although it could do it; we reserve the GPU for Whisper-based live speech-to-text transcription. That's something we can hand over to these specialized devices. And then we're using specific network cards from Mellanox where we can offload the actual TLS encryption that we have to do when we deliver a lot of traffic. The TLS encryption that you notice when you go to a browser with HTTPS, the encryption that shows there's a certified host name, takes power, and we'd like to take this off the CPU and offload it to a specialized network card. Those are the things you can do if you have access to these hardware parts, which you do not necessarily have, or need, or know about in a cloud setup. So you have more options to select the tools you use to meet the budget that the customer has.

Alexander Leschinsky:

And as a nice example, I'm just looking here at a small card that I'm currently working on, I don't know if you can see it. That's a small blade carrier, and it has a Raspberry Pi 4 CPU on it. And this here is the Quadra card, which comes in the M.2 form factor. This is an interesting project where we try to squeeze the maximum out of a very tiny footprint. The whole thing draws something like 15 watts for the whole server, and it's just Power over Ethernet. So this is a very tiny power part, and this is something that you can only do in hardware. There are a lot of use cases in the field, on the road, where you need that compute power but only have very little, and not really reliable, power, so you need something that does not consume too much energy. That's the power of having on-prem infrastructure under your control.
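Alexander's 15-watt figure can be sanity-checked against the Power over Ethernet standards: a powered device gets roughly 12.95 W under IEEE 802.3af, 25.5 W under 802.3at (PoE+), and 51 W under 802.3bt Type 3, so a roughly 15 W Pi-plus-Quadra box needs at least PoE+ but fits well within it. A small editorial sketch (the 15 W draw is from the conversation; the function and names are ours):

```python
# Back-of-the-envelope PoE budget check. Power available at the powered
# device per IEEE 802.3af/at/bt; the 15 W device draw is from the episode.
POE_BUDGET_W = {"802.3af": 12.95, "802.3at": 25.5, "802.3bt-type3": 51.0}

def smallest_poe_standard(draw_w):
    """Return the least-capable PoE standard that covers the draw, or None."""
    for name, budget in sorted(POE_BUDGET_W.items(), key=lambda kv: kv[1]):
        if draw_w <= budget:
            return name
    return None
```

A 15 W box therefore lands on 802.3at, with roughly 10 W of headroom to spare.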

Mark Donnigan:

Thank you for showing that, Alexander. I'm of course aware of some of these very interesting and niche applications that you build. But this is exciting for me personally, even though I'm a marketing person.

Mark Donnigan:

I started programming when I was 12. And, by the way, I do say programming because I'm old enough that I taught myself Pascal, COBOL, Fortran; my first year of computer science, I was learning all of those languages. So it's exciting to think about these really unique applications. And not only is that exciting, that's really where the value is transferred. I think one of the things about the cloud is that it sort of abstracts away some of the fun, and so it's really cool when you can look at, like you say, a little Raspberry Pi form factor, put a T1M on an M.2 connector on the board, and power it over Ethernet. I mean, talk about edge, that's about as edge as you can get.

Mark Donnigan:

Yeah, exactly, it's really cool. Well, let's bring the conversation back to the role of open source: the FFmpeg project, which obviously everybody knows, works with, and probably uses or has used in the past; GStreamer; GPAC. There are a number of projects out there, right?

Mark Donnigan:

So I want to ask a two-part question and get each of your responses. Number one: how are you using, or are you using, any of these open-source projects in your own solutions? That's the first part: are you using them, and if so, what, and maybe tell us what you're doing. And the second is: give some insights into where these projects are quite useful, where something like FFmpeg actually is the best tool, when we talk about building this media processing framework around the engine. And then, do you have any insights into where it's maybe not the best tool, where there's either a different project, or where you say, look, we just have to go to commercial solutions because we can't get the performance, the throughput, whatever.

Dominique Vosters:

Yeah. Well, from our side, when the company was founded in 2017, everything was built from scratch. We are not using any open-source components or scripts; everything is built from scratch in optimized code. This is also where the main differentiator on performance is made. If we go to the VPU card, there too, for live transcoding, we fully use the VPU cards through their APIs, so we're not using FFmpeg or any open-source components. To work with the VPU cards, we've used the APIs to interact with the card and then built a whole software layer around it.

Dominique Vosters:

Normally we don't use open source. Coming back to the question of where it makes sense: from time to time, in the past, we had requests for a proof of concept where a specific format needed to be supported. To give an example: RTMP input. At that time it could make sense to put FFmpeg in front just to convert specific signals or content types like that. But normally this is really not the default for us. And even if we were using FFmpeg to support a certain use case, we would always make sure that on our main software line we support it with our own code and not open source. So even if something might be temporarily supported, or we might temporarily use FFmpeg, we will always make sure that we optimize it in our own code and use the APIs to interact with the card. That's a bit of the strategy.

Dominique Vosters:

We've also come across projects where open-source components were used, but there were limitations, perhaps not in FFmpeg as such, but two disadvantages in my opinion. First, flexibility and time to market. If there is an issue on the FFmpeg side, or with whatever open-source component, most likely they will fix it, but it will take time. We have to report it, and it will take a long time before the customer can benefit from the fix.

Dominique Vosters:

The second, as we discussed in the beginning, is redundancy and resilience. What about input failures? If you have an input failure, FFmpeg will just interrupt the stream and resume when the input is there again. Unfortunately, players don't handle that very well. So it's important that you build a layer around it so that you can also cover these use cases, which is harder with open source. I'm not saying it's undoable, but it's pretty hard, and that's why we try to avoid using open-source components as much as possible. In our main product lines you won't find them, but in the past, for proofs of concept, we have sometimes used them.
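The resilience layer Dominique describes boils down to never letting an input gap reach the player: when the live source disappears, the output keeps running on a slate (or the last good frame) until the input returns. A minimal editorial sketch of that watchdog logic, with all names illustrative rather than any product's API:

```python
# Minimal sketch of an input watchdog: keep the output continuous by
# switching to a slate on input loss instead of tearing down the stream.
class InputWatchdog:
    def __init__(self, timeout_s=2.0):
        self.timeout_s = timeout_s      # max tolerated gap between input frames
        self.last_frame_at = None       # timestamp of the last real frame
        self.on_slate = False

    def frame_received(self, now):
        """Call whenever a frame arrives from the live input."""
        self.last_frame_at = now
        self.on_slate = False           # real input is back

    def source_for_output(self, now):
        """Decide what feeds the encoder for this output tick."""
        if self.last_frame_at is None or now - self.last_frame_at > self.timeout_s:
            self.on_slate = True        # input gap detected: pad with slate
        return "slate" if self.on_slate else "live"
```

The key design point is that the encoder's output clock never stops; only the frame source behind it changes, so downstream players see one uninterrupted stream.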

Alexander Leschinsky:

Yeah, on our side it's a bit different because, as a systems integrator, we have to work a lot more with what we find and what is asked for. I would draw some lines. Outside of the core video processing business, we use a lot of open source. We use Linux all the way down, which is one of the biggest open-source projects; we use NGINX, which is open source; we are using Postgres, MySQL. So a lot of open-source components are an important backbone of what we do. Sometimes we have commercial support for them, sometimes we use commercial editions, but there is a lot of open-source technology under the hood. For the core video processing, we usually draw a line between live and VOD. For VOD, when the input formats are very controlled and defined, and that resonates with what you said, Dominique, about a customer who has issues with very different sources, if the source is controlled and uniform, then you can get a lot done with FFmpeg for video-on-demand transcoding, and a lot of our customers do. It's often difficult, unless you need DRM or some fancy stuff, to persuade a customer to use something other than FFmpeg, because it's already working very well.

Alexander Leschinsky:

So most of what we do with FFmpeg is in the on-demand transcoding world, with or without NETINT cards, so there is a good overlap. We have also been working with Eyevinn from Stockholm recently on getting the NETINT cards ready to be used from within the SVT Encore project. That's also something we have been working on for on-demand transcoding. For live transcoding, we simply use FFmpeg in a lab environment, where we just have to set up something, test something, or mimic something that a customer is doing.

Alexander Leschinsky:

But for real production work we are using commercial software, in this case the SDK from the Norsk guys, and one of the biggest reasons for that is that we often have special needs that we need fulfilled very quickly, and Norsk can provide that. It's really great to see how quickly we can adapt to a certain situation compared with what you would get from open source. The second thing is telemetry and insight. Especially with live transcoding, so many things can go wrong, so many things can build up: there is jitter, there is latency, there is packet loss. All these things matter and have a huge impact on the end quality, and you need full transparency into what's going on. That's something we cannot really achieve with FFmpeg in the same way that we can with other tools, including the Norsk SDK.

Mark Donnigan:

Yeah, it's interesting. By no means are we trying to say open source bad or open source good, commercial bad or commercial good. I know we would all agree that what's really paramount, really critical, is that we use the right tools, the right tools for the job.

Mark Donnigan:

And I really appreciate, it resonates a lot, the distinction between a VOD workflow and a live workflow. Invariably, where systems fall over, and I know you guys see this because we've talked about it, you get under the hood and you find that 90% of the issue is the wrong architecture, the wrong set of technologies. Very often, looking in from the outside, and there have been some recent high-profile live stream failures, people are so quick to make pronouncements as to what it was, and very often it's simpler than the big things someone might assume.

Alexander Leschinsky:

Absolutely, but there is another angle on the use of open source that we are often confronted with. I totally understand if a customer is using open source because they want to save money; it's available for free. In the end, that is not always a fair comparison, because also for open source software somebody has to pay, somebody has to do the engineering work, and somebody needs to sponsor it. And it depends if we are looking at Linux, where you have all the huge enterprises like Microsoft and IBM investing in it, or at smaller tools that are also important but where there is no huge enterprise behind them that pays for it. And then using these FFmpeg tools just to save money and not giving anything back, that is like being a scrounger, in a way. So that is something that I feel is a bit unfair towards the open source community, whose tools are then being used for free without honoring that there is some work involved as well.

Mark Donnigan:

Yeah, yeah, completely agree there. Well, this is a great discussion, and I know that if this were an actual fireside chat and we were sitting in some idyllic, beautiful place with a little fire going and our drink of choice, we would just keep talking and talking and have a wonderful discussion. But I think it'd be interesting to end on this. Maybe each of you can share, call it some insights, call it some integration or migration tips and tricks, for someone who's listening who's saying, OK, I'm on some sort of encoder that's been in a rack for many, many years, but they need to modernize. Maybe it's a cloud solution, maybe it's literally an OVP, an online video platform. Whatever it is.

Mark Donnigan:

They want to migrate for better flexibility, lower cost, all of the reasons that we talked about. Alexander, maybe you can start. Just share some of the things that they should be thinking about, maybe some pitfalls that you had to step around that you can help them with. Give your insights for those who are thinking about making a migration to VPU and how they can make it easier on themselves.

Alexander Leschinsky:

So I think the first part that is very important is that they get their requirements right, that they understand what they really want. It's easy to mix that up with marketing material that you get from other solutions, and you think what I want is AWS Media Services, but that's not what you want, that's not your requirement. What is your actual requirement? Getting to the core of what the customer really needs and what they want to do is not as easy as it might sound. So the first step really is to make up your mind: what do you really need? And then put a business value to it. Technicians and technology experts at companies have very specific ideas about what they want to see, what they would like to implement, what they have done in the lab, what they have tried themselves in a lab environment.

Alexander Leschinsky:

But is it what the business needs in terms of long-term profit or value? That's not always the case. We often see that the people who have the budget depend on the technicians and technology people to define their requirements, and there is sometimes a mismatch, and if it's not cleared up in the beginning, it falls on your feet at the end of the project. So I think that's the most important part: get your requirements right, get your business value right. What do you want to achieve? And that's something that an educated discussion can very quickly identify in the beginning, before you even go into any technical details.

Mark Donnigan:

Yeah, I love it. That really resonates, you know, as a proof point for those of us that have worked in and around the industry for a while, especially with some of the very, very large platforms and the hyperscalers. Over time they have built incredibly capable teams, usually a small, or even not so small, group of PhDs who are really only looking after video standards. They're into the details of their encoder, encoder selection, sometimes even building encoders, or certainly working very closely with the partners they're licensing products from. There's a really interesting trend that I've seen recently, over the last maybe year and a half, and that is that these groups still exist. It's true, some companies are downsizing a little bit, or maybe they're reallocating, saying, hey, let's move some of those folks to go work in a different part of the streaming operation. But the focus of the group has shifted. Yes, they're still involved in standards and they're still doing a lot of the activities you would expect from an advanced technology and encoding team. In other words, a whole group of PhDs who have dedicated their whole life and career to studying video encoding.

Mark Donnigan:

But I'm thinking of one company in particular where, all of a sudden, these PhDs are now a part of building a whole new product for this company.

Mark Donnigan:

That is very important and very strategic, and previously that would have been completely outside of their remit. It's like, hey, our job is to evaluate encoders, stay up to date on the latest developments of AV1, AV2, VVC, LCEVC, run all the benchmarks, that's our job. And the reason that I think this is really significant is that it's an example of where the technologists are getting more aligned to the business, because in this particular example I'm thinking of, what they are working on is really fundamental to the business and is looking to unlock, not potentially but already, billions of dollars of value and revenue streams. And so it's aligned with the business. So I think one of the tips that you really hit on there, Alexander, is to step back for a moment from the technology discussion and say, what is the business discussion, what's the impact, and then what are the best technology choices to drive that?

Alexander Leschinsky:

I love it. I think it rings with Parkinson's law, the idea that any work expands to fill the amount of time allotted for it. The same is true for brainpower. If you throw a lot of brainpower and PhDs at a problem, they take their time; there is enough to talk about. Is there enough business value to justify that? That's the crucial question.

Mark Donnigan:

Yeah, yeah, that's great. Well, Dominique, what would you recommend?

Dominique Vosters:

No, I agree, and it's important, when you start on a project, that you know what the end goal should be. And then, of course, for us it's slightly different. Alexander is more in the integration business, putting pieces and parts together and selecting the right parts. For us, we need to know what the customer's requirements are, whether we can fulfill them or not, and whether we need to put something on the roadmap or not. And also the business side, I think, is an important one. What we see lately is that total cost of ownership becomes way more important on the business side than, let's say, two or three years ago. Two or three years ago, decisions were made slightly differently. These days it's total cost of ownership, and whether you can cut down the hardware or resources required. If it's on-prem, it's hardware; if it's in the cloud, it's resources. In the end, it's power that you need. And if you can cut that down on the transcoding side with VPU cards, where you need much less hardware or power, you can cut down the cost there. On the origin side, the same. There we also perform well, so we can cut down the amount of hardware. And if you then take total cost of ownership, we typically end up pretty low compared to competitors.

Dominique Vosters:

And then business is, as Alexander mentioned, an important topic where decisions are made. It's not only the technical side; of course, you need to be able to fulfill the must-have requirements. And then business-wise, we try to be the lowest in total cost of ownership. Software, yeah, that you can discuss, but the less hardware we need, the lower the cost and the power consumption, which means sustainability. These days that's also a topic. I suppose it goes a bit hand in hand, but still it's an important one: less horsepower, less power, more sustainable. So yeah, I agree.

Mark Donnigan:

Amazing. Yeah, well, thank you for the insights. Well, gentlemen, as always, this was a wonderful discussion, and I think we'll wrap it up here. I do want to encourage all of the listeners: if you are going to be at IBC, if you're going to be in Amsterdam in September, by the way, a beautiful time to be in Amsterdam, maybe wrap a little vacation around it.

Mark Donnigan:

If you're going to be there, make sure you come by the VPU ecosystem booth. This is the NETINT booth. It is going to be like a party. There are eight companies that are going to be there. G&L will be there, Scalstrm will be there, and there are going to be six others, so you'll be able to just see what's going on in the ecosystem. Obviously, the NETINT team will be there. We're going to be out in force. You know, guys, I don't know if you know this, we actually just booked one of those holographic displays, so we're going to have, right on the corner, a life-size hologram of, well, it might be me, I don't know, I still have to create the video. But those are always attention grabbers as you walk along. It's really going to be great.

Mark Donnigan:

So, yeah, thank you, as always, for listening. We love our listeners and we appreciate your support. Without you, we'd just be talking to ourselves, which maybe we'd still be doing. But, as always, thank you to the Voices of Video audience, and with that, be safe, be well, and hopefully we'll see you all in September in Amsterdam.

Dominique Vosters:

This episode of Voices of Video is brought to you by NETINT Technologies. If you are looking for cutting-edge video encoding solutions, check out NETINT's products at netint.com.
