Voices of Video

From Campbell to Codensity: A Practical Hero’s Journey in Video Encoding

NETINT Technologies Season 3 Episode 29

What if a hardware roadmap could read like a myth?
We take Joseph Campbell’s Hero’s Journey and map it to a concrete engineering pivot - from life in the ordinary world of CPU/GPU encoding to a high-density, power-efficient future with NETINT’s Codensity G5-based VPUs. We talk through the initial reluctance to touch specialized hardware, the mentors and SDKs that changed our minds, and the exact moment we crossed the threshold by installing drivers, testing real inputs, and pushing the cards into live workflows.

From there, the plot thickens: allies like Norsk Video, Supermicro, Gigabyte, and Akamai helped us scale, while enemies showed up as driver quirks, 4:2:0 vs. 4:2:2 trade-offs, and new mental models that don’t behave like CPUs or GPUs.
The dragon’s den wasn’t a competitor - it was public procurement. Tenders forced us to design for variability, not one-size-fits-all. That pressure shaped the treasure we brought back: four NETINT form factors that express the same transcoding engine in different ways.

We break down where each fits:

·       PCIe T1A - broad compatibility

·       T2A - dual-ASIC throughput

·       U.2 T1U - extreme density when vendor policies allow

·       M.2 T1M - tiny blade for edge and contribution with PoE, low power, and surprising capacity

We share the software split that actually works in production: NORSK for live and live-to-file pipelines, FFmpeg for VOD encoding - plus how a composable media architecture runs both on-prem and in the cloud. With Akamai’s NETINT-enabled compute options, hybrid deployments become practical, not aspirational.
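To make the "FFmpeg for VOD" half of that split concrete, here is a minimal sketch of building a VOD transcode command that prefers a hardware encoder when one is present. The `h265_ni_quadra_enc` encoder name is an assumption based on NETINT's FFmpeg patches (check `ffmpeg -encoders` on your build), and the file names and bitrate are made up for illustration:

```python
# Sketch: build an FFmpeg VOD transcode command, preferring a NETINT
# (Quadra) hardware encoder when one is available, else libx265.
# The hardware encoder name is an ASSUMPTION from NETINT's FFmpeg
# patches; verify it against `ffmpeg -encoders` before relying on it.

def build_vod_command(src: str, dst: str, use_vpu: bool) -> list[str]:
    encoder = "h265_ni_quadra_enc" if use_vpu else "libx265"  # assumed name
    return [
        "ffmpeg", "-y",
        "-i", src,           # input mezzanine file
        "-c:v", encoder,     # hardware or software HEVC encoder
        "-b:v", "5M",        # illustrative target bitrate
        "-c:a", "copy",      # pass audio through untouched
        dst,
    ]

if __name__ == "__main__":
    print(" ".join(build_vod_command("master.mp4", "out.mp4", use_vpu=True)))
```

The point of keeping the command a pure function of its inputs is that the same pipeline code can run against hardware or software encoders, which is exactly the hybrid story the episode describes.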

The story lands with a proof point: G&L deploying at scale for the European Parliament - 30 concurrent sessions, 32 audio tracks each - across Brussels, Strasbourg, and German cloud regions, with Linode as the control plane.

DOWNLOAD PRESENTATION: https://info.netint.com/hubfs/downloads/IBC25-GnL-Hero-with-a-thousand-faces.pdf

If you’re weighing density, power budgets, or vendor constraints, this journey offers a clear map, hard-won lessons, and a toolkit you can adapt. Subscribe, share with your team, and leave a review - what’s your dragon, and which form factor would you choose first?

Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.

Voices of Video:

Voices of video. Voices of video. The voices of video. Voices of video.

Alexander Leschinsky:

Yeah, hi everybody. Thanks for being here. So yeah, the hero with a thousand faces. Thanks for that extra applause here. So, the hero with a thousand faces. My name is Alexander Leschinsky. I'm the chief storyteller at G&L. G&L is a systems integrator and managed service provider. What I'm telling you about today is the 12 steps of a hero's journey. This was a book written by Joseph Campbell in 1949, and it had a huge impact on storytelling in Hollywood and beyond. What he did is he found the 12 prototypical steps of a hero, and that makes it perfectly clear that he did not come from Cologne, our home city, because in Cologne it would have been 11, but that's a different story that we tell at a different time. What are those 12 steps of the prototypical hero? The first step is you start in the ordinary world. You're no hero yet. You're just an ordinary guy. But then something happens which calls you to adventure. Something comes from the outside and challenges your current views and situation. You might not want to accept that. So you say, I don't want to bring that ring to Mordor, I feel uncomfortable with that. But with the help of the mentor, you accept your new role and you step into something that you have not done before. And after you do that, at some point you cross the threshold. So you cross from the ordinary world where you started into some magical realm, into some magical world where your real adventures will start. In that magical world, you meet allies that you get help from and that you work with, you have some enemies, and you have some tests that make you a real hero. This phase is the hero gym that makes you stronger and prepares you for anything that will come. At some point in your journey, you will then approach the dragon's den. Things are tightening up, things are getting more exciting, and you're preparing for the last step of your journey.
And at that point, you meet the ordeal. This is the end boss that you have to fight. And it's a huge fight, and it takes a lot of effort on your side. It might go wrong, it might go well, but that's the high point of this 12-step story. After that, usually you win, and then you can grab your reward, because everything has to have a reward; otherwise, why would you do it in the first place? Seizing this treasure is the ninth step in this 12-step story. After that, you have to go back home, and this is usually not an easy ride. There are some things that can still go wrong, there are some obstacles that you have to overcome. But being a hero, it's much easier for you than the journey you started with. That's the road back. After that, there comes something like you die, you're resurrected, you're in some way transcended. You're not the same person as before. Something happens to you, and this ends with returning the boon. Everybody in the ordinary world now benefits from what you have brought back from that magical country. Those are the 12 steps of Joseph Campbell. Back to IBC 2025. What does that mean, and what does that have to do with NETINT and G&L? The ordinary world for us is the day-to-day engineering business. At the start, we were just working with the typical services like Elemental MediaLive, FFmpeg, Intel and AMD CPUs. We had been using Nvidia for GPU encoding. That was our day-to-day business. At some point, we were approached by two nice guys from id3as. Those are the founders of the company id3as, and they said, hey, we have a software SDK for media processing that we are using a lot. And they said, we have something under the hood that might help you, because it's hardware acceleration based on an ASIC, from NETINT. Why don't you try that? But then came the refusal of the call, and we said, no, we want to do it like we did before.
We just used CPUs, we used GPUs, we don't want specialized hardware, that doesn't work. So we needed to meet the mentors. And when we met the mentors and learned about the background of the story and what was in that ASIC ecosystem, it helped us to understand: okay, there is more to it, and we should reconsider what we have been doing in terms of hardware and software and GPU encoding. Crossing the threshold, in our case, was: install the SDK and try to actually work with it. Get your hands dirty, apply this piece of hardware and see how it fits into your workflow. And so we did that with the help of the NETINT SDK and made our first steps. On that way, we met allies and enemies, and had a lot of tests. We had the NETINT support team that helped us when we had issues with the drivers. We had the Norsk team that provided the SDK that was working. We had hardware partners from Supermicro and Gigabyte. We had Akamai, who have been using the cards in their own services. We had some enemies. The devil is in the details. The driver is sometimes a bit difficult to work with; you have to really know what you're doing there. We have some learned concepts that come from CPU encoding and GPU encoding, and we had to change our chain of thought and how we approach things. And then we had a lot of tests, like: what is happening if you have a special input? You have to learn it. Does it support 4:2:2? For now, it does only 4:2:0. So some workloads would not be a good fit. These are things that we learned. Those were the tests that made us more of a VPU hero. And then we were approaching the dragon's den. And the dragon's den for us, as a systems integrator and managed service provider, is public procurement. So tenders are the enemy, the end boss that you have to fight. The ordeal is customers asking for services in tenders, and every customer comes with new requirements, and it's difficult to find a solution that fits every customer.
So as we learned again and again, it's good to have something other than just a single solution. For us, when we fought this and learned how to integrate the different VPUs into our ecosystem, we got a reward, and that is four different form factors that we found in the dragon's den. These are four cards that we work with, and this is practically the core of this presentation. So this is the treasure. And what do we have here? The inner sanctum is an ASIC, a single chip, the Codensity G5 chip that we have heard a lot about. This chip does the actual transcoding, supports multiple inputs, supports decoding, has an AI engine on board, and can do 8K resolution. This chip is then installed on different form factors of the actual NETINT card, together with the drivers. So it's not a thousand faces, it's four faces actually, but the title is better with a thousand. The first is a PCI Express card, going into the slots where your extension cards usually go. It has one single ASIC on it, so it's one Codensity G5. There is a sister card, the T2A, which comes with two Codensity chips, so it's just the bigger, double-sized model. The next one is the T1U, which goes into a U.2 slot, where usually your NVMe drives would go if you have a one-rack-unit or two-rack-unit server. So you can put a lot of them into a single server. And the last one is the Quadra T1M, which is an M.2 model. M.2 is usually where your boot drive is attached. So this is a different form factor, a smaller one. Which of these Quadras now for which use case? If you want maximum density, and if you have a very permissive server vendor that allows you to install individual drives into these U.2 slots, the T1U is the model to go to. You can just put it into a sled, slide it into your server, and then you can get up to 12 of these devices into a single rack unit.
Each of them has 32 streams of 1080p encoding capacity; times 10 or times 12, that gives you well beyond 300 streams that you can encode on a single server. A permissive server vendor means a vendor that allows you to install hardware into these slots that you have not bought from them. Not all of the vendors support that. That's the crucial distinction here. If you have a server vendor, like for instance HPE or Dell, that does not allow that, you don't get these sleds on the market. You can't buy them. You have to buy everything from Dell, everything from HPE, to keep the support. In that case, you can also use the T1A or the T2A and put them in the PCI Express slots. Actually, as I learned today, there are some servers that are meant to host a lot of GPUs and have 16 PCIe slots. Then you can put 16 of those into one of these machines. This is also an option. But for us, this is the card that works in most servers of any brand, and that most of the customers who get test specimens from us are using. And the last one is a tiny enclosure. This is the T1M. So this is the whole Quadra encoding unit, the ASIC, on an M.2 module. And this beast here is a rack blade, a compute blade from Uptime, a tiny computer. Below this blue heatsink here is the CPU. It's a Raspberry Pi four-core CM4 compute module. And the size of it is amazing. This is the whole computer here. I will hand this over to somebody else and continue. Okay. So the compute server itself is a Raspberry Pi, so you can put Linux on it. The whole blade is powered by Power over Ethernet. Thank you. So this means you just power it with Power over Ethernet. You can put 20 of those in one rack unit, and each of them has the capacity to encode 32 1080p streams in HEVC or in AV1. [Audience: how many gigabits?] Yes, sorry. This comes with one gigabit, which is not ideal, I completely agree.
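The density figures in this section come down to simple multiplication. As a quick sanity check (stream counts are the ones quoted in the talk, not anything we measured), a sketch:

```python
# Back-of-the-envelope density check for the figures quoted in the talk:
# each Quadra device handles roughly 32 concurrent 1080p encodes.
STREAMS_PER_DEVICE = 32

u2_per_rack_unit = 12      # T1U cards in the U.2 bays of a 1RU server
blades_per_rack_unit = 20  # T1M-equipped PoE compute blades per rack unit
pcie_gpu_server = 16       # T1A cards in a many-slot GPU-oriented server

print(u2_per_rack_unit * STREAMS_PER_DEVICE)      # "well beyond 300" streams
print(blades_per_rack_unit * STREAMS_PER_DEVICE)  # streams per RU of blades
print(pcie_gpu_server * STREAMS_PER_DEVICE)       # streams per PCIe chassis
print(blades_per_rack_unit * 20)                  # watts, at <20 W per blade
```

This works out to 384 streams per rack unit for the U.2 route, 640 for a full shelf of blades, and 512 for the 16-slot PCIe chassis, at an upper bound of roughly 400 W for the blade shelf.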
So this is not for your SMPTE 2110 workflow. But if you need small enclosures and you are in the field, then this is a perfect fit, because the whole power consumption of this is below 20 watts. So it's really tiny. Power over Ethernet makes it very convenient. And with one gigabit, you can do a lot of field streaming. As a contribution encoder, this is perfect. Or if you use it as a distribution encoder and you're comfortable with these bit rates, this is perfectly fine. It comes with 3D-printed enclosures, so it's a nice thing. And I can pass it around so you can have a touch. Thank you. Okay, and a bonus: all these things are also available in the cloud. So far it's only Akamai. Akamai has some compute plans where you have a virtual machine with a NETINT card in it. So you can also use it in the cloud, which fits perfectly well with how we build our software. Our composable media processing architecture usually can run both on-prem and in the cloud. And if we have a situation where we have a NETINT card, we can use that, but we can also just use CPU or GPU. So this fits very well with what we're usually doing. The road back. "The labors of the mountains lie behind us. Before us stretch the labors of the plains," from one of my favorite poems. The labors of the plains, that is: you have mastered the most problematic difficulties, but now you have to deploy it in your day-to-day business. And then other problems start to arise. For us, it's again working on the driver implementation details, choosing the right media processing framework. We do a lot of work with FFmpeg, but we do most of our work in live streaming with Norsk. You can also use the low-level C interface for these cards.
What we found out the hard way, by trying and working with it: for anything that is live processing or live-to-file, we are using the Norsk SDK from id3as. That's what all our live processing is based on. For on-demand processing, when we just do on-demand encoding, we are using FFmpeg. So we combine these two, and that is for us the perfect combination. But the good thing is there are other options that you can try to see what fits best in your case. So G&L was reborn with new hardware skills, and that made us better suited for tendering for some tasks. Return with the boon. We deployed these cards. The first project we used them in at mass scale was a project that we did for the European Parliament. We won a tender where we had to do all the live streaming for the European Parliament. They have up to 30 concurrent sessions that they are streaming. Each session comes with 32 audio channels, so it's a lot of audio and a lot of video encoding that has to be done. It's a mixture of on-prem infrastructure that we have in Brussels and Strasbourg, and a mixture of cloud services, some that we operate ourselves in Düsseldorf and Frankfurt in Germany, but we are also using the Akamai Linode cloud as a control plane. So it's a nice project where we introduced this for the first time on a large scale. That's it. Thank you very much. And if you want to talk more, I'm here at the booth. Thank you.
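The "use the NETINT card if it's there, otherwise fall back to CPU or GPU" idea behind the composable architecture can be sketched as a tiny backend selector. Everything here is illustrative: detection is stubbed out with booleans (real code would probe the driver), and the backend names are ours, not G&L's actual code.

```python
# Sketch of the fallback described in the talk: the same pipeline runs
# on-prem or in the cloud, using a NETINT VPU when one is present and
# plain GPU or CPU encoding otherwise. Detection is intentionally
# stubbed; probing real hardware is out of scope for this sketch.

def pick_backend(has_vpu: bool, has_gpu: bool) -> str:
    if has_vpu:
        return "vpu"  # e.g. an Akamai instance with a NETINT card attached
    if has_gpu:
        return "gpu"  # NVENC-style hardware encoding
    return "cpu"      # software fallback, lowest density

if __name__ == "__main__":
    print(pick_backend(has_vpu=True, has_gpu=False))   # vpu
    print(pick_backend(has_vpu=False, has_gpu=False))  # cpu
```

Keeping the selection in one pure function is what makes the "practical, not aspirational" hybrid deployment possible: the rest of the pipeline never needs to know which backend it got.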

Voices of Video:

This episode of Voices of Video is brought to you by NETINT Technologies. If you are looking for cutting-edge video encoding solutions, check out NETINT's products at netint.com.