
Voices of Video
Explore the inner workings of video technology with Voices of Video: Inside the Tech. This podcast gathers industry experts and innovators to examine every facet of video technology, from decoding and encoding processes to the latest advancements in hardware versus software processing and codecs. Alongside these technical insights, we dive into practical techniques, emerging trends, and industry-shaping facts that define the future of video.
Ideal for engineers, developers, and tech enthusiasts, each episode offers hands-on advice and the in-depth knowledge you need to excel in today’s fast-evolving video landscape. Join us to master the tools, technologies, and trends driving the future of digital video.
From Brussels to the Cloud: When Government Video Gets Smart
A groundbreaking collaboration between G&L and the European Parliament has transformed parliamentary broadcasting with a sophisticated streaming platform unlike anything seen before. Alexander Leschinsky unveils the technical marvel his team engineered to handle an extraordinary challenge: delivering 30 live channels, each with 32 audio tracks for different language interpretations, totaling nearly 1,000 simultaneous audio streams.
The architecture combines sophisticated on-premises hardware in Brussels and Strasbourg with cloud-based processing capabilities, creating a true hybrid solution that balances security with scalability. Leschinsky walks us through how they leverage NETINT VPUs for efficient video processing alongside powerful 128-core ARM CPUs that handle the massive audio transcoding workload. This split approach creates an environmentally responsible solution that meets the Parliament's non-negotiable reliability requirements.
Security stands as a cornerstone of this implementation, with Leschinsky detailing how G&L's ISO 27001 certified practices protect the parliamentary streams while enabling flexibility for authorized users. Parliamentary staff can now easily clip and share specific moments from lengthy sessions across social media platforms, making governmental proceedings more accessible to citizens. The system's sophisticated role-based access controls and comprehensive auditing ensure accountability while maintaining operational efficiency.
Perhaps most impressive is the future-proof hybrid architecture that allows identical applications to run seamlessly across on-premises hardware and Akamai's Connected Cloud. This approach eliminates geographical limitations and provides resilience against hardware availability constraints that often plague GPU-dependent workflows. If you're wrestling with complex media processing challenges that demand security, reliability, and flexibility, this episode reveals how innovative integration of specialized technologies can deliver breakthrough solutions.
Want to learn more? Visit G&L and NETINT at NAB booth W3531 or attend the Streaming Summit, where Leschinsky will share additional insights with Akamai's Shawn Michaels and NETINT's Mark Donnigan.
Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.
Speaker 2:Voices of Video.
Speaker 1:Well, welcome back to this super special edition of Voices of Video. Now, if you've been following us, you will know that we are building up to NAB. And don't worry if you're listening to this or watching this after NAB, everything presented in all of these episodes, including this very special one, is still applicable, so don't click away. But if you're watching this before NAB, you will definitely want to come to the NETINT booth and check out our partners, starting with G&L. Today I'm joined by Alexander Leschinsky, who we've talked to before. Alex, it's been great to speak with you in the past, and I'm happy to have you here on Voices of Video, so welcome.
Speaker 2:Yeah, thanks for having me again. It's an honor.
Speaker 1:Yeah, absolutely. Well, we of course have worked together and bumped into each other for many years now. But you're doing some really interesting things with NETINT VPUs, and I know you've prepared a case study, in fact a press release. It just went out, I think yesterday, about this project, right?
Speaker 1:A couple of days ago, yeah. That's great, so it's hot off the press. Maybe we should just start here. I'm sure a lot of our listeners know who G&L is, and they probably recognize your name and have heard you speak. You're quite prolific in the industry. But maybe give a really quick intro to G&L, and then let's jump into this project. Sure.
Speaker 2:So G&L acts as a systems integrator and managed service provider. We build complex media and metadata processing applications, usually with partners. In a typical project we have five to 15 partners. We integrate them and build solutions with a unified user interface and API, so that the customer gets something that actually works, that they can start using immediately, instead of just getting different pieces that they have to integrate themselves.
Speaker 1:Yeah, it's important work that you guys are doing, for sure. So, I guess, as they say, without further ado, everyone likes to hear about real-world applications for products and technologies, and I know you've got an amazing project to go over. So why don't you tell us what you built?
Speaker 2:Absolutely. So what I want to talk about is a streaming platform that we built for the European Parliament. It came out of a tender. They had been doing something similar in the past, but they needed a new vendor, so they did a tender. We won that tender and implemented the platform in record time, a couple of months for a very complex setup. The platform has been in production since February 2025. We're only now starting to talk about it, because obviously it had to be proven that everything runs well. It does, so we can now talk about it.
Speaker 2:The main use case is that they have a lot of different rooms in which parliamentary activities take place. Some are the large plenary rooms where the full sessions happen, but they also have working groups and smaller meetings that can be broadcast as well. So for us, it's a collection of something like 30 different virtual rooms that we connect to. They can map those to physical rooms in the different buildings in the cities that they cover; they are in Strasbourg, in France, and in Brussels, in Belgium. So we connect there with real on-prem hardware. We do live streaming, we do clipping of VOD content, we forward things to social media targets in a reliable, very modular way. Those were some of the requirements that the Parliament had.
Speaker 1:Yeah, I like that. Reliable must not fail.
Speaker 2:Yeah, exactly. That's a pretty hard requirement. It is, it is. Absolutely no negotiation on that one.
Speaker 1:No, no.
Speaker 2:And I think it's coming at a time when the European Union is forced to become more self-reliant. So we are here at a very interesting time to provide that service, and I'm curious to see what kind of information we are going to stream on that channel.
Speaker 1:Yeah, interesting.
Speaker 2:Definitely, and it's huge for a single governmental body in terms of the figures that we have here. It's 30 live channels; they have some tens of rooms more, but they are mapped to 30 virtual channels. Each comes with 32 audio tracks, because they are live-translating everything with real interpreters. So we get up to 32 audio tracks per channel, and 30 channels by 32 audio tracks means roughly 960 audio tracks that we have to encode and transcode. That's a lot, a real lot. The video channels are being transcoded using NETINT VPUs. The audio is being transcoded directly on the Ampere CPUs that we are using, so we are really making use of the many cores that these CPUs have.
Speaker 1:Lots of ARM cores. Yeah, how many cores are in the machines that you're using?
Speaker 2:So we are using single CPU servers and each CPU has 128 cores and we are utilizing those cores. Amazing.
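The arithmetic behind those figures can be sketched in a few lines. The channel, track, and core counts come from the conversation above; the per-core audio-transcode capacity is a made-up illustrative figure, since the real number depends on codec and settings:

```python
# Back-of-the-envelope capacity sketch for the audio workload described above.
# CHANNELS, TRACKS_PER_CHANNEL, and CORES_PER_SERVER come from the interview;
# TRACKS_PER_CORE is a hypothetical illustration, not a measured value.
CHANNELS = 30
TRACKS_PER_CHANNEL = 32
CORES_PER_SERVER = 128
TRACKS_PER_CORE = 4  # assume one core comfortably handles 4 audio transcodes

total_tracks = CHANNELS * TRACKS_PER_CHANNEL      # 960 audio tracks
cores_needed = -(-total_tracks // TRACKS_PER_CORE)   # ceiling division
servers_needed = -(-cores_needed // CORES_PER_SERVER)

print(total_tracks, cores_needed, servers_needed)  # 960 240 2
```

Under these toy assumptions, the whole audio plane fits on a pair of 128-core servers, with the video offloaded to the VPUs; the real deployment will of course size differently.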
Speaker 1:Yeah, just as a quick aside, this really is a tremendous showcase for high-density core counts in a CPU, and of course ARM is really the only architecture that can support that, at least support it economically and in a green way, without needing a power plant to run the server.
Speaker 2:Absolutely. So the audio encoding is part of what we do on the CPU. The other part is that, for some historical reasons, the signal that we get is at times still interlaced.
Speaker 1:So we have to deinterlace it.
Speaker 2:So deinterlacing and audio encoding are things that we rely on the CPU to do. And that's a really good fit, so the cores are not just idling around; they're really doing some work at low power. That's a very good combination with the acceleration that we get from the VPUs.
Speaker 1:Yeah, and it's great, because when you're offloading all that video encoding and scaling, essentially everything needed for the video portion of transcoding, you effectively free up not 100% of the CPU, but upwards of 80%, even 85% of the CPU that would otherwise just be sitting there idle. So if you have a big, powerful machine with a lot of cores, like the Ampere-based servers that you're using, it's perfect for the application.
Speaker 2:Yeah, and we are just in the process of ordering some of the new Supermicro servers with the new Ampere CPUs that have even more cores, because we have a real need for that. And in combination, we would have needed many more servers if we didn't have the ASIC acceleration of the NETINT VPUs. That's a real game changer for us. And this is the whole workflow: we have an integration into some existing systems, so we get event metadata about which sessions are going to be played, and we have a content management system that we built especially for this.
Speaker 2:That content management system receives these metadata elements and then controls two parts. One is a set of on-prem encoders, and we call them AVPUs, Audio Video Processing Units. The NETINT VPUs are part of that. So we have a couple of them on-prem in Brussels and Strasbourg, and they have SDI capture cards on board so that we can get the signals. They push a single-bitrate stream via SRT over to some cloud transcoders that we also operate ourselves, so it's hardware that we control, and these two work hand in hand. The on-prem encoders feed a 24/7 ring buffer so that nothing is lost, and they feed an internal multicast network that the European Parliament operates, while the cloud transcoders transcode to the ABR sets and then push to the CDN and the social media streams, and provide clipping and email functionality.
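The contribution leg described here, an on-prem encoder pushing a single already-encoded bitrate over SRT to a cloud transcoder, can be sketched as a small command builder. The host, port, and stream ID are hypothetical placeholders; G&L's actual encoder settings are not public:

```python
# Sketch of the SRT contribution leg described above: an on-prem encoder
# pushing a single, pre-encoded bitrate to a cloud transcoder.
# Host, port, and stream ID are hypothetical placeholders.
def build_srt_push(source: str, host: str, port: int, stream_id: str) -> list:
    """Build an ffmpeg command that pushes `source` over SRT in caller mode."""
    url = f"srt://{host}:{port}?mode=caller&streamid={stream_id}"
    return [
        "ffmpeg", "-re", "-i", source,
        "-c", "copy",          # contribution stream is passed through, not re-encoded
        "-f", "mpegts", url,   # SRT commonly carries an MPEG-TS payload
    ]

cmd = build_srt_push("room01.ts", "transcoder.example.net", 9000, "room01")
print(" ".join(cmd))
```

In the real system the ABR ladder is produced on the receiving side, which is why the sketch only copies the elementary streams instead of re-encoding.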
Speaker 2:Let me just go through these slides quickly, because they are showing the same thing. This is the actual distribution of the data centers in the different cities. As you might know, the European Parliament has offices in Brussels and Strasbourg, also for historical and contractual reasons. We have one data center in Strasbourg that the Parliament is providing, we have two in Brussels, and a different number of SDI inputs. And it's all SDI, also for security reasons, so that we do not interface with their internal networking; it's completely secure and separated. Then we control everything from Akamai cloud servers in Frankfurt and in Amsterdam. So it's a redundant multi-site cloud management platform that we built on Akamai's infrastructure, and it communicates with our own AVPU transcoders that do the ABR encoding, which we operate in Frankfurt and in Düsseldorf, with enough distance between them to be safe in terms of natural catastrophes. Those push to the global CDN, which is Akamai again in this case, and to social media, and social media is Facebook and LinkedIn and X. Sure, sure.
Speaker 2:And the interesting part is that they have some really long meetings. A plenary session can take many, many hours, so they wanted something easy for a special set of operators who can just pick out what they need, because all the members of parliament have their own staff, and they want to either live-stream or download only what their member of parliament is saying or presenting. So the setup here is: you take the live stream, and then you have a group of online social media editors who have just the right to access these streams and forward them to social media. So in a long live meeting, you could pick out one speaker in Spanish and push it to Facebook, and then at another time another member of parliament asks their team to push the German stream to Facebook over another timeframe. They can also clip this, download it, and send emails, so they have a really flexible system here, no manual intervention, and they can do everything while the stream is still going on, while the session is being broadcast.
Speaker 1:Incredible. Now can I ask: how much of this was net-new functionality, and how much was improving what existed, maybe upgrading quality or improving reliability? In other words, the functionality was similar, maybe even the same, but it was an upgrade.
Speaker 2:Yeah, so what stayed exactly the same is the way in which the Parliament sends metadata information to us. They have their own content management system that plans all these sessions, and we get JSON feeds that transport this information: which session is taking place in which room, how long it will roughly take, which can change over time, of course, because things can run longer. That has all stayed the same. And we were also asked to take over all the legacy data that they had created over the last 10 years, so that they can still present it. What was completely new is the way in which we built the whole system for getting the information, transcoding it, and sending it. We were responsible for bringing all the encoders, all the SDI equipment, all the production switches. Here are some actual pictures of it.
Speaker 1:We brought some Blackmagic video hubs here, a lot of them.
Speaker 2:We have a lot of SDI equipment here, and we're using some nice DeltaCast cards that have a high density. On the right side you see the Supermicro servers that are running the NETINT cards. This has all been brand new, and the important part that was not there before was the high degree of automation, the security level, the flexibility, and the scalability. Being able to really target these 200 concurrent social media streams is something that was difficult before.
Speaker 1:You have enough capacity to do that now, so it's more about automation and reliability. And was there also the benefit of improved resolution or video quality, or was that in a pretty good state previously?
Speaker 2:Yeah, so I think we did improve the video quality. However, that was not the main target in the first phase, which we are still in, because you have to remember that, in the end, it's people talking, talking heads.
Speaker 2:So yeah, it's more about the content that is being generated, not so much the video quality. That was not the main focus, but it is and will be over time, because we are in discussions with them to introduce higher compression or lower the bit rate, so that people can view the same quality even over worse network conditions. So there are going to be optimizations. But the focus was not on video quality; the requirement was that it must not be worse than it was.
Speaker 1:Yeah, it obviously can be yeah.
Speaker 2:Yeah, but that was good enough. The challenge is more about preparing these 32 languages and providing a user interface that doesn't overwhelm you. Those were some of the areas we had to look into. And security is a huge topic as well, so it's not just a video transcoding setup that we do, and that's also part of what G&L can provide. We are ISO 27001 certified, and we know very well the security setup that we have and the security products that Akamai, our main partner for content delivery, has. So we have a lot of services around zero trust and security, so that we can really be as safe as you can probably be, so that this is a secure service that is really reliable and where we can detect if we are in any way attacked by a bad actor. And I think so far it looks very promising.
Speaker 1:Excellent.
Speaker 2:The same is true for analytics. We gather a lot of information here. It's not like this is the new Netflix; it's not the numbers that you have with the huge OTT platforms. It's a selected audience that watches this content, but still it's very important to see which parts are interesting. So the need for analytics, for understanding what's going wrong and what's going well, is very high at the European Parliament as well. We gather all this information, and we have a lot of dashboards, alerts, and aggregations. That's very important to the Parliament, and it's also part of the expertise that we bring to the table.
Speaker 1:Yeah. Now, is this your own data solution, or are you integrating a commercial product? Can you comment on that?
Speaker 2:Yeah, of course I can. So how we do this is: on the video player side in this project we're using Bitmovin and Bitmovin Analytics. Bitmovin Analytics then feeds its data into our own hosted Hydrolix database. So we are working with Hydrolix for the database; it's a self-hosted Hydrolix version, so we are currently not using Akamai's TrafficPeak, but we are running our Hydrolix on Linode, on the Akamai Connected Cloud. And then, as a partner of Grafana, we are using Grafana Enterprise for the visualization. So it's a combination of partner products, but with a very specific G&L twist, and we do all the integration and combination.
Speaker 1:That's great. That's great.
Speaker 2:Yeah, and that brings me over to how we bridge between on-prem and cloud, so I can also talk a bit about that if you'd like. Yes, please. Yeah. So you've seen that we already talked about using on-prem encoders with NETINT cards and that we have cloud interfaces for the whole management plane.
Speaker 2:But the interesting part is that our partner Akamai is now also providing virtual machines with VPUs, with NETINT cards in them, and that allows us to bring this to a whole new level. What we are currently working with is this: on the lower left side, we have our single-server clusters, on-prem servers with Ampere CPUs and NETINT cards, and we use Kubernetes for everything, to run our applications on top. In the past we just had the Akamai Connected Cloud management, on the top of this slide here, communicating with our own on-prem servers. But now we can also use the Akamai cloud workers, and that means we can use the same application that does the heavy lifting for audio and video transcoding and for managing all the audio channels and the subtitling and everything else, and we can run it on-prem or on a cloud service. That gives us a whole new level of options.
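The "same application, two substrates" idea boils down to a placement decision: given a job's characteristics, route it to the on-prem Kubernetes workers or to a cloud VPU instance. The pool names and rules below are hypothetical illustrations of that reasoning, not G&L's actual scheduler:

```python
# Toy placement rule for a hybrid on-prem/cloud transcoding fleet.
# Pool names, region labels, and rules are hypothetical illustrations.
def place_job(ingest_is_sdi: bool, viewer_region: str, onprem_regions=("eu",)):
    """Return the worker pool a transcode job should run on."""
    if ingest_is_sdi:
        # A physical SDI/2110 signal has to be captured where the cable is.
        return "onprem-avpu"
    if viewer_region not in onprem_regions:
        # Outside the home footprint, spin up cloud VPU instances instead.
        return "cloud-vpu"
    return "onprem-avpu"

print(place_job(True, "us"))   # onprem-avpu
print(place_job(False, "us"))  # cloud-vpu
print(place_job(False, "eu"))  # onprem-avpu
```

Because the application itself is identical on both substrates, only this routing layer needs to know where a job lands.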
Speaker 1:I'm curious about this. In your typical project, and especially with something like the European Parliament, there are some very good reasons, security for example, why there's probably on-premises equipment; you need to have a box sitting in a building somewhere. But for your typical projects, let's say media and entertainment, media companies, et cetera, does the Akamai Connected Cloud open up the ability to effectively eliminate on-premises equipment and run it all in the cloud? What are you seeing in terms of the architectures this is going to open up for you?
Speaker 2:Yeah. So there are some use cases where we just get a high-quality, production-ready broadcast signal via SDI or 2110, and it often doesn't really make sense to push that through the internet to some cloud instance. You need a box sitting in a building or in a truck or a production center, especially for those 24/7 operations where it's not only a truck; it's something that's permanently there.
Speaker 1:Yeah, that's right.
Speaker 2:And if you need that box, then this box can also do some other things. Why would you under-equip it? If it's already there and you have to pay for it, then pay a little bit more, and it can do things that you would otherwise do only in the cloud. So I think there are a lot of use cases, especially around the source signals, that will stick to on-prem for a very long time.
Speaker 2:However, there are others where it absolutely makes sense to introduce a cloud and for us it's more like we love to have the whole management system where the user logs in, where the databases are running, to have that in a public cloud service.
Speaker 2:We are mainly working with the Akamai Connected Cloud, but not exclusively, and we usually put an Akamai web application firewall and DDoS product in front of it, so that nobody connects directly to our services; everything goes through Akamai. Then we have that cluster communicating with the actual workers that do the transcoding and the media processing, and we usually use the on-prem workers whenever we have signals that are too big to transport to the cloud, or when it's just a commercial question. Because if you look at the investment, and at what cloud services can cost and how easily they can scale your budget up beyond what you have planned for, we and our customers love to have some fixed-cost baseline infrastructure that you know: okay, this is my baseline for this year, and it covers a huge percentage of what we do. And you can have the cloud for everything that is peaky, where you need it just for a weekend or a big sports event, or where you need some overload capacity, or where you have some specific use cases that you don't cover with your fixed infrastructure. Our infrastructure is based in Düsseldorf and Frankfurt. If we have customers who want to use that in the US or somewhere else, we wouldn't be able to use that local German on-prem infrastructure.
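The economics described here, a fixed baseline plus cloud burst for peaks, can be illustrated with a toy cost model. All prices and capacities below are made-up placeholders, purely to show the shape of the reasoning:

```python
# Toy cost model: fixed on-prem baseline plus cloud burst for peaks.
# All figures are hypothetical placeholders, not real pricing.
BASELINE_CAPACITY = 30          # channels the fixed infrastructure covers
BASELINE_COST = 10_000.0        # fixed cost per month
CLOUD_COST_PER_CHANNEL = 500.0  # per burst channel per month

def monthly_cost(peak_channels: int) -> float:
    """Fixed baseline plus cloud overflow for anything beyond it."""
    overflow = max(0, peak_channels - BASELINE_CAPACITY)
    return BASELINE_COST + overflow * CLOUD_COST_PER_CHANNEL

print(monthly_cost(25))  # 10000.0 (fully covered by the baseline)
print(monthly_cost(40))  # 15000.0 (10 burst channels in the cloud)
```

The appeal of the hybrid model is exactly this shape: the predictable part of the workload rides on a known fixed cost, while only the peaks incur variable cloud spend.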
Speaker 1:That's right.
Speaker 2:And it's a no-brainer to just use Akamai now.
Speaker 1:Yeah, amazing. Thank you for explaining that, because we're really excited about our partnership with Akamai, for obvious reasons. It started over mutual customers, including G&L being a very key one, and at some point you kind of say, wait a second: if this customer is using NETINT and they're using Akamai, why don't we come together and make it easier for them? But what's really interesting is this whole concept of flexing.
Speaker 1:The hybrid cloud is not a new concept at all. It's been talked about for years, and vendors are always touting their hybrid solutions. For the most part that's been real, in various forms, some of which probably work better than others. But this hybrid breaks down when you want to use, call it, hardware acceleration, and by the way, I would even include GPUs in that. Anytime you've got specialized hardware in your solution, the cloud providers say GPUs are, quote, everywhere, right? But it's not true that they are everywhere. They're not in every machine, and more importantly, the GPU that maybe I built my system around might only be in a couple of regions, even in a gigantic network like AWS. It's possible, right? And so right there I've got trade-offs, and I go, okay, what do I do? It's true, I can mostly flex into the cloud, but there were these gaps.
Speaker 1:Now, with Akamai and the Connected Cloud adopting VPUs, that changes. Up until this point, we have been really exclusively installed in private data centers, private clouds, or on-premises, because that's how our customers used it and, frankly, that's what was available. You had to kind of run it yourself if you were going to use a VPU. Now, though, they're rolling this out, so it's not available on day one, but across the Akamai network you're going to be able to access VPUs. And exactly to your point: you have your own data centers, and I know a lot of your business is concentrated in Germany and Western Europe, so maybe you can serve a lot of that business from there. But what do you do in the US? What do you do in Southeast Asia? What do you do in Latin America?
Speaker 2:Exactly.
Speaker 1:Or even other parts of Europe where maybe there would be good reasons, you know.
Speaker 2:So yeah, and so we were very happy when we could test the VPUs in Akamai's virtual machines.
Speaker 1:I know you were. I remember the smile on your face when you learned that they were going to be available.
Speaker 2:Our tests have already run on the US West Coast, in Germany, and in Asia, so it's good to have that power distributed over at least some of the Akamai locations. And we also see this with the GPU business. We can use GPUs; we use them for some use cases, for Whisper speech-to-text and translation.
Speaker 2:Yeah, that's right, and we can use them also for video encoding, because the SDK we are using, the Norsk SDK from id3as, can also support GPUs. But what we hear and see in the market is that, as GPUs can also be used for all things AI, they are in high demand currently.
Speaker 2:And so we hear that the probability of a certain NVIDIA instance not being available is much higher than for a normal instance without a GPU. So if there is a high season for live streaming, the risk is higher if you depend on GPUs that AWS just doesn't have available in the region where you need them.
Speaker 1:Yeah, exactly. And that's a very good point. I'm not even trying to make the case GPU versus VPU, but you raise a very good point: even if a GPU is physically available, it's very likely not available because it's already booked, doing a whole bunch of model training for some large new foundation model, or inferencing, or whatever it might be used for. So yeah, cool.
Speaker 2:And that also brings me to our role in the whole NETINT partner universe. Especially at NAB, we will be at your booth, so we're really looking forward to that.
Speaker 1:We're excited to have you.
Speaker 2:Yeah, and we know that there will be other partners, and we are not competing with them; we are cooperating with them. So it's a very good ecosystem that we are in, and we are really proud to be part of it, and to have been part of it very early on. So thanks for the opportunity. Absolutely.
Speaker 2:This is just, again, where we'll be: booth W3531. So everyone who is at NAB should definitely stop by 3531.
Speaker 1:That's the booth number.
Speaker 2:That's the booth to be at. So the NETINT partners that we work most with are Skelstream, Ampere, Norsk, and Akamai. For us, how this works is that we got those as partners who are providing necessary pieces of technology: infrastructure, or CPUs, or the actual cards that can be used for ASIC-accelerated transcoding. But there are some things lacking that our customers usually need, and that starts with: we unify everything with Kubernetes, and we can switch between on-prem and multi-cloud. As we already discussed, we mainly use Akamai, but we can use other CDNs and other cloud providers as well, and have done so in the past. And then we add things that a lot of the enterprise customers we work with need. They need single sign-on, they need ISO 27001 IT security, they want the systems that we build to be pen-tested and a really safe space for their application and their data.
Speaker 2:We support role-based access control. So if we create an application, it's not just one login with which you can start and stop a stream or something like that. We have different roles for editors, different roles for administrators, and we have the social media operators, so we can open up functionality very granularly, and we have an audit log so that you can see what's going on. So there's a lot of security around that, and integration into your usual enterprise environment, so that in the end we have a scalable, enterprise-ready solution that integrates all these partners. Because what we see is that customers need results, not just raw tech. That's the main message behind it. We integrate all this stuff and turn it into something that, once we hand it over to you and you've accepted and tested it, you can just start working with. It's a complete solution.
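The role-based access control and audit logging described here can be sketched in a few lines. The role names, permissions, and log format below are hypothetical illustrations in the spirit of the roles mentioned, not the actual G&L implementation:

```python
# Minimal sketch of role-based access control with an audit log, loosely
# modeled on the roles described above. Names and permissions are hypothetical.
PERMISSIONS = {
    "administrator": {"manage_users", "start_stream", "stop_stream", "clip", "publish_social"},
    "editor": {"clip", "download"},
    "social_media_operator": {"clip", "publish_social"},
}

audit_log = []

def authorize(user: str, role: str, action: str) -> bool:
    """Check whether `role` permits `action`, and record the attempt either way."""
    allowed = action in PERMISSIONS.get(role, set())
    audit_log.append({"user": user, "role": role, "action": action, "allowed": allowed})
    return allowed

print(authorize("anna", "social_media_operator", "publish_social"))  # True
print(authorize("ben", "editor", "publish_social"))                  # False
print(len(audit_log))                                                # 2
```

Note that denied attempts are logged too; an audit trail that only records successes cannot answer the accountability questions a parliament would ask.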
Speaker 1:Amazing, amazing. It's a great presentation, and I also want to point out to the listeners that if you're at NAB, make sure you sign up for the Streaming Summit, because Alexander, along with Shawn Michaels from Akamai and me from NETINT, will be speaking. I think, Alex, you're going to be covering a version of this, then Akamai is going to be contributing, and then we're going to have a very nice moderated panel discussion, and we'll be taking questions from the audience. I'm really interested in what G&L has built here, and I'm going to use the words true hybrid solution, where you can truly flex from on-premises hardware into the Akamai Connected Cloud, anywhere in the world. It's just a really, really flexible architecture.
Speaker 2:Absolutely. And we intend to bring a lot more hard facts and data to NAB, so that we can share some load figures and some compression settings. So what are the real savings?
Speaker 1:And energy consumption, because you didn't even touch on that here; we just had a short time, and I even told you, I know there's so much we could talk about, but just do the super-high-level flyover. But there's the energy consumption story and the efficiency there. First of all, Europe, I think it's fair to say, doesn't have enough power, although I would say that's true in the US and in almost any part of the world right now. Power is gold. We can't be building these amazing video production, streaming, and distribution workflows and then just expect that power is unlimited. That's not the case.
Speaker 2:We need maybe 10 more years until nuclear fusion is really a thing, and then we can have a different discussion at that point, yeah.
Speaker 1:That's right, yeah, yeah. Well, it is interesting how, once ChatGPT launched and all of a sudden I started hearing more about nuclear power plants and debates around energy, you know how we're going to produce energy, Absolutely yeah well, that's good.
Speaker 1:Well, Alexander, thank you so much for joining us on Voices of Video. We're just, I guess, about 30 days away from the start of the show. It's going to be a tremendous NAB, a lot of enthusiasm. We're feeling it just with the meeting requests and with customers telling us they're coming to really talk about real stuff, make decisions, and start moving. So there are great things happening.
Speaker 2:I'm really looking forward to it. I can't wait, exactly.
Speaker 1:Yeah, it's going to be, and it's fun too.
Speaker 2:It's Las Vegas, you know. It is, yeah, very different from here in Germany.
Speaker 1:Yeah, all right, well, good. Well, thank you again for joining us, and to the listeners, thank you, as always, for supporting Voices of Video. If you're going to be at NAB, make sure you come by the NETINT booth, make sure you talk to Alexander and the G&L team, and we hope to see you soon. All right, have a great day, everyone. This episode of Voices of Video is brought to you by NETINT Technologies. If you are looking for cutting-edge video encoding solutions, check out NETINT's products at netint.com.