His comment immediately after describes exactly what happened:
> Even before it has ceased to exists, the MPEG engine had run out of steam – technology- and business wise. The same obscure forces that have hijacked MPEG had kept it hostage to their interests impeding its technical development and keeping it locked to outmoded Intellectual Property licensing models delaying market adoption of MPEG standards. Industry has been strangled and consumers have been deprived of the benefits of new technologies. From facilitators of new opportunities and experiences, MPEG standards have morphed from into roadblocks.
Big companies abused the setup that he was responsible for. Gentlemen's agreements to work together for the benefit of all got gamed into patent landmines and it happened under his watch.
Even many of the big corps involved called out the bullshit, notably Steve Jobs refusing to release a new QuickTime till they fixed some of the most egregious parts of AAC licensing way back in 2002.
From a ZiffDavis article:
> QuickTime 6 media player and QuickTime Broadcaster, a free application that aims to simplify using MPEG-4 in live video feeds over the Net.
At the time, the mocking was well deserved. I remember downloading trailers for movies over my dial-up connection. Took the entire night for 3 minutes of video. Can’t imagine paying $5k for that privilege.
Today though, the mocking doesn’t make sense and is confusing. I haven’t ever owned a TV.
By 99 it wasn't that bad. I remember screaming along with V.92 56k modems. Futurama episodes were about 50mb encoded as RealVideo and took a mere two and a half hours to download o.0
(and it really was v.92; I still have the double-bong towards the end of the handshake emblazoned in my memory)
Well back then there was a huge difference in the Internet experience between people at universities and other places with T1s and other fast connections, and everyone else on dial-up. There was a lot of full-length video downloading at universities by 2000.
But even on dial-up I seem to remember realplayer and other UDP dumps being pretty popular around this time.
Picking 300MB as a ridiculous amount of data to download dates that nicely without needing to look at the article header.
Though using the codecs and hardware of that time I doubt the quality at even that size would be great. Compare an old 349MB (sized to fit two on a CD-R/-RW, likely 480p though smaller wasn't uncommon) cap of a Stargate episode picked up in the early/mid 2000s to a similarly sized file compressed using h265 or even h264 on modern hardware.
I appreciate the usage of SG-1 as an example, as I definitely still have several seasons of SG-1 episodes of that size floating around old hard drives somewhere. XVID, of course.
I remember when YouTube first appeared and my thought was "This is a really nice service. It's going to be a shame in a couple of years when it runs out of VC money and shuts down."
I also remember when they went through and re-encoded all of the videos so they could play on the original model iPhone.
As someone who hasn't had any exposure to the human stories behind mpeg before, it feels to me like it's been a force for evil since long before 2020. Patents on h264, h265, and even mp3 have been holding the industry back for decades. Imagine what we might have if their iron grip on codecs was broken.
Possibly, nothing. Codec development is slow and expensive. Free codecs only came along at all because Google decided to subsidize development but that became possible only 15 years or so after MPEG was born, and it's hardly a robust strategy. Plus free codecs were often built by acquiring companies that had previously been using IP licensing as a business model rather than from-scratch development.
I avoided a career in codecs after spending about a year in college learning about them. The patent minefield meant I couldn't meaningfully build incremental improvements on what existed, and the idea of diligently dancing around existing patents and then releasing something which intentionally lacked state-of-the-art ideas wasn't compelling.
Codec development is slow and expensive because you can't just release a new codec; you have to dance around patents.
Well, a career in codec development means you'd have done it as a job, and so you'd have been angling for a job at the kind of places that enter into the patent pools and contribute to the standards.
I don’t know about you, but I became a software engineer to write code for myself and my own interests, not to get a job where all of my labor will be vacuumed up and exploited to maximize anonymous shareholder value.
That's all great and noble, but at the end of the day it's about who has the resources. If you can get the necessary resources yourself and have complete control over their allocation, congratulations you won the jackpot of life. Plenty of people, some of whom are smarter and better than you, tried to do the same and failed due to reasons beyond their control. Try to remain a good person and not waste the opportunity if you ever get to that stage.
For the remaining 99.99% of us, we have to negotiate for resources as best we can. That typically means maximizing shareholder value in exchange for a cut of the profits. Not all your labor needs to be vacuumed up; I make enough to support my family, live a relatively safe and comfortable life with some minor luxuries and likely a secure retirement. Better deal than most people get today.
Why are you arguing so hard against someone that simply stated “I was interested in pursuing this topic as a career when I was in college but then I learned more about the field and decided to pursue something else”?
Different posters, Taek made the post you're referring to, I'm responding to voakbasda.
Regardless, why are you white-knighting for him? He made a moral argument about career choice, and I responded to said argument as someone who took the other side. This is a discussion board, we discuss things.
More codec development work is done outside of patent-centric organizations by a significant margin. Just as with technological/communication standards in any other domain, the most significant impetus comes from the drive to make superior products.
Work inside patent-driven development groups also suffers substantial complexity bloat because there is a huge incentive for each participant to get a patentable component into the standard in order to benefit from cross-licensing. Often these 'improvements' are insignificant or even a net loss (the bitstream cost of signaling them is greater than their improvement over any credible collection of material).
Software patents aren't an issue in much of the world; the reason I thought there wasn't much of a career in codec development was that it was obvious that it needed to move down into custom ASICs to be power-efficient, at which point you can no longer develop new ones until people replace all their hardware.
By the time software is robust enough to make it worthwhile to be placed into hardware, it's pretty damn efficient. For something like ASICs, you could at least upgrade the firmware with new code, but what about Apple's chips that do the decoding? Can they be upgraded, or does that mean needing to wait for the M++ chip?
Sometimes there are hybrid coders that can use some of the resources on the chip and some shader code to handle new codecs or codec features after the fact, but you pay a power and performance penalty to use these.
Software patents aren't an issue in most of the world. Codecs however are used all over the world. No one is going to use a codec that is illegal to use in the US and EU.
IP law, especially defence against submarine patents, makes codec development expensive.
In the early days of MPEG codec development was difficult, because most computers weren't capable of encoding video, and the field was in its infancy.
However, by the end of '00s computers were fast enough for anybody to do video encoding R&D, and there was a ton of research to build upon. At that point MPEG's role changed from being a pioneer in the field to being an incumbent with a patent minefield, stopping others from moving the field forward.
That's unnecessarily harsh. Patent pools exist to promote collaboration in a world with aggressive IP legislation, they are an answer to a specific environment and they incentivize participants to share their IP at a reasonable price to third parties. The incentive being that you will be left out of the pool, the other members will work around your patents while not licensing their own patents to you, so your own IP is now worthless since you can't work around theirs.
As long as IP law continues in the same form, the alternative to that is completely closed agreements among major companies that will push their own proprietary formats and aggressively enforce their patents.
The fair world where everyone is free to create a new thing, improve upon the frontier codecs, and get a fair reward for their efforts, is simply a fantasy without patent law reform. In the current geopolitical climate, it's very very unlikely for nations where these developments traditionally happened, such as US and western Europe, to weaken their IP laws.
>> That's unnecessarily harsh. Patent pools exist to promote collaboration in a world with aggressive IP legislation, they are an answer to a specific environment and they incentivize participants to share their IP at a reasonable price to third parties.
You can say that, but this discussion is in response to the guy who started MPEG and later shut it down. I don't think he'd say it's harsh.
They actually messed up the basic concept of a patent pool, and that is the key to their failure.
They didn't get people to agree on terms up front; they made the final codec with interlocking patents embedded from hundreds of parties, made no attempt to avoid random outsiders' patents, and then, once it was done, tried to come to a licence agreement when every minor patent holder had an effective veto on the resulting pool. That's how you end up with multiple pools plus people who own patents and aren't members of any of the pools. It's ridiculous.
My minor conspiracy theory is that if you did it right, then you'd basically end up with something close to open source codecs as that's the best overall outcome.
Everyone benefits from only putting in freely available ideas. So if you want to gouge people with your patents you need to mess this up and "accidentally" create a patent mess.
IP law and the need for extremely smart people with a rare set of narrow skills. It's not like codec development magically happens for free if you ignore patents.
The point is, if there had been no incentives to develop codecs, there would have been no MPEG. Other people would have stepped into the void and sometimes did, e.g. RealVideo, but without legal IP protection the codecs would just have been entirely undocumented and heavily obfuscated, and would have moved to tamper-proofed ASICs much sooner.
You continue to make the same unsubstantiated claims about codecs being hard and expensive. These same tropes were said about every other field, and even if true, we have tens of thousands of folks that would like to participate, but are locked out due to broken IP law.
The firewall of patents exists precisely because digital video is a way to shake down the route media would have to travel to get to the end user.
Codecs are not "harder than" compilers, yet the field of compilers was blown completely open by GCC. Capital didn't see the market opportunity because there wasn't the same possibility of being a gatekeeper for so much attention and money.
The patents aren't because it is difficult, the patents are there because they can extract money from the revenue streams.
Codecs not harder than compilers? Sounds like an unsubstantiated claim!
Modern video codecs are harder than compilers. You have to have good ASIC development expertise to do them right, for example, which you don't need for compilers. It's totally feasible for a single company to develop a leading edge compiler whereas you don't see that in video codecs, historically they've been collaborations.
(I've worked on both codecs and compilers. You may be underestimating the difficulty of implementing sound optimizers).
Hardware vendors don't benefit from the patent pools. They usually get nothing from them, and are burdened by having to pass per-unit licensing costs on to their customers.
It's true that designing an ASIC-friendly codec needs special considerations, and benefits from close collaboration with hardware vendors, but it's not magic. The general constraints are well-known to codec designers (in open-source too). The commercial incentives for collaboration are already there — HW vendors will profit from selling the chipsets or licensing the HW design.
The patent situation is completely broken. The commercial codecs "invent" coding features of dubious utility, mostly unnecessary tweaks on old stuff, because everyone wants to have their patent in the pool. It ends up being a political game, because the engineering goal is to make the simplest most effective codec, but the financial incentive is to approve everyone's patented add-ons regardless of whether they're worth the complexity or not.
Meanwhile everything that isn't explicitly covered by a patent needs to be proven to be 20 years old, and this limits MPEG too. Otherwise nobody can prove that there won't be any submarine patent that could be used to set up a competing patent pool and extort MPEG's customers.
So our latest-and-greatest codecs are built on 20-year-old ideas, with or without some bells and whistles added. The ASICs often don't use the bells and whistles anyway, because the extra coding features may not even be suitable for ASICs, and usually have diminishing returns (like 3x slower encode for 1% better quality/filesize ratio).
With all due respect, to say that codecs are more difficult to get right than optimizing compilers is absurd.
The only reason I can think of why you would say this is that nowadays we have good compiler infrastructure that works with many hardware architectures and it has become easy to create or modify compilers. But that's only due to the fact that it was so insanely complicated that it had to be redone from scratch to become generalizable, which led to LLVM and the subsequent direct and indirect benefits everywhere. That's the work of thousands of the smartest people over 30 years.
There is no way that a single company could develop a state of the art compiler without using an existing one. Intel had a good independent compiler and gave up because open source had become superior.
For what it's worth, look at the state of FPGA compilers. They are so difficult that every single one of them that exists is utter shit. I wish it were different.
> There is no way that a single company could develop a state of the art compiler without using an existing one. Intel had a good independent compiler and gave up because open source had become superior.
Not only can they do it but some companies have done it several times. Look at Oracle: there's HotSpot's C2 compiler, and the Graal compiler. Both state of the art, both developed by one company.
Not unique. Microsoft and Apple have built many compilers alone over their lifespan.
This whole thing is insanely subjective, but that's why I'm making fun of the "unsubstantiated claim" bit. How exactly are you meant to objectively compare this?
Software wasn't always covered by copyright, and people wrote it all the same. In fact they even sold it, just built-to-order as opposed to any kind of retail mass market. (Technically, there was no mass market for computers back then so that goes without saying.)
That argument seems to have been proven basically correct, given that a ton of open source development happens only because companies with deep pockets pay for the developers' time. Which makes perfect sense - no matter how altruistic a person is, they have to pay rent and buy food just like everyone else, and a lot of people aren't going to have time/energy to develop software for free after they get home from their 9-5.
Without IP protections that allow copyleft to exist arguably there would be no FOSS. When anything you publish can be leveraged and expropriated by Microsoft et al. without them being obligated to contribute back or even credit you, you are just an unpaid ghost engineer for big tech.
This is still the argument for software copyright. And I think it's still a pretty persuasive argument, despite the success of FLOSS. To this day, there is very little successful consumer software. Outside of browsers, Ubuntu, Libre Office, and GIMP are more or less it, at least outside certain niches. And even they are pretty tiny compared to Windows/MacOS/iOS/Android, Office/Google Docs, or Photoshop.
The browsers are an interesting case. Neither Chrome nor Edge are really open source, despite Chromium being so, and they are both funded by advertising and marketing money from huge corporations. Safari is of course closed source. And Firefox is an increasingly tiny runner-up. So I don't know if I'd really count Chromium as a FLOSS success story.
Overall, I don't think FLOSS has had the kind of effect that many activists were going for. What has generally happened is that companies building software have realized that there is a lot of value to be found in treating FLOSS software as a kind of barter agreement between companies, where maybe Microsoft helps improve Linux for the benefit of all, but in turn it gets to use, say, Google's efforts on Chromium, and so on. The fact that other companies then get to mooch off of these big collaborations doesn't really matter compared to getting rid of the hassle of actually setting up explicit agreements with so many others.
That's great, but it's not what FLOSS activists hoped and fought for.
It's still almost impossible to have a digital life that doesn't involve significant use of proprietary software, and the vast majority of users do their computing almost exclusively through proprietary software. The fact that this proprietary software is a bit of glue on top of a bunch of FLOSS libraries possibly running on a FLOSS kernel that uses FLOSS libraries to talk to a FLOSS router doesn't really buy much actual freedom for the end users. They're still locked in to the proprietary software vendors just as much as they were in the 90s (perhaps paying with their private data instead of actual money).
If you ignore the proprietary routers, the proprietary search engines, the proprietary browsers that people use out-of-the-box (Edge, Safari and even Chrome), and the fact that Linux is a clone of a proprietary OS.
>> That sounds like the 90s argument against FLOSS
> This is still the argument for software copyright.
And open source licensing is based on and relies on copyright. Patents and copyright are different kinds of intellectual property protection and incentivize different things. Copyright in some sense encourages participation and collaboration because you retain ownership of your code. The way patents are used discourages participation and collaboration.
On my new phone I made sure to install F-Droid first thing, and it's surprising how many basic functions are covered by free software if you just bother to look.
I disagree. Video is such a large percentage of internet traffic and licensing fees are so high that it becomes possible for any number of companies to subsidize the development cost of a new codec on their own and still net a profit. Google certainly spends the most money, but they were hardly the only ones involved in AV1. At Mozilla we developed Daala from scratch and had reached performance competitive with H.265 when we stopped to contribute the technology to the AV1 process, and our team's entire budget was a fraction of what the annual licensing fees for H.264 would have been. Cisco developed Thor on their own with just a handful of people and contributed that, as well. Many other companies contributed technology on a royalty-free basis. Outside of AV1, you regularly see things like Samsung's EVC (or LC-EVC, or APV, or...), or the AVS series from the Chinese.... If the patent situation were more tenable, you would see a lot more of these.
The cost of developing the technology is not the limitation. I would argue the cost to get all parties to agree on a common standard and the cost to deploy it widely enough for people to rely on it is much higher, but people manage that on a royalty-free basis for many other standards.
Daala was never meant to be widely adopted in its original form — its complexity alone made that unlikely. There’s a reason why all widely deployed codecs end up using similar coding tools and partitioning schemes: they’re proven, practical, and compatible with real-world hardware.
As for H.265, it’s the result of countless engineering trade-offs. I’m sure if you cherry-picked all the most experimental ideas proposed during its development, you could create a codec that far outperforms H.265 on paper. But that kind of design would never be viable in a real-world product — it wouldn’t meet the constraints of hardware, licensing, or industry adoption.
Now the following is a more general comment, not directed at you.
There’s often a dismissive attitude toward the work done in the H.26x space. You can sometimes see this even in technical meetings when someone proposes a novel but impractical idea and gets frustrated when others don’t immediately embrace it. But there’s a good reason for the conservative approach: codecs aren’t just judged by their theoretical performance; they have to be implementable, efficient, and compatible with real-world constraints. They also have to somehow make financial sense and cannot be given away without some form of compensation.
Mozilla is just Google from a financial perspective, it's not an independent org, so the financing point stands.
H.264 was something like >90% of all video a few years ago and wasn't it free for streaming if the end user wasn't paying? IIRC someone also paid the fees for an open source version. There were pretty good licensing terms available and all the big players have used it extensively.
Anyway, my point was only that expecting Google to develop every piece of tech in the world and give it all away for free isn't a general model for tech development, whereas IP rights and patent pools are. The free ride ends the moment Google decide they need more profit, feel threatened in some way or get broken up by the government.
Part of the reason h.264 was such a big percentage of video was that they so badly messed up the licencing of the follow-up that was supposed to supplant it.
Not that the licencing of h.264 wasn't a mess too. You suggest it was free for web use but they originally only promised not to charge for free streaming up until 2015 and reserved the right to do so once it was embedded in the web. Pressure from Google/Xiph/etc's WebM project forced them to promise not to enforce it after that point either.
Cisco paid for a binary version of a decoder that could be downloaded by Firefox as a plugin. They could only do so because of a loophole around a cap in fees that they were already hitting so it wouldn't cost them more to supply to every Firefox user.
> Free codecs only came along … and it's hardly a robust strategy
Maybe you don’t remember the way that the gif format (there was no jpeg, png, or webp initially) had problems with licensing, and then years later having scares about it potentially becoming illegal to use gifs. Here’s a mention of some of the problems with Unisys, though I didn’t find info about these scares on Wikipedia’s GIF or Compuserve pages:
Similarly, the awful history of digital content restriction technology in general (DRM, etc.). I’m not against companies trying to protect assets, but data assets have always been inherently prone to “use”, whether or not that use was intended by the one that provided the data. The problem has always been about the means of dissemination, not that the data itself needed to be encoded with a lock that anyone with the key (or the means to get or make one) could unlock, nor that it should need to call home, effectively preventing the user from legitimately being able to use the data.
"Free codecs only came along at all because Google decided to subsidize development but that became possible only 15 years or so after MPEG was born, and it's hardly a robust strategy"
I don't know about video codecs but MP3 (also part of MPEG) came out of Fraunhofer and was paid for by German tax money. It should not have been patented in the first place (and wasn't in Germany).
Free codecs without patent issues were limited to things like Vorbis which never got wide support. There were FOSS codecs for patented algorithms, but those had legal issues in places that enforce software patents.
Pre-Spotify, MP3 players would usually only ship with MP3 support (thus the name), so people would only rip to MP3. Ask any millennial and most of them will never have heard of Ogg.
Pre-Spotify (and pre-iPod) there were plenty of cheap MP3 players that also supported Ogg Vorbis. I owned one, for example. Obviously MP3 was THE standard, but Vorbis reached a good adoption HW wise (basically because it was free as in beer to implement)
Have a look at audio hardware from 10-15 years ago (so long after the mp3 player wave ended in first world countries): basically everything that plays mp3 plays ogg vorbis as well.
There are a lot more audio codecs embedded in other things than there ever were personal music players, by orders of magnitude. Vorbis was ubiquitous in video games, for example.
It’s a bit like developing an F1 car. Or a cutting edge airplane. Lots of small optimizations that have to work together. Sometimes big new ideas emerge but those are rare.
Until the new codec comes together all those small optimizations aren’t really worth much, so it’s a long term research project with potentially zero return on investment.
And yes, most of the small optimizations are patented, something that I’ve come to understand isn’t viewed very favorably by most.
>> And yes, most of the small optimizations are patented, something that I’ve come to understand isn’t viewed very favorably by most.
Codecs are like infrastructure not products. From cameras to servers to iPhones, they all have to use the same codecs to interoperate. If someone comes along with a small optimization it's hard enough to deploy that across the industry. If it's patented you've got another obstacle: nobody wants to pay the incremental cost for a small improvement (it's not even incremental cost once you've got free codecs, it's a complete hassle).
They're hardware accelerated so it's not worth making a new codec until you have a big improvement over the prior baseline, because it takes a long time to manufacture and roll out devices that are better. Verifying an optimization is worth it requires testing against a big library of videos using standardized perception metrics, it requires ensuring there's an efficient way to decode it in both hardware and software, including efficient encoding. It's easy to improve one kind of input but regress another. Most of the low hanging fruit is taken already. Just the usual stuff that makes advancing the frontier hard.
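As a rough illustration of that evaluation step, here is a toy sketch (my own, with made-up synthetic "clips", not anyone's actual test harness) of scoring a codec change across a small library with a standard perception metric. PSNR is used as the simplest stand-in; real evaluations use SSIM/VMAF, many content categories, and full rate-distortion curves.

```python
# Toy evaluation loop: score decoded frames against references with PSNR.
# The "library" here is synthetic; a real test set spans many content types.
import numpy as np

def psnr(reference: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
library = [rng.integers(0, 256, size=(144, 176), dtype=np.uint8) for _ in range(5)]
# Stand-in "decoded" output: the reference plus a little coding noise.
decoded = [np.clip(f.astype(np.int16) + rng.integers(-3, 4, f.shape), 0, 255).astype(np.uint8)
           for f in library]

scores = [psnr(ref, dec) for ref, dec in zip(library, decoded)]
print("mean PSNR over the library:", sum(scores) / len(scores))
```

A change only "wins" if scores like this improve, at equal or lower bitrate, across the whole library rather than on a few favourable clips, which is part of why advancing the frontier is slow.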
This is the sort of project that should be developed and released via open source from academia.
Audio and video codecs, document formats like PDF, are all foundational to computing and modern life from government to business, so there is a great incentive to make it all open, and free.
But education receives a lot of funding from the government.
I think academia should build open source technology (that people can commercialize on their own with the expertise).
Higher education doesn’t need to have massive endowments of real estate and patent portfolio to further educ… administration salaries and vanity building projects.
Academia can serve the world with technology and educated minds.
Incentives in academia as things are is ... uh. Not so awesome.
My expectation from experience when implementing something from a DSP paper is that the result will be unreproducible without contacting the authors for some undisclosed table of magic constants. After obtaining it, the results may match but only for the test images they reported results on. Results on anything else will be much worse.
Also it's normal for techniques from the literature to have computational/memory bandwidth costs two orders of magnitude greater than justified for even their (usually exaggerated) stated levels of performance.
And then their comparison points are almost always inevitably implemented so naively as to make the comparison useless.
It's always difficult because improvements in this domain (like many other engineering domains) are significantly about tradeoffs ... and tradeoffs are difficult to weigh in a pure research environment without the context of concrete applications. They're also difficult to weigh with implementation cleverness having such a big impact particularly since industry heavily drains academia of naturally skilled software engineers.
And as other comments have pointed out, academia is in some sense among the worst of the patent abusers. They'll often develop technology just far enough to lay patent mines around the field, but not far enough to produce something useful out of it. The risk that you spend the significant effort to turn a concept into something usable only to have some patent holder show up with a decade old patent to shake you down is a big incentive against investment.
This is impossible to know. Not that long ago something like Linux would have sounded like a madman's dream to someone with your perspective. It turns out great innovations happen outside the capitalist for-profit context and denying that is very questionable. If anything, those kinds of setups often hinder innovation. How much better would linux be if it was mired in endless licensing agreements, per monthly rates, had a board full of fortune 500 types, and billed each user a patent fee? Or any form of profit incentive 'business logic'?
If that stuff worked better, linux would have failed entirely, instead near everyone interfaces with a linux machine probably hundreds if not thousands of times a day in some form. Maybe millions if we consider how complex just accessing internet services is and the many servers, routers, mirrors, proxies, etc one encounters in just a trivial app refresh. If not linux, then the open mach/bsd derivatives ios uses.
Then looking even previous to the ascent of linux, we had all manner of free/open stuff informally in the 70s and 80s. Shareware, open culture, etc that led to today where this entire medium only exists because of open standards and open source and volunteering.
Software patents are net loss for society. For profit systems are less efficient than open non-profit systems. No 'middle-man' system is better than a system that goes out of its way to eliminate the middle-man rent-seeker.
CCCP was just a collection of existing codecs, they didn't develop their own. Most of the codecs in CCCP were patented. Using it without licenses was technically patent infringement in most places. It's just that nobody ever cared to enforce it on individual end users.
As one of the people that helped start CCCP and was involved extensively through almost its entire lifespan, I think you misunderstand what it means to be "free" in this case. CCCP was "free as in beer" but not "free as in speech", /many/ of the codecs in CCCP were patent encumbered, but were included because there were open-source implementations of them by authors that didn't care about those patents, and many of the licensing arrangements didn't effectively apply to end-users (either due to language or care to prosecute). CCCP also almost exclusively included /decoders/, but /encoders/ are much more likely to be targeted by licensing authorities.
We started CCCP because at the time, anime fansubs were predominantly traded on P2P filesharing services like Kazaa, Gnutella, eDonkey, Direct Connect, and later Bittorrent. The most popular codec pack at the time was K-Lite / Kazaa Codec Pack, which was a complete and utter mess, and specifically for fansubbing it was hard to get subtitles to work properly unless they were hard embedded. Soft-subbing allowed for improvements, and there were a lot of improvements to subtitling in the fansubbing community over the years. One of the biggest came when the Matroska (MKV) container format came about, which allowed arbitrarily different formats/encodings to share a single media container, and the community shifted almost entirely to ASS formatted subtitles. But because an MKV could contain many different encodings, any given MKV file might or might not play correctly on any given system. CCCP was intended to provide an authoritative, canonical, single-source way to play fansubbed anime correctly on Windows, and we achieved that objective.
But let's be clear, nobody involved was under any illusions that the MPEG-LA or any other license holders of for instance h264 were fans of our community or what we were doing. Anime fansubbing at all came out of piracy of foreign-language media into the English market via the Internet and P2P filesharing. None of us gave a shit, and the use of Soviet imagery in the CCCP was exactly a nod to the somewhat communist ideal that knowledge and access to media should be free, and that patent encumbering codecs and patenting software isn't just stupid, it's morally wrong. I still strongly feel software patents are evil.
Nonetheless, at no point in its life was CCCP fully legal/licensed appropriately for usage, and effectively nobody cared, not even the licensing authorities, because the existence of these things made their licenses for encoders more valuable for companies producing media, as it was easier for actual people to consume.
That article is a scare piece designed to spread fear, uncertainty and doubt, to prop up an industry that has already collapsed because everyone else hated them, and make out that they’re the good guys and you should go back to how things were.
> The catch is that while the AV1 developers offer their patents (assuming they have any) on a royalty-free basis, in return they require users of AV1 to agree to license their own patents royalty-free back to them.
Such a huge catch that the companies that offer you a royalty-free license, only do so on the condition that you're not gonna turn around and abuse your own patents against them!
How exactly is that a bad thing?
How is it different from the (unwritten) social contracts of all humans and even of animals? How is it different from the primal instincts?
Who would develop those codecs? A good video coding engineer costs about 100-300k USD a year. The really good ones even more. You need a lot of them. JVET has an attendance of about 350 such engineers each meeting (four times a year).
Not to mention the computer clusters to run all the coding sims; thousands and thousands of CPUs are needed per research team.
People who are outside the video coding industry do not understand that it is an industry. It’s run by big companies with large R&D budgets. It’s like saying ”where would we be with AI if Google, OpenAI and Nvidia didn’t have an iron grip”.
MPEG and especially JVET are doing just fine. The same companies and engineers who worked on AVC, HEVC and VVC are still there with many new ones especially from Asia.
MPEG was reorganized because this Leonardo guy became an obstacle, and he’s been angry about it ever since. Other than that I’d say business as usual in the video coding realm.
Who would write a web server? Who would write Curl? Who would write a whole operating system to compete with Microsoft when that would take thousands of engineers being paid $100,000s per year? People don't understand that these companies have huge R&D budgets!
(The answer is that most of the work would be done by companies who have an interest in video distribution - eg. Google - but don't profit directly by selling codecs. And universities for the more research side of things. Plus volunteers gluing it all together into the final system.)
Google funding free stuff is not a real social mechanism. It's not something you can point to and say that's how society should work in general.
Our industry has come to take Google's enormous corporate generosity for granted, but there was zero need for it to be as helpful to open computing as it has been. It would have been just as successful with YouTube if Chrome was entirely closed source and they paid for video codec licensing, or if they developed entirely closed codecs just for their own use. In fact nearly all Google's codebase is closed source and it hasn't held them back at all.
Google did give a lot away though, and for that we should be very grateful. They not only released a ton of useful code and algorithms for free, they also inspired a culture where other companies also do that sometimes (e.g. Llama). But we should also recognize that relying on the benevolence of 2-3 idealistic billionaires with a browser fetish is a very time and place specific one-off, it's not a thing that can be demanded or generalized.
In general, R&D is costly and requires incentives. Patent pools aren't perfect, but they do work well enough to always be defining the state-of-the-art and establish global standards too (digital TV, DVDs, streaming.... all patent pool based mechanisms).
> Google funding free stuff is not a real social mechanism.
It's not a social mechanism. And it's not generosity.
Google pushes huge amounts of video and audio through YouTube. It's in Google's direct financial interest to have better video and audio codecs implemented and deployed in as many browsers and devices as possible. It reduces Google's costs.
Royalty-free video and audio codecs make that implementation and deployment more likely in more places.
> Patent pools aren't perfect
They are a long way from perfect. Patent pools will contact you and say, "That's a nice codec you've got there. It'd be a shame if something happened to it."
Three different patent pools are trying to collect licencing fees for AV1:
The question is more, "who would write the HTTP spec?" except instead of sending text back and forth you need experts in compression, visual perception, video formats, etc
Are you really saying that patents are preventing people from writing the next great video codec? If it were that simple, it would’ve already happened. We’re not talking about a software project that you can just hack together, compile, and see if it works. We’re talking about rigorous performance and complexity evaluations, subjective testing, and massive coordination with hardware manufacturers—from chips to displays.
People don’t develop video codecs for fun like they do with software. And the reason is that it’s almost impossible to do without support from the industry.
> People don’t develop video codecs for fun like they do with software. And the reason is that it’s almost impossible to do without support from the industry.
As someone who led an open source team (of majority volunteers) for nearly a decade at Mozilla, I can tell you that people do work on video codecs for fun, see https://github.com/xiph/daala
Working with fine people from Xiph.Org and the IETF (and later AOM) on royalty free formats Theora, Opus, Daala and AV1 was by far the most fun, interesting and fulfilling work I've had as professional engineer.
Daala had some really good ideas. I only understand the coding tools at the level of a curious codec enthusiast, far from an expert, but it was really fascinating to follow its progress.
Actually, are Xiph people still involved in AVM? It seems like it's being developed a little bit differently than AV1. I might have lost track a bit.
People don't develop video codecs for fun because there are patent minefields.
You don't *have* to add all the rigour. If you develop a new technique for video compression, a new container for holding data, etc, you can just try it out and share it with the technical community.
Well, you could, if you weren't afraid of getting sued for infringing on patents.
> Are you really saying that patents are preventing people from writing the next great video codec? If it were that simple, it would’ve already happened.
You wouldn't know if it had already happened, since such a codec would have little chance of success, possibly not even publication. Your proposition is really unprovable in either direction due to the circular feedback on itself.
I don't do video because I don't work with it, but I do image compression for fun and no profit. I do use some video techniques due to the type of images I am compressing. I don't release because of the minefield. I do it because it's fun. The simulation runs and other tasks often I kick to the cloud for the larger compute needs.
> People don’t develop video codecs for fun like they do with software. And the reason is that it’s almost impossible to do without support from the industry.
Hmm, let me check my notes:
- Quite OK Image format: https://qoiformat.org/
- Quite OK Audio format: https://qoaformat.org/
- LAME (ain't a MP3 Encoder): https://lame.sourceforge.io/
- Xiph family of codecs: https://xiph.org/
Some of these guys have standards bodies as supporters, but in all cases bigger groups formed behind them after they made considerable effort. QOI and QOA were written by a single guy just because he was bored.
For example, FLAC is a worst of all worlds codec for industry to back. A streamable, seekable, hardware-implementable, error-resistant, lossless codec with 8 channels, 32 bit samples, and up to 640KHz sample rate, with no DRM support. Yet we have it, and it rules consumer lossless audio while giggling and waving at everyone.
On the other hand, we have LAME. An encoder which also uses psycho-acoustic techniques to improve the resulting sound quality, and almost everyone is using it because the closed source encoders generally sound lamer than LAME at the same bit-rates. Remember, the MP3 format doesn't have a reference encoder. If the decoder can read the file and it sounds the way you expect, then you have a valid encoder. There's no spec for that.
> Are you really saying that patents are preventing people from writing the next great video codec?
Yes, yes, and, yes. MPEG and similar groups openly threatened free and open codecs by opening "patent portfolio forming calls" to create portfolios to fight with these codecs, because they are terrified of being deprived of their monies.
If patents and license fees are not a problem for these guys, can you tell me why all professional camera gear which can take videos only comes with "personal, non-profit and non-professional" licenses on board, and you have to pay blanket extort ^H^H^H^H^H licensing fees to these bodies to take a video you can monetize?
For the license disclaimers in camera manuals, see [0].
Patents, by design, give inventors claims to ideas, which gives them the money to drive progress at a pace that meets their business needs.
Look at data compression. Sperry/Univac controlled key patents and slowed down invention in the space for years. Was it in the interest of these companies or Unisys (their successor) to invest in compression development? Nope.
That’s by design. That moat of exclusivity makes it difficult to compensate people to come up with novel inventions in-scope or even adjacent to the patent. With codecs, the patents are very granular and make it difficult for anyone but the largest players with key financial interests to do much of anything.
Roughly 15,600 developers from more than 1,400 companies have contributed to the Linux kernel since the adoption of Git made detailed tracking possible
The Top 10 organizations sponsoring Linux kernel development since the last report include Intel, Red Hat, Linaro, IBM, Samsung, SUSE, Google, AMD, Renesas and Mellanox
---
curl does seem to be an outlier, but you still need to answer the question: "Who would develop video codecs?" You can't just say "Linux appeared out of thin air", because that's not what happened.
Linux has funding because it serves the interests of a large group of companies that themselves have a source of revenue.
(And to be clear, I do not think that is a bad thing! I prefer it when companies write open source software. But it does skew the design of what open source software is available.)
You could say "Linux was CREATED out of thin air", and I wouldn't argue with you.
But creation only counts for so much -- without support, Linux could still be a hobby project that "won't be big and professional like GNU"
I'm saying Linux didn't APPEAR out of thin air, or at least it's worth looking deeper into the reasons why. "Appearing" to the general public, i.e. making widely useful software, requires a large group of people over a sustained time period, like 10 years.
----
i.e. Right NOW there are probably hundreds of projects like Linux that you haven't heard of, which don't necessarily align with funders
I would actually make the comparison to GNU -- GNU is a successful project, but there are various efforts underneath it that kind of languish.
I'm saying that VIDEO CODECS might be structurally more similar to these projects, than they are to the Linux kernel.
i.e. making a freely-licensed kernel IS aligned with Red Hat, Intel, Google, but making an Intelligent Personal Assistant is probably not.
Somebody probably ALREADY created a good free intelligent personal assistant (or one that COULD BE as great as Linux), but you never heard of them. Because they don't have hundreds of companies and thousands of people aligned with them.
I've used and developed for Linux since 1994 (long before major commercial interests), and I work for Red Hat so it's unlikely I misunderstand how Linux was and is developed.
> It’s like saying ”where would we be with AI if Google, OpenAI and Nvidia didn’t have an iron grip”.
We'd be where we are. All the codec-equivalent aspects of their work are unencumbered by patents and there are very high quality free models available in the market that are just given away. If the multimedia world had followed the Google example it'd be quite hard to complain about the codecs.
That’s hardly true. Nvidia’s tech is covered by patents and licenses. Why else would it be worth 4.5 trillion dollars?
The top AI companies use very restrictive licenses.
I think it’s actually the other way around and AI industry will actually end up following the video coding industry when it comes to patents, royalties, licenses etc.
Because they make and sell a lot of hardware. I'm sure they do have a lot of patents and licences, but if all that disappeared today it'd be years to decades before anyone could compete with them. Even just getting a foot in the door in TSMC's queue of customers would be hard. Their valuation can likely be justified based on their manufacturing position alone. There is literally no-one else who can do what they do, law or otherwise.
If it is a matter of laws, China would just declare the law doesn't count to dodge around the US chip sanctions. Which, admittedly, might happen - but I don't see how that could result in much more freedom than we already have now. Having more Chinese people involved is generally good for prices, but that doesn't have much to do with market structure as much as they work hard and do things at scale.
> The top AI companies use very restrictive licenses.
The top AI companies don't release their best models under any license. They're not even distributed at all. If you did steal the weights out from underneath Anthropic they would take you to court and probably win. Putting software you develop exclusively behind a network interface is a form of ultra-restrictive DRM. Yes, some places are currently trying to buy mindshare by releasing free models and that's fantastic, thank you, but they can only do that because investors believe the ROI from proprietary firewalled models will more than fund it.
NVIDIA's advantage over AMD is largely in the drivers and CUDA, i.e. their software. If it weren't for IP law, or if NVIDIA had foolishly made their software fully open source, AMD could have just forked their PTX compiler and NVIDIA's advantage would never have been established. In turn that'd have meant they wouldn't have any special privileges at TSMC.
I'm not opposed to codecs having patents but Chiariglione set up a system where each codec has as many patent holders as possible and any one of those patent holders could hold the entire world hostage. They should have set up the patent pool and pricing before developing each codec and not allowed any techniques in the standard that aren't part of the pool.
> Who would develop those codecs? A good video coding engineer costs about 100-300k USD a year. The really good ones even more. You need a lot of them.
How about governments? Radar, Laser, Microwaves - all offshoots of US military R&D.
There's nothing stopping either the US or European governments from stepping up and funding academic progress again.
It seems that you have a massive misunderstanding of how this works.
University research labs, usually with a team of no more than 10 people (at most 20), are good at producing early, proof-of-concept work, but not incredibly complex projects like creating an actual codec. They are not known for producing polished, mature commercial products that can be immediately used in the real world. They don't have the resources or the incentive to do so.
The really silly part is that even if you have a license from MPEG LA for your product, you still have to put in a notice like this:
THIS PRODUCT IS LICENSED UNDER THE AVC PATENT PORTFOLIO LICENSE FOR THE PERSONAL AND NON-COMMERCIAL USE OF A CONSUMER TO (I) ENCODE VIDEO IN COMPLIANCE WITH THE AVC STANDARD ("AVC VIDEO") AND/OR (II) DECODE AVC VIDEO THAT WAS ENCODED BY A CONSUMER ENGAGED IN A PERSONAL AND NON-COMMERCIAL ACTIVITY AND/OR WAS OBTAINED FROM A VIDEO PROVIDER LICENSED TO PROVIDE AVC VIDEO. NO LICENSE IS GRANTED OR SHALL BE IMPLIED FOR ANY OTHER USE. ADDITIONAL INFORMATION MAY BE OBTAINED FROM MPEG LA, L.L.C. SEE HTTP://WWW.MPEGLA.COM
It's unclear whether this license covers videoconferencing for work purposes (where you are paid, but not specifically to be on that call). It seems to rule out remote tutoring.
MPEG LA probably did not have much choice here because this language requirement (or language close to it) for outgoing patent licenses is likely part of their incoming patent license agreements. It's probably impossible at this point to renegotiate and align the terms with how people actually use video codecs commercially today.
But it means that you can't get a pool license from MPEG LA that covers commercial videoconferencing, you'd have to negotiate separately with the individual patent holders.
MPEG-7 includes a binary XML standard [0] which is quite useful IMHO in comparison to others (I think it is used in DVB Meta data streams). But beyond patents it is even hard to find open documentation of BIM. I think the group was technically quite competent in comparison with other standard groups, but the business models around it really turn me off.
> "Patents on h264, h265, and even mp3 have been holding the industry back for decades. Imagine what we might have if their iron grip on codecs was broken."
Has AV1 solved this, to some extent? Although there are patent claims against it (patents for technologies that are fundamental to all the modern video codecs), it still seems better than the patent & licensing situation for h264 / h265.
This might be an oversimplification, but as a consumer, I think I see a catch-22 for new codecs. Companies need a big incentive to invest in them, which means the codec has to be technically superior and safe from hidden patent claims. But the only way to know if it's safe is for it to be widely used for a long time. Of course, it can't get widely used without company support in the first place. So, while everyone waits, the technology is no longer superior, and the whole thing fizzles out.
Companies only need a big incentive to invest in new codecs because creating a codec that has a simple incremental improvement would violate existing patents.
Not all codecs are equal, and to be honest, most are probably not optimized/suitable for today's applications, otherwise Google wouldn't have invented their own codec (which then gets adopted widely, fortunately).
Yes, because mpeg got there first, and now their dominance is baked into silicon with hardware acceleration. It's starting to change at last but we have a long way to go. That way would be a lot easier if their patent portfolio just died.
The fact h264 and h265 are known by those terms is key to the other part of the equation: the ITU Video Coding Experts Group has become the dominant forum for setting standards going back to at least 2005.
> all the investments (collectively hundreds of millions USD) made by the industry for the new video codec will go up in smoke and AOM’s royalty free model will spread to other business segments as well.
He is not a coder, not a researcher, he is only part of the worst game there is in this industry: a money maker from patents and "standards" you need to pay for to use, implement or claim compatibility.
> At long last everybody realises that the old MPEG business model is now broke
And the entire post is about how dysfunctional MPEG is and how AOM rose to deal with it. It is tragic to waste so much time and money only to produce nothing. He's criticizing the MPEG group and their infighting. He's literally criticizing MPEG's licensing model and the leadership of the companies in MPEG. He's an MPEG member saying MPEG's business model is broken yet no one has a desire to fix it, so it will be beaten by a competitor. Would you not want to see your own organization reform rather than die?
Reminder AOM is a bunch of megacorps with profit motive too, which is why he thinks this ultimately leads to stalled innovation:
> My concerns are at a different level and have to do with the way industry at large will be able to access innovation. AOM will certainly give much needed stability to the video codec market but this will come at the cost of reduced if not entirely halted technical progress. There will simply be no incentive for companies to develop new video compression technologies, at very significant cost because of the sophistication of the field, knowing that their assets will be thankfully – and nothing more – accepted and used by AOM in their video codecs.
> Companies will slash their video compression technology investments, thousands of jobs will go and millions of USD of funding to universities will be cut. A successful “access technology at no cost” model will spread to other fields.
Money is the motivator. Figuring out how to reward investment in pushing the technology forward is his concern. It sounds like he is open to suggestions.
> There will simply be no incentive for companies to develop new video compression technologies, at very significant cost because of the sophistication of the field, knowing that their assets will be thankfully – and nothing more – accepted and used by AOM in their video codecs.
I don't think he fully considered the motivations of Alliance members like Google (YouTube), Meta and Netflix and the lengths they'll go to optimize operational costs of delivering content to improve their bottom line.
A business model that was always a force that slowed down development, implementation and adoption is not something that should be "fixed". MPEG dying is something to celebrate, not whine about.
He first points out that a royalty-free format was actually better than the patent-pending alternative that he was responsible for pushing.
In the end, he concludes that the progress of video compression would stop if developers can't make money from patents, providing a comparison table on codec improvements that conveniently omits the aforementioned royalty-free codec being better than the commercial alternatives pushed by his group.
Besides the above fallacy, the article is simply full of boasting about his own self-importance and religious connotations.
The article does not give much beyond what you already read in the title. What obscure forces, and how? Isn’t it an open standards non-profit organisation, so what could possibly hinder it?
Maybe because technologically closed standards became better, and a nonprofit project has no resources to compete with commercial standards?
The USB Alliance has been able to work things out, so maybe compression standards should be developed in a similar way?
From Leonardo, who founded MPEG, on the page linked:
"Even before it has ceased to exists, the MPEG engine had run out of steam – technology- and business wise. The same obscure forces that have hijacked MPEG had kept it hostage to their interests impeding its technical development and keeping it locked to outmoded Intellectual Property licensing models delaying market adoption of MPEG standards. Industry has been strangled and consumers have been deprived of the benefits of new technologies. From facilitators of new opportunities and experiences, MPEG standards have morphed from into roadblocks."
In general, lossless compression works by predicting the next (letter/token/frame) and then encoding the difference from the prediction in the data stream succinctly. The better you predict, the less you need to encode, the better you compress.
The flip side of this is that all fields of compression have a lot to gain from progress in AI.
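A minimal sketch of that predict-then-encode idea (illustrative only, not any real codec; it assumes numpy is available and uses zlib as a stand-in for a proper entropy coder): predict each sample from the previous one and store only the residuals. The better the prediction, the less is left to encode.

    import zlib
    import numpy as np

    rng = np.random.default_rng(0)
    # Fake "signal": a random walk, like neighbouring pixels or audio samples.
    signal = (np.cumsum(rng.integers(-2, 3, 10_000)) + 1000).astype(np.int16)

    raw = signal.tobytes()
    # Predictor: "the next sample equals the previous one"; residual = actual - predicted.
    residuals = np.diff(signal, prepend=signal[0]).astype(np.int16).tobytes()

    print(len(zlib.compress(raw)))        # larger
    print(len(zlib.compress(residuals)))  # much smaller: good prediction leaves tiny residuals

A real video codec swaps in far better predictors (motion compensation, intra prediction) and a far better entropy coder, but the shape of the pipeline is the same.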
Which is an interesting view when applied to the IP. I think it's relatively uncontroversial that an MP4 file which "predicts" a Disney movie which it was "trained on" is a derived work. Suppose you have an LLM which was trained on a fairly small set of movies and you could produce any one on demand; would that be treated as a derived work?
If you have a predictor/compressor LLM which was trained on all the movies in the world, would that not also be infringement?
MP4s are compressed data, not a compression algorithm. An MP4 (or any compressed data) is not a “prediction”, it is the difference between what was predicted and what you’re trying to compress.
An LLM is (or can be used as) a compression algorithm, but it is not compressed data. It is possible to have an overfit model exactly predict (or reproduce) an output, but it’s not possible for one to reproduce all the outputs due to the pigeonhole principle.
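The pigeonhole point is just counting, which a tiny sketch (plain Python, nothing codec-specific) makes concrete:

    # There are 2**n distinct n-bit inputs, but only 2**n - 1 bit strings
    # shorter than n bits, so no lossless scheme can shrink every input.
    n = 16
    inputs = 2 ** n
    shorter_outputs = sum(2 ** k for k in range(n))  # lengths 0 .. n-1
    print(inputs, shorter_outputs)  # 65536 vs 65535

Whatever the model memorises, some other input has to come out longer.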
It is like upscaling. If you could train AI to "upscale" your audio or video you could get away with sending a lot less data. It is already being done with quite amazing results for audio.
This makes zero sense, right? Even if this were applicable, why would it need a standard? There is no interoperability between game servers of different games.
Goodbye MPEG group, and to be frank, good riddance I think. I'm glad that open codecs are now taking over on the frontier of SOTA encoding.
Maybe these sorts of handshake agreements and industry collaboration were necessary to get things rolling in 198x. If so, then I thank the MPEG group for starting that work. But by 2005 or so when DivX and XviD and h264 were heating up, it was time to move beyond that model towards open interoperability.
This. You should have to declare the value of a patent and pay 1% of that value every year to the government. Anyone else can force-purchase it for that value, leaving you with a free perpetual license.
> The same obscure forces that have hijacked MPEG had kept it hostage to their interests impeding its technical development and keeping it locked to outmoded Intellectual Property licensing models delaying market adoption of MPEG standards. Industry has been strangled and consumers have been deprived of the benefits of new technologies.
Copyright is cancer. The faster the AI industry runs it into the ground, the better.
There's nothing obscure about them.
His comment immediately after describes exactly what happened:
> Even before it has ceased to exists, the MPEG engine had run out of steam – technology- and business wise. The same obscure forces that have hijacked MPEG had kept it hostage to their interests impeding its technical development and keeping it locked to outmoded Intellectual Property licensing models delaying market adoption of MPEG standards. Industry has been strangled and consumers have been deprived of the benefits of new technologies. From facilitators of new opportunities and experiences, MPEG standards have morphed from into roadblocks.
Big companies abused the setup that he was responsible for. Gentlemen's agreements to work together for the benefit of all got gamed into patent landmines and it happened under his watch.
Even many of the big corps involved called out the bullshit, notably Steve Jobs refusing to release a new Quicktime till they fixed some of the most egregious parts of AAC licencing way back in 2002.
https://www.zdnet.com/article/apple-shuns-mpeg-4-licensing-t...
From ZiffDavis article: > QuickTime 6 media player and QuickTime Broadcaster, a free application that aims to simplify using MPEG-4 in live video feeds over the Net.
It was sweet to see “over the Net”…
I think video over Internet could be a huge business.
In 1998, the idea seemed so ridiculous, TheOnion mocked it:
https://theonion.com/new-5-000-multimedia-computer-system-do...
At the time, the mocking was well deserved. I remember downloading trailers for moves over my dial-up connection. Took the entire night for 3 minutes of video. Can’t imagine paying $5k for that privilege.
Today though, the mocking doesn’t make sense and is confusing. I haven’t ever owned a TV.
By 99 it wasn't that bad. I remember screaming along with V.92 56k modems. Futurama episodes were about 50mb encoded as RealVideo and took a mere two and a half hours to download o.0
(and it really was v.92; I still have the double-bong towards the end of the handshake emblazoned in my memory)
I downloaded episodes of South Park using eMule over dial-up. It took days.
Well back then there was a huge difference in the Internet experience between people at universities and other places with T1s and other fast connections, and everyone else on dial-up. There was a lot of full-length video downloading at universities by 2000. But even on dial-up I seem to remember realplayer and other UDP dumps being pretty popular around this time.
Picking 300MB as a ridiculous amount of data to download dates that nicely without needing to look at the article header.
Though using the codecs and hardware of that time I doubt the quality at even that size would be great. Compare an old 349MB (sized to fit two on a CD-R/-RW, likely 480p though smaller wasn't uncommon) cap of a Stargate episode picked up in the early/mid 20XXs to a similarly sized file compressed using h265 or even h264 on modern hardware.
I appreciate the usage of SG-1 as an example, as I definitely still have several seasons of SG-1 episodes of that size floating around old hard drives somewhere. XVID, of course.
Haha that article is wild. Thanks for sharing
I wonder if the 6000 series from nvidia will finally be able to deliver on the prognostication of being able to make toast with a PC?
You can make a flambé with Nvidia’s new 12VHPWR connectors
I remember when YouTube first appeared and my thought was "This is a really nice service. It's going to be a shame in a couple of years when it runs out of VC money and shuts down."
I also remember when they went through and re-encoded all of the videos so they could play on the original model iPhone.
It's a fad. I'm going long on Blockbuster.
As someone who hasn't had any exposure to the human stories behind mpeg before, it feels to me like it's been a force for evil since long before 2020. Patents on h264, h265, and even mp3 have been holding the industry back for decades. Imagine what we might have if their iron grip on codecs was broken.
Possibly, nothing. Codec development is slow and expensive. Free codecs only came along at all because Google decided to subsidize development but that became possible only 15 years or so after MPEG was born, and it's hardly a robust strategy. Plus free codecs were often built by acquiring companies that had previously been using IP licensing as a business model rather than from-scratch development.
I avoided a career in codecs after spending about a year in college learning about them. The patent minefield meant I couldn't meaningfully build incremental improvements on what existed, and the idea of diligently dancing around existing patents and then releasing something which intentionally lacked state-of-the-art ideas wasn't compelling.
Codec development is slow and expensive because you can't just release a new codec; you have to dance around patents.
Well, a career in codec development means you'd have done it as a job, and so you'd have been angling for a job at the kind of places that enter into the patent pools and contribute to the standards.
I don’t know about you, but I became a software engineer to write code for myself and my own interests, not to get a job where all of my labor will be vacuumed up and exploited to maximize anonymous shareholder value.
That's all great and noble, but at the end of the day it's about who has the resources. If you can get the necessary resources yourself and have complete control over their allocation, congratulations you won the jackpot of life. Plenty of people, some of whom are smarter and better than you, tried to do the same and failed due to reasons beyond their control. Try to remain a good person and not waste the opportunity if you ever get to that stage.
For the remaining 99.99% of us, we have to negotiate for resources as best we can. That typically means maximizing shareholder value in exchange for a cut of the profits. Not all your labor needs to be vacuumed up, I make enough to support my family, live a relatively safe and comfortable life with some minor luxuries and likely a secure retirement. Better deal than most people get today.
Why are you arguing so hard against someone that simply stated “I was interested in pursuing this topic as a career when I was in college but then I learned more about the field and decided to pursue something else”?
Different posters, Taek made the post you're referring to, I'm responding to voakbasda.
Regardless, why are you white-knighting for him? He made a moral argument about career choice, and I responded to said argument as someone who took the other side. This is a discussion board, we discuss things.
More codec development work is done outside of patent-centric organizations by a significant margin. Just like in any other technological/communications standards domain, the most significant impetus comes from the drive to make superior products.
Work inside patent-driven development groups also suffers substantial complexity bloat, because there is a huge incentive for each participant to get a patentable component into the standard in order to benefit from cross-licensing. Often these 'improvements' are insignificant or even a net loss (the cost of the bitstream needed to signal them is greater than their improvement over any credible collection of material).
Software patents aren't an issue in much of the world; the reason I thought there wasn't much of a career in codec development was that it was obvious that it needed to move down into custom ASICs to be power-efficient, at which point you can no longer develop new ones until people replace all their hardware.
By the time software is robust enough to make it worth while to be placed into hardware, it's pretty damn efficient. For something like ASICs, you could at least upgrade the firmware with new code, but what about Apple's chips that do the decoding? Can they be upgraded, or does that mean needing to wait for the M++ chip?
Typically you wait for the new chip.
Sometimes there are hybrid coders that can use some of the resources on the chip and some shader code to handle new codecs or codec features after the fact, but you pay a power and performance penalty to use these.
Software patents aren't an issue in most of the world. Codecs however are used all over the world. No one is going to use a codec that is illegal to use in the US and EU.
EU would be one of the places that doesn't have software patents, which is why VLC is based there.
It's not that simple. Software patents exist in the EU, though the requirements are much stricter. For example, Netflix was ordered to cease its use of H265 in Germany: https://www.nexttv.com/news/achtung-baby-netflix-loses-paten...
IP law, especially defence against submarine patents, makes codec development expensive.
In the early days of MPEG codec development was difficult, because most computers weren't capable of encoding video, and the field was in its infancy.
However, by the end of '00s computers were fast enough for anybody to do video encoding R&D, and there was a ton of research to build upon. At that point MPEG's role changed from being a pioneer in the field to being an incumbent with a patent minefield, stopping others from moving the field forward.
That's unnecessarily harsh. Patent pools exist to promote collaboration in a world with aggressive IP legislation, they are an answer to a specific environment and they incentivize participants to share their IP at a reasonable price to third parties. The incentive being that you will be left out of the pool, the other members will work around your patents while not licensing their own patents to you, so your own IP is now worthless since you can't work around theirs.
As long as IP law continues in the same form, the alternative to that is completely closed agreements among major companies that will push their own proprietary formats and aggressively enforce their patents.
The fair world where everyone is free to create a new thing, improve upon the frontier codecs, and get a fair reward for their efforts, is simply a fantasy without patent law reform. In the current geopolitical climate, it's very very unlikely for nations where these developments traditionally happened, such as US and western Europe, to weaken their IP laws.
>> That's unnecessarily harsh. Patent pools exist to promote collaboration in a world with aggressive IP legislation, they are an answer to a specific environment and they incentivize participants to share their IP at a reasonable price to third parties.
You can say that, but this discussion is in response to the guy who started MPEG and later shut it down. I don't think he'd say its harsh.
They actually messed up the basic concept of a patent pool, and that is the key to their failure.
They didn't get people to agree on terms up front. They made the final codec with interlocking patents from hundreds of parties embedded, made no attempt to avoid random outsiders' patents, and then, once it was done, tried to come to a licence agreement when every minor patent holder had an effective veto over the resulting pool. That's how you end up with multiple pools plus people who own patents and aren't members of any of the pools. It's ridiculous.
My minor conspiracy theory is that if you did it right, then you'd basically end up with something close to open source codecs as that's the best overall outcome.
Everyone benefits from only putting in freely available ideas. So if you want to gouge people with your patents you need to mess this up and "accidentally" create a patent mess.
Patent pools exist to make an infeasible system look not so infeasible, so people won't recognize how it's stifling innovation and abolish it.
IP law and the need for extremely smart people with a rare set of narrow skills. It's not like codec development magically happens for free if you ignore patents.
The point is, if there had been no incentives to develop codecs, there would have been no MPEG. Other people would have stepped into the void and sometimes did, e.g. RealVideo, but without legal IP protection the codecs would just have been entirely undocumented and heavily obfuscated, relying on tamper-proofed ASICs much faster.
You continue to make the same unsubstantiated claims about codecs being hard and expensive. These same tropes were said about every other field, and even if true, we have tens of thousands of folks that would like to participate, but are locked out due to broken IP law.
The firewall of patents exists precisely because digital video is a way to shake down the route media has to travel to get to the end user.
Codecs are not, "harder than" compilers, yet the field of compilers was blown completely open by GCC. Capital didn't see the market opportunity because there wasn't the same possibility of being a gatekeeper for so much attention and money.
The patents aren't because it is difficult, the patents are there because they can extract money from the revenue streams.
Codecs not harder than compilers? Sounds like an unsubstantiated claim!
Modern video codecs are harder than compilers. You have to have good ASIC development expertise to do them right, for example, which you don't need for compilers. It's totally feasible for a single company to develop a leading edge compiler whereas you don't see that in video codecs, historically they've been collaborations.
(I've worked on both codecs and compilers. You may be underestimating the difficulty of implementing sound optimizers).
Hardware vendors don't benefit from the patent pools. They usually get nothing from them, and are burdened by having to pass per-unit licensing costs on to their customers.
It's true that designing an ASIC-friendly codec needs special considerations, and benefits from close collaboration with hardware vendors, but it's not magic. The general constraints are well-known to codec designers (in open-source too). The commercial incentives for collaboration are already there — HW vendors will profit from selling the chipsets or licensing the HW design.
The patent situation is completely broken. The commercial codecs "invent" coding features of dubious utility, mostly unnecessary tweaks on old stuff, because everyone wants to have their patent in the pool. It ends up being a political game, because the engineering goal is to make the simplest most effective codec, but the financial incentive is to approve everyone's patented add-ons regardless of whether they're worth the complexity or not.
Meanwhile everything that isn't explicitly covered by a patent needs to be proven to be 20 years old, and this limits MPEG too. Otherwise nobody can prove that there won't be any submarine patent that could be used to set up a competing patent pool and extort MPEG's customers.
So our latest-and-greatest codecs are built on 20-year-old ideas, with or without some bells and whistles added. The ASICs often don't use the bells and whistles anyway, because the extra coding features may not even be suitable for ASICs, and usually have diminishing returns (like 3x slower encode for 1% better quality/filesize ratio).
With all due respect, to say that codecs are more difficult to get right than optimizing compilers is absurd.
The only reason I can think of why you would say this is that nowadays we have good compiler infrastructure that works with many hardware architectures, and it has become easy to create or modify compilers. But that's only due to the fact that it was so insanely complicated that it had to be redone from scratch to become generalizable, which led to LLVM and the subsequent direct and indirect benefits everywhere. That's the work of thousands of the smartest people over 30 years.
There is no way that a single company could develop a state of the art compiler without using an existing one. Intel had a good independent compiler and gave up because open source had become superior.
For what it's worth, look at the state of FPGA compilers. They are so difficult that every single one of them that exists is utter shit. I wish it were different.
> There is no way that a single company could develop a state of the art compiler without using an existing one. Intel had a good independent compiler and gave up because open source had become superior.
Not only can they do it but some companies have done it several times. Look at Oracle: there's HotSpot's C2 compiler, and the Graal compiler. Both state of the art, both developed by one company.
Not unique. Microsoft and Apple have built many compilers alone over their lifespan.
This whole thing is insanely subjective, but that's why I'm making fun of the "unsubstantiated claim" bit. How exactly are you meant to objectively compare this?
That sounds like the 90s argument against FLOSS: without the incentive for people to sell software, nobody would write it.
Software wasn't always covered by copyright, and people wrote it all the same. In fact they even sold it, just built-to-order as opposed to any kind of retail mass market. (Technically, there was no mass market for computers back then so that goes without saying.)
That argument seems to have been proven basically correct, given that a ton of open source development happens only because companies with deep pockets pay for the developers' time. Which makes perfect sense - no matter how altruistic a person is, they have to pay rent and buy food just like everyone else, and a lot of people aren't going to have time/energy to develop software for free after they get home from their 9-5.
Without IP protections that allow copyleft to exist arguably there would be no FOSS. When anything you publish can be leveraged and expropriated by Microsoft et al. without them being obligated to contribute back or even credit you, you are just an unpaid ghost engineer for big tech.
I thought your argument was that Microsoft wouldn't be able to exist in that world. Which is it?
Why would it not be able to exist?
This is still the argument for software copyright. And I think it's still a pretty persuasive argument, despite the success of FLOSS. To this day, there is very little successful consumer software. Outside of browsers, Ubuntu, Libre Office, and GIMP are more or less it, at least outside certain niches. And even they are pretty tiny compared to Windows/MacOS/iOS/Android, Office/Google Docs, or Photoshop.
The browsers are an interesting case. Neither Chrome nor Edge are really open source, despite Chromium being so, and they are both funded by advertising and marketing money from huge corporations. Safari is of course closed source. And Firefox is an increasingly tiny runner-up. So I don't know if I'd really count Chromium as a FLOSS success story.
Overall, I don't think FLOSS has had the kind of effect that many activists were going for. What has generally happened is that companies building software have realized that there is a lot of value to be found in treating FLOSS software as a kind of barter agreement between companies, where maybe Microsoft helps improve Linux for the benefit of all, but in turn it gets to use, say, Google's efforts on Chromium, and so on. The fact that other companies then get to mooch off of these big collaborations doesn't really matter compared to getting rid of the hassle of actually setting up explicit agreements with so many others.
The value of OSS is estimated at about $9 trillion. That’s more valuable than any company on earth.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4693148
Sure. Almost all of it supported by companies who sell software, hardware, or ads.
> don't think FLOSS has had the kind of effect that many activists were going for
The entire internet, end to end, runs on FLOSS.
That's great, but it's not what FLOSS activists hoped and fought for.
It's still almost impossible to have a digital life that doesn't involve significant use of proprietary software, and the vast majority of users do their computing almost exclusively through proprietary software. The fact that this proprietary software is a bit of glue on top of a bunch of FLOSS libraries possibly running on a FLOSS kernel that uses FLOSS libraries to talk to a FLOSS router doesn't really buy much actual freedom for the end users. They're still locked in to the proprietary software vendors just as much as they were in the 90s (perhaps paying with their private data instead of actual money).
If you ignore the proprietary routers, the proprietary search engines, the proprietary browsers that people use out-of-the-box (Edge, Safari and even Chrome), and the fact that Linux is a clone of a proprietary OS.
>> That sounds like the 90s argument against FLOSS
> This is still the argument for software copyright.
And open source licensing is based on and relies on copyright. Patents and copyright are different kinds of intellectual property protection and incentivize different things. Copyright in some sense encourages participation and collaboration because you retain ownership of your code. The way patents are used discourages participation and collaboration.
On my new phone I made sure to install F-Droid first thing, and it's surprising how many basic functions are covered by free software if you just bother to look.
> ...it's hardly a robust strategy.
I disagree. Video is such a large percentage of internet traffic and licensing fees are so high that it becomes possible for any number of companies to subsidize the development cost of a new codec on their own and still net a profit. Google certainly spends the most money, but they were hardly the only ones involved in AV1. At Mozilla we developed Daala from scratch and had reached performance competitive with H.265 when we stopped to contribute the technology to the AV1 process, and our team's entire budget was a fraction of what the annual licensing fees for H.264 would have been. Cisco developed Thor on their own with just a handful of people and contributed that, as well. Many other companies contributed technology on a royalty-free basis. Outside of AV1, you regularly see things like Samsung's EVC (or LC-EVC, or APV, or...), or the AVS series from the Chinese.... If the patent situation were more tenable, you would see a lot more of these.
The cost of developing the technology is not the limitation. I would argue the cost to get all parties to agree on a common standard and the cost to deploy it widely enough for people to rely on it is much higher, but people manage that on a royalty-free basis for many other standards.
You’re comparing apples to oranges.
Daala was never meant to be widely adopted in its original form — its complexity alone made that unlikely. There’s a reason why all widely deployed codecs end up using similar coding tools and partitioning schemes: they’re proven, practical, and compatible with real-world hardware.
As for H.265, it’s the result of countless engineering trade-offs. I’m sure if you cherry-picked all the most experimental ideas proposed during its development, you could create a codec that far outperforms H.265 on paper. But that kind of design would never be viable in a real-world product — it wouldn’t meet the constraints of hardware, licensing, or industry adoption.
Now the following is a more general comment, not directed at you.
There’s often a dismissive attitude toward the work done in the H.26x space. You can sometimes see this even in technical meetings, when someone proposes a novel but impractical idea and gets frustrated when others don’t immediately embrace it. But there’s a good reason for the conservative approach: codecs aren’t just judged by their theoretical performance; they have to be implementable, efficient, and compatible with real-world constraints. They also have to somehow make financial sense and cannot be given away without some form of compensation.
Mozilla is just Google from a financial perspective, it's not an independent org, so the financing point stands.
H.264 was something like >90% of all video a few years ago and wasn't it free for streaming if the end user wasn't paying? IIRC someone also paid the fees for an open source version. There were pretty good licensing terms available and all the big players have used it extensively.
Anyway, my point was only that expecting Google to develop every piece of tech in the world and give it all away for free isn't a general model for tech development, whereas IP rights and patent pools are. The free ride ends the moment Google decide they need more profit, feel threatened in some way or get broken up by the government.
Part of the reason h.264 was such a big percentage of video was that they messed up the licencing of the follow-up that was supposed to supplant it so badly.
Not that the licencing of h.264 wasn't a mess too. You suggest it was free for web use, but they originally only promised not to charge for free streaming until 2015, and reserved the right to do so once it was embedded in the web. Pressure from Google/Xiph/etc's WebM project forced them to promise not to enforce it after that point either.
https://www.wired.com/2010/08/mpeg-la-extends-web-video-lice...
Cisco paid for a binary version of a decoder that could be downloaded by Firefox as a plugin. They could only do so because of a loophole around a cap in fees that they were already hitting so it wouldn't cost them more to supply to every Firefox user.
> Free codecs only came along … and it's hardly a robust strategy
Maybe you don’t remember the way the gif format (there was no jpeg, png, or webp initially) had problems with licensing, and then the scares years later about it potentially becoming illegal to use gifs. Here’s a mention of some of the problems with Unisys, though I didn’t find info about these scares on Wikipedia’s GIF or Compuserve pages:
https://www.quora.com/Is-it-true-that-in-1994-the-company-wh...
Similarly, there’s the awful history of digital content restriction technology in general (DRM, etc.). I’m not against companies trying to protect assets, but data assets have always been inherently prone to “use”, whether or not that use was intended by the one that provided the data. The problem has always been the means of dissemination, not that the data itself needed to be encoded with a lock that anyone with the key (or the means to get or make one) could unlock, nor that it should need to call home, basically preventing the user from legitimately being able to use the data.
> I didn’t find info about these scares on Wikipedia’s GIF or Compuserve pages
The GIF page on wikipedia has an entire section for the patent troubles https://en.wikipedia.org/wiki/GIF#Unisys_and_LZW_patent_enfo...
It's not just about new codecs. There's also people making products that would use codecs just deciding not to because of the patent hassle.
"Free codecs only came along at all because Google decided to subsidize development but that became possible only 15 years or so after MPEG was born, and it's hardly a robust strategy"
I don't know about video codecs but MP3 (also part of MPEG) came out of Fraunhofer and was paid by German tax money. It should not have been patented in the first place (and wasn't in Germany).
> Free codecs only came along at all because Google decided to subsidize development but that became possible only 15 years or so after MPEG was born
The release of VP3 as open source predates Google's later acquisition of On2 (2010) by nearly a decade.
Free codecs have been available a long time, surely, as we could install them in Linux distributions in 2005 or earlier?
(I know nothing about the legal side of all this, just remembering the time period of Ubuntu circa 2005-2008).
Free codecs without patent issues were limited to things like Vorbis which never got wide support. There were FOSS codecs for patented algorithms, but those had legal issues in places that enforce software patents.
AV1, VP9, and Opus are used on YouTube and Netflix right now.
It's hard to get more mainstream than YouTube and Netflix.
Now, not in 2005.
> which never got wide support
Source? I’ve seen Vorbis used in a whole bunch of places.
Notably, Spotify only used Vorbis for a while (still does, but also includes AAC now, for Apple platforms I think).
Pre-Spotify, MP3 players would usually only ship with MP3 support (thus the name), so people would only rip to MP3. Ask any millennial and most of them will never have heard of Ogg.
Pre-Spotify (and pre-iPod) there were plenty of cheap MP3 players that also supported Ogg Vorbis. I owned one, for example. Obviously MP3 was THE standard, but Vorbis reached a good adoption HW wise (basically because it was free as in beer to implement)
I also owned one but I had to look for it. It certainly wasn’t “widely supported.”
Have a look at audio hardware from 10-15 years ago (so long after the mp3 player wave ended in first world countries): basically everything that plays mp3 plays ogg vorbis as well.
Of course, but this is not what I’d call “never got wide support”.
So you’d say that a format that most consumers couldn’t use (because only a few devices could play it) is “widely supported?”
There are a lot more audio codecs embedded in other things than there ever were personal music players, by orders of magnitude. Vorbis was ubiquitous in video games, for example.
I had an MP3 player that did Vorbis.
For the uninitiated, could you describe why codec development is slow and expensive?
It’s a bit like developing an F1 car. Or a cutting edge airplane. Lots of small optimizations that have to work together. Sometimes big new ideas emerge but those are rare.
Until the new codec comes together, all those small optimizations aren’t really worth much, so it’s a long-term research project with potentially zero return on investment.
And yes, most of the small optimizations are patented, something that I’ve come to understand isn’t viewed very favorably by most.
>> And yes, most of the small optimizations are patented, something that I’ve come to understand isn’t viewed very favorably by most.
Codecs are like infrastructure not products. From cameras to servers to iPhones, they all have to use the same codecs to interoperate. If someone comes along with a small optimization it's hard enough to deploy that across the industry. If it's patented you've got another obstacle: nobody wants to pay the incremental cost for a small improvement (it's not even incremental cost once you've got free codecs, it's a complete hassle).
They're hardware accelerated so it's not worth making a new codec until you have a big improvement over the prior baseline, because it takes a long time to manufacture and roll out devices that are better. Verifying an optimization is worth it requires testing against a big library of videos using standardized perception metrics, it requires ensuring there's an efficient way to decode it in both hardware and software, including efficient encoding. It's easy to improve one kind of input but regress another. Most of the low hanging fruit is taken already. Just the usual stuff that makes advancing the frontier hard.
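As a toy illustration of that evaluation step (a hedged sketch assuming numpy; real testing runs metrics like SSIM or VMAF over large video corpora at many bitrates, not plain PSNR on one synthetic frame):

    import numpy as np

    def psnr(original: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
        # Peak signal-to-noise ratio in dB between an original and a decoded frame.
        mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else float(10 * np.log10(peak ** 2 / mse))

    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, (720, 1280), dtype=np.uint8)         # stand-in "original"
    noisy = frame.astype(np.float64) + rng.normal(0, 3, frame.shape)  # stand-in "decoded" with coding noise
    decoded = np.clip(noisy, 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(frame, decoded):.2f} dB")

An optimization only earns its complexity if numbers like these improve consistently across a whole test set, at every target bitrate, without blowing the encode/decode budget.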
This is the sort of project that should be developed and released via open source from academia.
Audio and video codecs, document formats like PDF, are all foundational to computing and modern life from government to business, so there is a great incentive to make it all open, and free.
Universities love patent licensing. I don't think academia is the solution you're looking for.
The solution to that is to remove the ability to patent codecs.
I think we should go a step further and remove the ability to patent algorithms (software)
Some people even think we should remove intellectual property.
So do companies.
But education receives a lot of funding from the government.
I think academia should build open source technology (that people can commercialize on their own with the expertise).
Higher education doesn’t need to have massive endowments of real estate and patent portfolio to further educ… administration salaries and vanity building projects.
Academia can serve the world with technology and educated minds.
You're also describing technologies with universal use and potential for long term rent seeking.
Basically MBA drool material.
Yeah, and if MBAs want to reap that reward, they need to fund the development exclusively without government funding.
Incentives in academia, as things stand, are ... uh. Not so awesome.
My expectation from experience when implementing something from a DSP paper is that the result will be unreproducible without contacting the authors for some undisclosed table of magic constants. After obtaining it, the results may match, but only for the test images they reported results on. Results on anything else will be much worse.
Also it's normal for techniques from the literature to have computational/memory bandwidth costs two orders of magnitude greater than justified for even their (usually exaggerated) stated levels of performance.
And then their comparison points are almost always inevitably implemented so naively as to make the comparison useless.
It's always difficult because improvements in this domain (like many other engineering domains) are significantly about tradeoffs ... and tradeoffs are difficult to weigh in a pure research environment without the context of concrete applications. They're also difficult to weigh with implementation cleverness having such a big impact particularly since industry heavily drains academia of naturally skilled software engineers.
And as other comments have pointed out, academia is in some sense among the worst of the patent abusers. They'll often develop technology just far enough to lay patent mines around the field, but not far enough to produce something useful out of it. The risk that you spend the significant effort to turn a concept into something usable only to have some patent holder show up with a decade old patent to shake you down is a big incentive against investment.
This is impossible to know. Not that long ago something like Linux would have sounded like a madman's dream to someone with your perspective. It turns out great innovations happen outside the capitalist for-profit context and denying that is very questionable. If anything, those kinds of setups often hinder innovation. How much better would linux be if it was mired in endless licensing agreements, per monthly rates, had a board full of fortune 500 types, and billed each user a patent fee? Or any form of profit incentive 'business logic'?
If that stuff worked better, linux would have failed entirely; instead, nearly everyone interfaces with a linux machine probably hundreds if not thousands of times a day in some form. Maybe millions, if we consider how complex just accessing internet services is and the many servers, routers, mirrors, proxies, etc. one encounters in just a trivial app refresh. If not linux, then the open mach/bsd derivatives iOS uses.
Then looking even previous to the ascent of linux, we had all manner of free/open stuff informally in the 70s and 80s. Shareware, open culture, etc that led to today where this entire medium only exists because of open standards and open source and volunteering.
Software patents are net loss for society. For profit systems are less efficient than open non-profit systems. No 'middle-man' system is better than a system that goes out of its way to eliminate the middle-man rent-seeker.
"Free codecs only came along at all because Google decided to subsidize development"
No, just no. We've had free community codec packs for years before Google even existed. Anyone remember CCCP?
CCCP was just a collection of existing codecs, they didn't develop their own. Most of the codecs in CCCP were patented. Using it without licenses was technically patent infringement in most places. It's just that nobody ever cared to enforce it on individual end users.
Yes. Those won’t help you if you use them for commercial use and patent holders find out about it.
As one of the people that helped start CCCP and was involved extensively through almost its entire lifespan, I think you misunderstand what it means to be "free" in this case. CCCP was "free as in beer" but not "free as in speech", /many/ of the codecs in CCCP were patent encumbered, but were included because there were open-source implementations of them by authors that didn't care about those patents, and many of the licensing arrangements didn't effectively apply to end-users (either due to language or care to prosecute). CCCP also almost exclusively included /decoders/, but /encoders/ are much more likely to be targeted by licensing authorities.
We started CCCP because at the time, anime fansubs were predominantly traded on P2P filesharing services like Kazaa, Gnutella, eDonkey, Direct Connect, and later Bittorrent. The most popular codec pack at the time was K-Lite / Kazaa Codec Pack which was a complete and utter mess, and specifically for fansubbing, it was hard to get subtitles to work properly unless they were hard embedded. Soft-subbing allowed for improvements, and there were a lot of improvements to subtitling in the fansubbing community over the years, one of the biggest came when the Matroska (MKV) container format came about, that allowed arbitrarily different formats/encodings to share a single media container, and the community shifted almost entirely to ASS formatted subtitles, but because an MKV could contain many different encodings, any given MKV file may play correctly or not on any given system. CCCP was intended to provide an authoritative, canonical, single-source way to play fansubbed anime correctly on Windows, and we achieved that objective.
But let's be clear, nobody involved was under any illusions that the MPEG-LA or any other license holders of for instance h264 were fans of our community or what we're doing. Anime fansubbing at all came out of piracy of foreign-language media into the English market via the Internet and P2P filesharing. None of us gave a shit, and the use of Soviet imagery in the CCCP was exactly a nod to the somewhat communist ideal that knowledge and access to media should be free, and that patent encumbering codecs and patenting software isn't just stupid, it's morally wrong. I still strongly feel software patents are evil.
Nonetheless, at no point during its life was CCCP fully legal/licensed appropriately for usage, and effectively nobody cared, not even the licensing authorities, because the existence of these things made their licenses for encoders more valuable for companies producing media, as it was easier for actual people to consume.
Why not just use AI?
Why not ask about blockchains?
Not sure why you are downvoted as you seem to be one of the few who knows even a little about codec development.
And regarding ”royalty-free” codecs please read this https://ipeurope.org/blog/royalty-free-standards-are-not-fre...
That article is a scare piece designed to spread fear, uncertainty and doubt, to prop up an industry that has already collapsed because everyone else hated them, and make out that they’re the good guys and you should go back to how things were.
> The catch is that while the AV1 developers offer their patents (assuming they have any) on a royalty-free basis, in return they require users of AV1 to agree to license their own patents royalty-free back to them.
Such a huge catch that the companies that offer you a royalty-free license, only do so on the condition that you're not gonna turn around and abuse your own patents against them!
How exactly is that a bad thing?
How is it different from the (unwritten) social contracts of all humans and even of animals? How is it different from the primal instincts?
At least two of the members of ipeurope are companies you could use as an argument for why we shouldn't have patents at all.
> And regarding ”royalty-free” codecs please read this https://ipeurope.org/blog/royalty-free-standards-are-not-fre...
Unsurprisingly companies that are losing money because their rent-seeking on media codecs is now over will spread FUD [0] about royalty free codecs.
[0] https://en.wikipedia.org/wiki/Fear%2C_uncertainty_and_doubt
Who would develop those codecs? A good video coding engineer costs about 100-300k USD a year. The really good ones even more. You need a lot of them. JVET has an attendance of about 350 such engineers each meeting (four times a year).
Not to mention the computer clusters to run all the coding sims, thousands and thousands of CPUs are needed per research team.
People who are outside the video coding industry do not understand that it is an industry. It’s run by big companies with large R&D budgets. It’s like saying ”where would we be with AI if Google, OpenAI and Nvidia didn’t have an iron grip”.
MPEG and especially JVET are doing just fine. The same companies and engineers who worked on AVC, HEVC and VVC are still there with many new ones especially from Asia.
MPEG was reorganized because this Leonardo guy became an obstacle, and he’s been angry about it ever since. Other than that, I’d say business as usual in the video coding realm.
Who would write a web server? Who would write Curl? Who would write a whole operating system to compete with Microsoft when that would take thousands of engineers being paid $100,000s per year? People don't understand that these companies have huge R&D budgets!
(The answer is that most of the work would be done by companies who have an interest in video distribution - eg. Google - but don't profit directly by selling codecs. And universities for the more research side of things. Plus volunteers gluing it all together into the final system.)
Google funding free stuff is not a real social mechanism. It's not something you can point to and say that's how society should work in general.
Our industry has come to take Google's enormous corporate generosity for granted, but there was zero need for it to be as helpful to open computing as it has been. It would have been just as successful with YouTube if Chrome was entirely closed source and they paid for video codec licensing, or if they developed entirely closed codecs just for their own use. In fact nearly all Google's codebase is closed source and it hasn't held them back at all.
Google did give a lot away though, and for that we should be very grateful. They not only released a ton of useful code and algorithms for free, they also inspired a culture where other companies also do that sometimes (e.g. Llama). But we should also recognize that relying on the benevolence of 2-3 idealistic billionaires with a browser fetish is a very time and place specific one-off, it's not a thing that can be demanded or generalized.
In general, R&D is costly and requires incentives. Patent pools aren't perfect, but they do work well enough to always be defining the state-of-the-art and establish global standards too (digital TV, DVDs, streaming.... all patent pool based mechanisms).
> Google funding free stuff is not a real social mechanism.
It's not a social mechanism. And it's not generosity.
Google pushes huge amounts of video and audio through YouTube. It's in Google's direct financial interest to have better video and audio codecs implemented and deployed in as many browsers and devices as possible. It reduces Google's costs.
Royalty-free video and audio codecs makes that implementation and deployment more likely in more places.
> Patent pools aren't perfect
They are a long way from perfect. Patent pools will contact you and say, "That's a nice codec you've got there. It'd be a shame if something happened to it."
Three different patent pools are trying to collect licencing fees for AV1:
https://www.sisvel.com/licensing-programmes/audio-and-video-...
https://accessadvance.com/licensing-programs/vdp-pool/
https://www.avanci.com/video/
These are bad comparisons.
The question is more "who would write the HTTP spec?", except instead of sending text back and forth you need experts in compression, visual perception, video formats, etc.
Did TBL need to patent the HTTP spec?
Are you really saying that patents are preventing people from writing the next great video codec? If it were that simple, it would’ve already happened. We’re not talking about a software project that you can just hack together, compile, and see if it works. We’re talking about rigorous performance and complexity evaluations, subjective testing, and massive coordination with hardware manufacturers—from chips to displays.
People don’t develop video codecs for fun like they do with software. And the reason is that it’s almost impossible to do without support from the industry.
> People don’t develop video codecs for fun like they do with software. And the reason is that it’s almost impossible to do without support from the industry.
As someone who led an open source team (of majority volunteers) for nearly a decade at Mozilla, I can tell you that people do work on video codecs for fun; see https://github.com/xiph/daala
Working with fine people from Xiph.Org and the IETF (and later AOM) on royalty free formats Theora, Opus, Daala and AV1 was by far the most fun, interesting and fulfilling work I've had as professional engineer.
Daala had some really good ideas. I only understand the coding tools at the level of a curious codec enthusiast, far from an expert, but it was really fascinating to follow its progress.
Actually, are Xiph people still involved in AVM? It seems like it's being developed a little bit differently than AV1. I might have lost track a bit.
People don't develop video codecs for fun because there are patent minefields.
You don't *have* to add all the rigour. If you develop a new technique for video compression, a new container for holding data, etc, you can just try it out and share it with the technical community.
Well, you could, if you weren't afraid of getting sued for infringing on patents.
> Are you really saying that patents are preventing people from writing the next great video codec?
Yes, that’s exactly what people are saying.
People are also saying that companies aren’t writing video codecs.
In both cases, they can be sued for patent infringement if they do.
> Are you really saying that patents are preventing people from writing the next great video codec? If it were that simple, it would’ve already happened.
You wouldn't know if it had already happened, since such a codec would have little chance of success, possibly not even publication. Your proposition is really unprovable in either direction due to the circular feedback on itself.
I don't do video because I don't work with it, but I do image compression for fun and no profit. I do use some video techniques due to the type of images I am compressing. I don't release because of the minefield. I do it because it's fun. The simulation runs and other tasks often I kick to the cloud for the larger compute needs.
> People don’t develop video codecs for fun like they do with software. And the reason is that it’s almost impossible to do without support from the industry.
Hmm, let me check my notes:
Some of these guys have standards bodies as supporters, but in all cases bigger groups formed behind them only after they had made considerable effort. QOI and QOA were written by a single guy just because he was bored. For example, FLAC is a worst-of-all-worlds codec for industry to back: a streamable, seekable, hardware-implementable, error-resistant, lossless codec with 8 channels, 32-bit samples, and up to 640KHz sample rates, with no DRM support. Yet we have it, and it rules consumer lossless audio while giggling and waving at everyone.
On the other hand, we have LAME, an encoder which also uses psychoacoustic techniques to improve the resulting sound quality, and almost everyone is using it, because the closed-source encoders generally sound lamer than LAME at the same bit-rates. Remember, the MP3 format doesn't have a reference encoder. If the decoder can read the file and it sounds the way you expect, then you have a valid encoder. There's no spec for that.
> Are you really saying that patents are preventing people from writing the next great video codec?
Yes, yes, and, yes. MPEG and similar groups openly threatened free and open codecs by opening "patent portfolio forming calls" to create portfolios to fight with these codecs, because they are terrified of being deprived of their monies.
If patents and license fees are not a problem for these guys, can you tell me why all professional camera gear that can take videos comes only with "personal, non-profit and non-professional" licenses on board, and why you have to pay blanket extort ^H^H^H^H^H licensing fees to these bodies to take a video you can monetize?
For the license disclaimers in camera manuals, see [0].
[0]: https://news.ycombinator.com/item?id=42736254
Patents, by design, give inventors claims to ideas, which gives them the money to drive progress at a pace that meets their business needs.
Look at data compression. Sperry/Univac controlled key patents and slowed down invention in the space for years. Was it in the interest of these companies or Unisys (their successor) to invest in compression development? Nope.
That’s by design. That moat of exclusivity makes it difficult to compensate people to come up with novel inventions in-scope or even adjacent to the patent. With codecs, the patents are very granular and make it difficult for anyone but the largest players with key financial interests to do much of anything.
> Who would write a whole operating system to compete with Microsoft when that would take thousands of engineers being paid $100,000s per year?
You might be misunderstanding that almost all of Linux development is funded by the same kind of companies that fund MPEG development.
It's not "engineers in their basement", and never was
https://www.linuxfoundation.org/about/members
e.g. Red Hat, Intel, Oracle, Google, and now MICROSOFT itself (the competitive landscape changed)
This has LONG been the case, e.g. an article from 2008:
https://www.informationweek.com/it-sectors/linux-contributor...
2017 Linux Foundation Report: https://www.linuxfoundation.org/press/press-release/linux-fo...
Roughly 15,600 developers from more than 1,400 companies have contributed to the Linux kernel since the adoption of Git made detailed tracking possible
The Top 10 organizations sponsoring Linux kernel development since the last report include Intel, Red Hat, Linaro, IBM, Samsung, SUSE, Google, AMD, Renesas and Mellanox
---
curl does seem to be an outlier, but you still need to answer the question: "Who would develop video codecs?" You can't just say "Linux appeared out of thin air", because that's not what happened.
Linux has funding because it serves the interests of a large group of companies that themselves have a source of revenue.
(And to be clear, I do not think that is a bad thing! I prefer it when companies write open source software. But it does skew the design of what open source software is available.)
> You can't just say "Linux appeared out of thin air", because that's not what happened.
It kinda did though https://en.wikipedia.org/wiki/Linux#Creation !
The corporate support you mentioned arrived years after that.
You could say "Linux was CREATED out of thin air", and I wouldn't argue with you.
But creation only counts for so much -- without support, Linux could still be a hobby project that "won't be big and professional like GNU"
I'm saying Linux didn't APPEAR out of thin air, or at least it's worth looking deeper into the reasons why. "Appearing" to the general public, i.e. making widely useful software, requires a large group of people over a sustained time period, like 10 years.
----
i.e. Right NOW there are probably hundreds of projects like Linux that you haven't heard of, which don't necessarily align with funders
I would actually make the comparison to GNU -- GNU is a successful project, but there are various efforts underneath it that kind of languish.
Look at High Priority Free Software Projects - https://www.fsf.org/campaigns/priority-projects/
- Decentralization, federation, and self-hosting
- Free drivers, firmware, and hardware designs
- Real-time voice and video chat
- Internationalization of free software
- Security by and for free software
- Intelligent personal assistant
I'm saying that VIDEO CODECS might be structurally more similar to these projects, than they are to the Linux kernel.
i.e. making a freely-licensed kernel IS aligned with Red Hat, Intel, Google, but making an Intelligent Personal Assistant is probably not.
Somebody probably ALREADY created a good free intelligent personal assistant (or one that COULD BE as great as Linux), but you never heard of them. Because they don't have hundreds of companies and thousands of people aligned with them.
I've used and developed for Linux since 1994 (long before major commercial interests), and I work for Red Hat so it's unlikely I misunderstand how Linux was and is developed.
> It’s like saying ”where would we be with AI if Google, OpenAI and Nvidia didn’t have an iron grip”.
We'd be where we are. All the codec-equivalent aspects of their work are unencumbered by patents and there are very high quality free models available in the market that are just given away. If the multimedia world had followed the Google example it'd be quite hard to complain about the codecs.
That’s hardly true. Nvidia’s tech is covered by patents and licenses. Why else would it be worth 4.5 trillion dollars?
The top AI companies use very restrictive licenses.
I think it’s actually the other way around and AI industry will actually end up following the video coding industry when it comes to patents, royalties, licenses etc.
Because they make and sell a lot of hardware. I'm sure they do have a lot of patents and licences, but if all that disappeared today it'd be years to decades before anyone could compete with them. Even just getting a foot in the door in TSMC's queue of customers would be hard. Their valuation can likely be justified based on their manufacturing position alone. There is literally no-one else who can do what they do, law or otherwise.
If it is a matter of laws, China would just declare the law doesn't count to dodge around the US chip sanctions. Which, admittedly, might happen - but I don't see how that could result in much more freedom than we already have now. Having more Chinese players involved is generally good for prices, but that has less to do with market structure than with the fact that they work hard and do things at scale.
> The top AI companies use very restrictive licenses.
These models are released under the Apache 2.0 license ~ https://openai.com/open-models/
Are they lying to me? It is hard to get much more permissive than Apache 2.
The top AI companies don't release their best models under any license. They're not even distributed at all. If you did steal the weights out from underneath Anthropic they would take you to court and probably win. Putting software you develop exclusively behind a network interface is a form of ultra-restrictive DRM. Yes, some places are currently trying to buy mindshare by releasing free models and that's fantastic, thank you, but they can only do that because investors believe the ROI from proprietary firewalled models will more than fund it.
NVIDIA's advantage over AMD is largely in the drivers and CUDA, i.e. their software. If it weren't for IP law, or if NVIDIA had foolishly made their software fully open source, AMD could have just forked their PTX compiler and NVIDIA's advantage would never have been established. In turn that'd have meant they wouldn't have any special privileges at TSMC.
I imagine a chunk of it is also covered by trade secrets and NDAs.
I'm not opposed to codecs having patents but Chiariglione set up a system where each codec has as many patent holders as possible and any one of those patent holders could hold the entire world hostage. They should have set up the patent pool and pricing before developing each codec and not allowed any techniques in the standard that aren't part of the pool.
Hey, I attend MPEG regularly (mostly lvc lately), there's a chance we’ve crossed paths!
> Who would develop those codecs? A good video coding engineer costs about 100-300k USD a year. The really good ones even more. You need a lot of them.
How about governments? Radar, Laser, Microwaves - all offshoots of US military R&D.
There's nothing stopping either the US or European governments from stepping up and funding academic progress again.
Yeah, counting on governments to develop codecs optimized for fast evolving applications for web and live streaming is a great idea.
If we did that we would probably be stuck with low-bitrate 720p videos on YouTube.
> Yeah, counting on governments to develop codecs optimized for fast evolving applications for web and live streaming is a great idea.
Give universities the money, let them care about the details.
It seems that you have a massive misunderstanding of how this works.
University research labs, usually with a team of no more than 10 people (at most 20), are good at producing early, proof-of-concept work, but not at incredibly complex projects like creating an actual codec. They are not known for producing polished, mature commercial products that can be immediately used in the real world. They don't have the resources or the incentive to do so.
The really silly part is that even if you have a license from MPEG LA for your product, you still have to put in a notice like this:
THIS PRODUCT IS LICENSED UNDER THE AVC PATENT PORTFOLIO LICENSE FOR THE PERSONAL AND NON-COMMERCIAL USE OF A CONSUMER TO (I) ENCODE VIDEO IN COMPLIANCE WITH THE AVC STANDARD ("AVC VIDEO") AND/OR (II) DECODE AVC VIDEO THAT WAS ENCODED BY A CONSUMER ENGAGED IN A PERSONAL AND NON-COMMERCIAL ACTIVITY AND/OR WAS OBTAINED FROM A VIDEO PROVIDER LICENSED TO PROVIDE AVC VIDEO. NO LICENSE IS GRANTED OR SHALL BE IMPLIED FOR ANY OTHER USE. ADDITIONAL INFORMATION MAY BE OBTAINED FROM MPEG LA, L.L.C. SEE HTTP://WWW.MPEGLA.COM
It's unclear whether this license covers videoconferencing for work purposes (where you are paid, but not specifically to be on that call). It seems to rule out remote tutoring.
MPEG LA probably did not have much choice here because this language requirement (or language close to it) for outgoing patent licenses is likely part of their incoming patent license agreements. It's probably impossible at this point to renegotiate and align the terms with how people actually use video codecs commercially today.
But it means that you can't get a pool license from MPEG LA that covers commercial videoconferencing, you'd have to negotiate separately with the individual patent holders.
MPEG-7 includes a binary XML standard [0] which is quite useful IMHO in comparison to others (I think it is used in DVB metadata streams). But beyond the patents, it is even hard to find open documentation of BiM. I think the group was technically quite competent in comparison with other standards groups, but the business models around it really turn me off.
[0] https://mpeg.chiariglione.org/standards/mpeg-7/reference-sof...
EDIT: Here is the Wikipedia page for BiM, which evidently even made it into an ISO standard [1]
[1] https://en.m.wikipedia.org/wiki/BiM
Interesting. I've used EXI in a past project but I hadn't heard of BiM.
> "Patents on h264, h265, and even mp3 have been holding the industry back for decades. Imagine what we might have if their iron grip on codecs was broken."
Has AV1 solved this, to some extent? Although there are patent claims against it (patents for technologies that are fundamental to all the modern video codecs), it still seems better than the patent & licensing situation for h264 / h265.
The power of H264 and H265 comes from pirates, and since the AV1 team doesn't work with pirates, it will always be inferior to H265.
Just check pirated releases of TV shows and movies.
Yeah, he ran an incubator for patent trolls for 30 years and now the patent trolls have eaten his face.
Enough codecs out there. Just no adoption.
This might be an oversimplification, but as a consumer, I think I see a catch-22 for new codecs. Companies need a big incentive to invest in them, which means the codec has to be technically superior and safe from hidden patent claims. But the only way to know if it's safe is for it to be widely used for a long time. Of course, it can't get widely used without company support in the first place. So, while everyone waits, the technology is no longer superior, and the whole thing fizzles out.
JXL has been around for years.
AV1 for 7.
The problem is every platform wants to force its own codec and earn royalties from the rest of the world.
They're literally sabotaging it. JXL support even got removed from Chrome.
Investment in software adoption is next to zero.
In hardware it’s a different story, and I’m not sure to what extent which codec can be properly accelerated
Companies only need a big incentive to invest in new codecs because creating a codec that has a simple incremental improvement would violate existing patents.
Not all codecs are equal, and to be honest, most are probably not optimized/suitable for today's applications, otherwise Google wouldn't have invented their own codec (which, fortunately, then got adopted widely).
Yes, because mpeg got there first, and now their dominance is baked into silicon with hardware acceleration. It's starting to change at last but we have a long way to go. That way would be a lot easier if their patent portfolio just died.
Because every codec has 3+ different patent pools wanting rent. Each with different terms.
At least for MP3, our collective nightmare is over. MP3 is completely patent-unencumbered and can be used freely.
The fact that h264 and h265 are known by those names is key to the other part of the equation: the ITU Video Coding Experts Group has been the dominant forum for setting standards since at least 2005.
To me, 2007 is when the evil forces really took hold. mySpace era was the last fun era. Everything after that kind of lacks.
> My Christian Catholic education made and still makes me think that everybody should have a mission that extends beyond their personal interests.
I remember this same guy complaining investments in the MPEG extortionist group would disappear because they couldn't fight against AV1.
He was part of a patent mafia and is only lamenting that he lost power.
Hypocrisy in its finest form.
Any link to his comment?
> all the investments (collectively hundreds of millions USD) made by the industry for the new video codec will go up in smoke and AOM’s royalty free model will spread to other business segments as well.
https://blog.chiariglione.org/a-crisis-the-causes-and-a-solu...
He is not a coder, not a researcher; he is only part of the worst game there is in this industry: making money from patents and "standards" you need to pay for to use, implement, or claim compatibility with.
You missed the first part of that quote:
> At long last everybody realises that the old MPEG business model is now broke
And the entire post is about how dysfunctional MPEG is and how AOM rose to deal with it. It is tragic to waste so much time and money only to produce nothing. He's criticizing the MPEG group and their infighting. He's literally criticizing MPEG's licensing model and the leadership of the companies in MPEG. He's an MPEG member saying MPEG's business model is broken yet no one has a desire to fix it, so it will be beaten by a competitor. Would you not want to see your own organization reform rather than die?
Reminder AOM is a bunch of megacorps with profit motive too, which is why he thinks this ultimately leads to stalled innovation:
> My concerns are at a different level and have to do with the way industry at large will be able to access innovation. AOM will certainly give much needed stability to the video codec market but this will come at the cost of reduced if not entirely halted technical progress. There will simply be no incentive for companies to develop new video compression technologies, at very significant cost because of the sophistication of the field, knowing that their assets will be thankfully – and nothing more – accepted and used by AOM in their video codecs.
> Companies will slash their video compression technology investments, thousands of jobs will go and millions of USD of funding to universities will be cut. A successful “access technology at no cost” model will spread to other fields.
Money is the motivator. Figuring out how to reward investment in pushing the technology forward is his concern. It sounds like he is open to suggestions.
> There will simply be no incentive for companies to develop new video compression technologies, at very significant cost because of the sophistication of the field, knowing that their assets will be thankfully – and nothing more – accepted and used by AOM in their video codecs.
I don't think he fully considered the motivations of Alliance members like Google (YouTube), Meta and Netflix and the lengths they'll go to optimize operational costs of delivering content to improve their bottom line.
A business model that always slowed down development, implementation, and adoption is not something that should be "fixed". MPEG dying is something to celebrate, not whine about.
Could you please point to the whining? He says MPEG is broken, but AOM will stagnate. You’re mad at the messenger.
His argument is blatantly invalid.
He first points out that a royalty-free format was actually better than the patent-encumbered alternative that he was responsible for pushing.
In the end, he concludes that the progress of video compression would stop if developers can't make money from patents, providing a comparison table on codec improvements that conveniently omits the aforementioned royalty-free codec being better than the commercial alternatives pushed by his group.
Besides the above fallacy, the article is simply full of boasting about his own self-importance and religious connotations.
The article does not give much beyond what you already read in the title. What obscure forces, and how? Isn't it an open-standards non-profit organisation, so what could possibly hinder it? Maybe because technologically closed standards became better and a nonprofit project has no resources to compete with commercial standards? The USB Alliance has been able to work things out, so maybe compression standards should be developed in a similar way?
Supposedly the whole story is told in their linked book.
From Leonardo, who founded MPEG, on the page linked: "Even before it has ceased to exists, the MPEG engine had run out of steam – technology- and business wise. The same obscure forces that have hijacked MPEG had kept it hostage to their interests impeding its technical development and keeping it locked to outmoded Intellectual Property licensing models delaying market adoption of MPEG standards. Industry has been strangled and consumers have been deprived of the benefits of new technologies. From facilitators of new opportunities and experiences, MPEG standards have morphed from into roadblocks."
Exactly. That passage only makes it more confusing.
One detail for context: when "closing" MPEG, he also deleted all of its pages and materials and redirected them to the AI stuff.
I... don't understand how AI relates to video codecs. Maybe because I don't understand either video codecs or AI on a deeper level.
Every predictor is a compressor, every compressor is a predictor.
If you're interested in this, it's a good idea to read about the Hutter Prize (https://en.wikipedia.org/wiki/Hutter_Prize) and go from there.
In general, lossless compression works by predicting the next (letter/token/frame) and then encoding the difference from the prediction in the data stream succinctly. The better you predict, the less you need to encode, the better you compress.
The flip side of this is that all fields of compression have a lot to gain from progress in AI.
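To make that concrete, here is a minimal, hypothetical sketch in plain Python (not any real MPEG or AOM codec): a one-line previous-value predictor stands in for the sophisticated prediction a real codec or an LLM does, zlib stands in for a real entropy coder, and the only thing actually stored is the residual between prediction and signal.

```python
# Minimal sketch of "predict, then encode only the residual".
# Assumptions: a toy previous-value predictor and zlib as a stand-in
# entropy coder; real codecs use far better predictors (motion
# compensation, neural networks) and arithmetic coding.
import math
import zlib

def encode(samples: bytes) -> bytes:
    """Store each sample as its difference from the prediction (mod 256)."""
    prev = 0
    resid = bytearray()
    for s in samples:
        resid.append((s - prev) % 256)  # residual = signal - prediction
        prev = s                        # predictor: "next sample equals this one"
    return bytes(resid)

def decode(resid: bytes) -> bytes:
    """Re-run the same predictor and add the residual back."""
    prev = 0
    out = bytearray()
    for r in resid:
        prev = (prev + r) % 256
        out.append(prev)
    return bytes(out)

if __name__ == "__main__":
    # A smooth "signal": the predictor is nearly right, so residuals hover near 0.
    raw = bytes(int(127 + 100 * math.sin(i / 50)) for i in range(4096))
    resid = encode(raw)
    assert decode(resid) == raw  # lossless round-trip
    print(len(zlib.compress(raw)), len(zlib.compress(resid)))
    # The residual stream compresses far smaller: better prediction -> better compression.
```

Swap that one-line predictor for a model that actually understands the data (a motion model, or an LLM predicting the next token) and the residuals shrink further, which is the whole connection being made here.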
Also check out this contest: https://www.mattmahoney.net/dc/text.html
Fabrice Bellard's nncp (mentioned in a different comment) leads.
It has long been recognised that the state of the art in data compression has much in common with the state of the art in AI, for example:
http://prize.hutter1.net/
https://bellard.org/nncp/
Some view these as so interconnected that they will say LLMs are "just" compression.
Which is an interesting view when applied to the IP. I think it's relatively uncontroversial that an MP4 file which "predicts" a Disney movie which it was "trained on" is a derived work. Suppose you have an LLM which was trained on a fairly small set of movies and you could produce any one on demand; would that be treated as a derived work?
If you have a predictor/compressor LLM which was trained on all the movies in the world, would that not also be infringement?
MP4s are compressed data, not a compression algorithm. An MP4 (or any compressed data) is not a “prediction”, it is the difference between what was predicted and what you’re trying to compress.
An LLM is (or can be used as) a compression algorithm, but it is not compressed data. It is possible for an overfit algorithm to exactly predict (or reproduce) an output, but it's not possible for one to reproduce all the outputs due to the pigeonhole principle.
To reiterate - LLMs are not compressed data.
It is like upscaling. If you could train AI to "upscale" your audio or video you could get away with sending a lot less data. It is already being done with quite amazing results for audio.
AI and data compression are the same problem, rephrased.
Which makes Silicon Valley, the TV show, even funnier.
holy shit it does. The scene with him inventing the new compression algorithm basically foreshadowed the gooning to follow local LLM availability.
Maybe we are a couple of years away from experiencing patent-free video codecs based on deep learning.
DCVC-RT (https://github.com/microsoft/DCVC) - a deep-learning-based video codec that claims to deliver 21% more compression than h266.
One of the compelling edge-AI use cases is to create deep-learning-based audio/video codecs on consumer hardware.
One of the large/enterprise AI use cases is to create a coding model that generates deep-learning-based audio/video codecs for consumer hardware.
So what’s the take on his new organization MPAI? I don’t know much about writing codecs… would love to hear someone’s take on the organization.
https://mpai.community/standards/mpai-spg
This makes zero sense, right? Even if this was applicable, why would it need a standard? There is no interoperability between game servers of different games
“…and industry to exploit.”
And, boy howdy, they did.
Goodbye MPEG group, and to be frank, good riddance I think. I'm glad that open codecs are now taking over on the frontier of SOTA encoding.
Maybe these sorts of handshake agreements and industry collaboration were necessary to get things rolling in 198x. If so, then I thank the MPEG group for starting that work. But by 2005 or so when DivX and XviD and h264 were heating up, it was time to move beyond that model towards open interoperability.
I think if IP rights holders were mandated to pay property tax it would make the system much healthier.
This. You should have to declare the value of a patent and pay 1% of that value every year to the government. Anyone else can force-purchase it for that value, while leaving you with a free perpetual license.
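As a rough illustration of the incentive that self-assessment creates (hypothetical numbers, only the 1% rate comes from the comment above): declare the patent's value too low and a rival can cheaply buy it out from under you; declare it too high and the annual tax eats you.

```python
# Hypothetical illustration of a self-assessed "declare and pay 1%/year" patent tax.
# None of these figures come from any real proposal or patent.
TAX_RATE = 0.01

def decade_of_tax(declared_value: float) -> float:
    """Tax paid over ten years at the declared value (no discounting, for simplicity)."""
    return TAX_RATE * declared_value * 10

for declared in (1_000_000, 10_000_000, 100_000_000):
    print(f"declare ${declared:,}: pay ${decade_of_tax(declared):,.0f} over 10 years, "
          f"anyone may force-purchase for ${declared:,}")
```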
Wouldn’t that only help the “big guys” who can afford to pay the tax?
Presumably the tax would be based on some estimated value of the property, and affordability would therefore scale.
> The same obscure forces that have hijacked MPEG had kept it hostage to their interests impeding its technical development and keeping it locked to outmoded Intellectual Property licensing models delaying market adoption of MPEG standards. Industry has been strangled and consumers have been deprived of the benefits of new technologies.
Copyright is cancer. The faster the AI industry runs it into the ground, the better.
Does he talk about Fraunhofer there? The guys, subsidized by German taxpayers, who started charging license or patent fees.
Or is it MPEG LA? https://wiki.endsoftwarepatents.org/wiki/MPEG_LA
This has nothing to do with copyright. It is an issue of patents.
I hate copyright too, but this is about patents. Software patents are also cancer.