Which Open Source Video Apps Use SMP Effectively?
ydrol writes "After building my new Core 2 Quad Q6600 PC, I was ready to unleash video conversion activity the likes of which I had not seen before. However, I was disappointed to discover that a lot of the conversion tools either don't use SMP at all, or don't balance the workload evenly across processors, or require ugly hacks to use SMP (e.g. invoking distributed encoding options). I get the impression that open source projects are a bit slow on the uptake here? Which open source video conversion apps take full native advantage of SMP? (And before you ask, no, I don't want to pick up the code and add SMP support myself, thanks.)"
ffmpeg (Score:5, Informative)
Use the -threads switch.
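For instance, a sketch of where the switch goes (filenames are placeholders, and how well `-threads` actually scales depends on the codec in use):

```python
import os

# Use one thread per core as a starting point; tune from there.
threads = os.cpu_count() or 1

# Build the ffmpeg command line; input/output names are hypothetical.
cmd = ["ffmpeg", "-i", "input.avi", "-threads", str(threads), "output.avi"]
print(" ".join(cmd))
```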
Re:ffmpeg (Score:5, Informative)
Re: (Score:3, Informative)
And it may or may not be useful to actually run more than one thread per kernel. It depends on the encoder and the application how many threads you should run, so the best approach is to test with 1, 2 and 4 threads per kernel.
Isn't that per-core, not per-kernel?
Re:ffmpeg (Score:5, Insightful)
Or just convert 2 videos at once, or 4 for a quad core etc. They did suggest they have lots to convert, and it's a pretty easy way to get all available cores working hard.
Re: (Score:3, Informative)
Yup, with separate disks to work on to remove (mostly) the disk i/o contention, just let each process run happily away.
Re: (Score:2)
That's exactly what I do. I also wrote a scheduler in Python that starts new jobs when the previous ones are completed. It keeps the number of running encoding processes equal to the number of processors/cores.
To get the optimal scheduling order, it figures out the length of each input file (using midentify from the mplayer/mencoder distribution), and then sorts the jobs so that the longest jobs will be processed first (it assumes that processing time is roughly proportional to input file length in seconds).
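The scheme reads roughly like this (a toy sketch, not the poster's actual script; the encode body is a placeholder for launching mencoder/ffmpeg):

```python
from concurrent.futures import ThreadPoolExecutor

def encode(job):
    # stand-in for running the real encoder on one file
    name, seconds = job
    return name

# (name, length-in-seconds) pairs, as midentify might report them
jobs = [("a.avi", 120), ("b.avi", 3600), ("c.avi", 600)]

# longest first, so no big job is left running alone at the tail
jobs.sort(key=lambda j: j[1], reverse=True)

CORES = 2  # would be os.cpu_count() on a real box
with ThreadPoolExecutor(max_workers=CORES) as pool:
    finished = list(pool.map(encode, jobs))
```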
Re:ffmpeg (Score:5, Interesting)
That sounds like a lot of work... I just used make:
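(The Makefile itself didn't survive the page; it was presumably something along these lines, with the file extensions and ffmpeg invocation being guesses:)

```make
# Guess at the shape: convert every .avi in the directory to an .mp4
SRC := $(wildcard *.avi)
OUT := $(SRC:.avi=.mp4)

all: $(OUT)

%.mp4: %.avi
	ffmpeg -i $< $@
```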
Then I just ran "make -j4". All four processors working like mad, with a minimum of effort.
(You may need to change the wildcard for your own scenario.)
Re: (Score:3, Insightful)
"Why is it not working? Oops messed up tabs and spaces", etc.
Re:ffmpeg (Score:4, Informative)
You're still missing the OP's point. Let me spell it out for you:
Say you have four videos to encode, and four cores.
1) You can either use one core at a time and encode one video at a time. Let's say that takes time T.
2) You can encode one video at a time, but use all four cores while doing it. Your total time is T/4.
3) You can encode four videos at a time, one on each core. Your total time is T/4.
The OP was advocating strategy #3. It's a fine approach.
Re: (Score:3, Insightful)
I thought about that but, seriously, transcoding is usually CPU limited. I'd really suspect it'd take a lot of simultaneous encoding to make it I/O bound.
Re:ffmpeg (Score:4, Informative)
I hit I/O throttling when I do the following:
* rip 2 dvds (two DVDR Drives)
* transcoding previous DVD rips to XVID
* Moving completed rips to server over 1 Gbps Ethernet link.
At this point I can see CPU load start to drop as PCI bus I/O saturates.
At no point do I hit disk I/O or memory limits.
Disks are non-RAID, non-striped, but rips go to separate disks (DVD A rips to HDA, DVD B to HDB), and the server upload pulls from whichever disk is not currently transcoding (transcode a file on HDA; when done, start the transcode on HDB and move the finished file off HDA).
-nB
Re:ffmpeg (Score:4, Informative)
If I may offer a suggestion: I'm not too sure what your setup is, but on mine I have 2 DVD drives, each on a separate IDE bus, and 2 SATA drives (also on separate buses). I rip from the DVD to drive 1 and encode from drive 1 to drive 2. Of course it all depends on a variety of factors, but that arrangement certainly helped here.
Re: (Score:2, Interesting)
So why is threading off by default? In a CPU-intensive application like this, multithreading always makes sense, even on a single-core system.
Re:ffmpeg (Score:5, Informative)
No it doesn't. The only time you want to use multi-threading in a single-CPU environment is when asynchronous methods for IO are unavailable, or when the code would be too difficult to re-architect to use asynchronous IO. If the application is seriously IO-bound, threads can even make the situation worse by causing random IO patterns.
Ideally, the number of threads a program uses should be no more than the number of processors available. Otherwise, you are wasting time context switching instead of processing.
Re: (Score:2)
Ideally, the number of threads a program uses should be no more than the number of processors available. Otherwise, you are wasting time context switching instead of processing.
An exception to this kind of rule should really be made for graphical user interfaces. In the case of GUI applications, time wasted in context switching is less important than keeping the UI responsive and the user happy.
Any kind of heavy lifting (IO-blocking or otherwise) should really be done on a different thread than the one that is responsible for handling the user interface. This allows the user interface to stay responsive, providing the user with feedback (progress bar, time estimate, reassurance that the application hasn't hung).
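The usual shape of that, sketched with a queue standing in for the UI event loop (no real GUI toolkit here; names are made up):

```python
import queue
import threading

progress = queue.Queue()

def heavy_job(steps, progress):
    # the "encode" runs off the UI thread, reporting progress as it goes
    for i in range(1, steps + 1):
        progress.put(i * 100 // steps)

worker = threading.Thread(target=heavy_job, args=(4, progress))
worker.start()
worker.join()  # a real UI would poll the queue instead of blocking

# drain the progress updates the worker posted
updates = []
while not progress.empty():
    updates.append(progress.get())
```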
Re: (Score:2, Insightful)
Just want to inform you that neither threads nor any other multiprogramming mechanism is necessary for responsive user interfaces, and that IO multiplexing in particular does not require threads at all. You can solve both with threads, but you don't have to. And in most common cases it is much better not to; it seems that threads continue to be one of the most misused and misunderstood of programming concepts.
Re: (Score:3, Insightful)
Perhaps. But threads are far more versatile - if they're done well.
So our video app has a sound-processing thread, a video processing thread, and a UI thread. If it's implemented well (don't read or write twice, have a common buffer), it'll run with the same or near performance as a one-threaded program on a one-processor/core system.
But on a multicore/processor system no extra work is needed to take advantage of the cores. If we have three cores, it'll run automatically across cores for a massive performance gain.
Re: (Score:3, Interesting)
Yes, you can use threads well. But with less effort (taking into account synchronization and debugging), you can make the asynchronous tasks independent programs instead of threads. Your video and sound processing threads sound like perfect candidates for being made into independent programs.
A task being an independent program affords several advantages. For example, it's easier to test an independent program, especially in a test harness. An independent program can be run by itself. And it's very clear wha
Re:ffmpeg (Score:5, Insightful)
On a two processor system this would result in multi-threading being off.
Re:ffmpeg (Score:5, Funny)
On a single core system this would result in not being able to run anything!
Re: (Score:2)
Threading is sometimes broken on the OS, or sometimes it varies between revisions.
FreeBSD for instance has been in the middle of changes to the threading system and there was a bug in the 6.x branch which wasn't in either 7.x or current. Defaulting to off if you're not sure how well threading is going to be handled is probably better than defaulting to something that could be broken.
Anybody who knows that they need threading and decides to turn it on is likely to know whether or not threading is broken. Or
Re:ffmpeg (Score:5, Informative)
Some may argue this is a good thing, but for the time being SMP is the way forward for faster processing, as MHz has maxed out in consumer PCs. So when people start buying octo-core CPUs they don't expect them to run at 1/8th speed by default.
I was also being a bit lazy. I could have checked up on each app in turn, but I asked /. instead.
Re: (Score:3, Informative)
I have not tried it, but e.g. k9copy uses mencoder. So if you just put something like "x264encopts=threads=auto" in your mencoder.conf file it might work from k9copy as well. k9copy also has a settings menu where you can tune the options passed to mencoder for various codecs.
AcidRip patches (Score:3, Informative)
Re:ffmpeg (Score:5, Informative)
True, but in most contexts "PC" is the shortened form of IBM-compatible PC (which is really outdated), and usually just stands for Windows these days.
Re:ffmpeg (Score:5, Insightful)
Apple has spent a lot of time and money convincing everybody that they don't sell PCs, they sell Macs. I'm not sure what the point of arguing with both the general public as well as Apple is.
At this point, the term PC does not include Apple computers. It's a change to the definition which happens when the vast majority of people decide amongst themselves that the definition should change.
In terms of the topic at hand, most video apps really should be capable of using multiple cores; tasks of this sort are quite easy to parallelize, either by splitting the work every n frames or by subdividing the image into a number of regions which can be completed separately and joined at the end before writing the frame to disk.
Re:ffmpeg (Score:5, Insightful)
No - HP did (for their calculators), way before there "was" an Apple.
Also, I don't even think Apple marketing would agree with you - or they wouldn't have "I'm a Mac... and I'm a PC" adverts.
Re: (Score:2, Informative)
Logic Node is somewhat better; however, it only does audio. We have two eight-core Mac Pros and three Xserve machines in our studio. The Xserve machines will be binned when the new version of Logic Pro supporting GPU audio processing is out.
Re: (Score:3, Interesting)
Is creating a copy of my DVD for my Cowon D2 piracy now?
Legally it probably is in many places since I'm probably not even allowed to read them on my PC (Linux), but still...
Re: (Score:2, Informative)
If you're making another copy of it to play on another device (format shifting or whatever bullshit term they used), yeah, you can probably get sued for it if some asshat wants to target you.
Illegal? No.
Wrong? Hell no.
My point is that encoding apps often exist separately from editing apps (such as FCP). This is due in large part to piracy, especially when talking about free/open encoders and sites like doom9.
Pirates are not concerned with editing/creating, they're concerned with copying and converting/co
transcode, of course! (Score:5, Informative)
x264 (Score:3, Insightful)
x264 uses slices and scales pretty well across multiple cores. I use it on Windows via meGUI, but you could easily use it on Linux as well. You could use mencoder to pipe raw video out to a fifo and use x264 to do the actual conversion, for instance.
Beat me to it! (Score:5, Informative)
x264 via meGUI from Doom9 is what I use to compress HD-DVD and BD movies - also on a quad core. I have some tutorials posted out and about on how I'm doing it. Near as I can tell you cannot dupe the process on Linux due to the crypto - Slysoft's AnyDVD-HD is needed.
Playback - I use XBMC for Linux. It is also SMP-enabled, using the ffmpeg CABAC patch. The developers of this project have been VERY aggressive at taking cutting-edge improvements to the likes of ffmpeg and incorporating them into the code. Since Linux has no hardware video acceleration for H.264, SMP really helps on high-bitrate video!
VisualHub... (Score:4, Informative)
Re:Which part of Open Source didn't you get? (Score:5, Informative)
OP is asking for open source tools. You cited a commercial one that doesn't provide source.
VisualHub (the front-end app) may be closed, but ffmpeg is LGPL.
And the GP was suggesting using ffmpeg, not VisualHub.
x264 and avisynth (Score:3, Informative)
Re: (Score:2)
Yeah, x264 is great. There is a slight quality degradation if you use multiple threads (although you have to look really hard to visually spot the difference).
I once used a batch file to encode several gigs of my family vacation MJPEG videos to H.264 using x264 in a single background thread over a period of 10 days.
With some heavy-duty post processing (for noise removal etc.) it encoded about 1 GB of source per day. There was no perf. degradation with my other apps (games, email etc.) on account of the video encoding.
Re: (Score:2)
ffmpegX for OS X uses x264 and it's transcoding like mad on my eight-core Mac Pro. A 2h Video_TS film converted to iPhone-ready two-pass H.264/MPEG-4... in less than 20 minutes. Using 720-760% CPU, i.e. just the right amount for me, since I use the machine for other tasks as well.
Re: (Score:2)
It's only the GUI that's shareware; what I just told everyone is that the open source codec x264 is threaded and performs very well on SMP systems.
Re: (Score:2)
That's not correct. Admin rights are only needed to update MeGUI. Video encoding works fine without admin permissions.
You can install MeGUI in a non-standard location like c:\tools\megui and not require admin permissions to update.
Load balancing: Why? (Score:5, Insightful)
don't balance the workload evenly across processors
Why is balancing the load evenly important, as long as one thread is not bottlenecking the others? Loading a particular core or set of cores might even be beneficial depending on the cache implementation, especially when other applications are also contending for CPU time.
Sure, a nice even load distribution might be an indicator for good design, but it doesn't have to apply in every case. I don't think software should be designed so you can be pleased with the aesthetics of the charts in task manager.
Re: (Score:2, Insightful)
Because, ideally, all four cores should be running at 100% -- the idea is to make maximal use of your available resources, right?
Re:Load balancing: Why? (Score:5, Insightful)
It's still possible to load all cores 100%.
A video decoder that I'm working with, for example, currently uses only as many threads as necessary for real-time playback. So for example if one core can do the job only one core is used. If the decoder looks like it might start falling behind more threads are given work to do. Ultimately, if your system is failing to keep up all cores will be fully leveraged.
However, as long as only some cores are required, the others are 100% available to other processes, including their cache (if it's independent). I'm not sure how power management is implemented, but perhaps it's even possible for the unused cores to enter power saving, leading to longer battery life for laptops/notebooks, etc.
the idea is to make maximal use of your available resources, right?
No, the idea is to make the best use of your resources. I'm not trying to say that load balancing is wrong. I'm just saying that processes that don't appear to be balanced are not necessarily poorly designed or operating incorrectly.
Handbrake (Score:5, Informative)
Re:Handbrake (Score:5, Informative)
that's because Handbrake uses ffmpeg
Re: (Score:2)
But the program will NOT transcode from .VOB to .DV. That's all I want to do. I want to point Handbrake to a VOB and have it transcode direct to .DV, particularly Final Cut-friendly .DV. Yeah I'm on a Mac. MacBook Core 2 Duo (Merom) 2GHz. I converted from .VOB to .MKV, then I took the .MKV into VisualHub. The transcode in VisualHub died silently towards the end. Fail.
There has GOT to be a better way. On Mac. I'm willing to learn command line apps to do this if I can take a .VOB and convert direct to .DV.
F(next) = F(current) + Delta(F(current:next)) (Score:5, Insightful)
The problem with MPEG encoding and decoding is that the data itself is not well suited to multi-threaded analysis.
Multi-threading is most efficient when it is applied to discrete data sets that have little or no dependency on each other.
For example, suppose I have a table with four columns -- three holding input values (A, B, and C) and one holding an output value (X). If the data in a given row of the table has nothing to do with the data in any other row, multi-threading works efficiently, because none of the threads are waiting for data from any of the other threads. If I want to process multiple rows at once, I simply spawn additional threads.
On the other hand, for data such as MPEG video, the composition of the next frame is equal to the composition of the current frame, plus some delta transformation - the changed pixels.
This introduces a dependency which precludes efficient multi-threaded processing, because each succeeding frame depends on the output of the calculations used to generate the prior frame. Even if more than one core is dedicated to processing the video stream, one core would wind up waiting on another, because the output from the first core would be used as the input to the second.
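The dependency described above can be modelled in a couple of lines (toy numbers standing in for frames, not real pixel data):

```python
from itertools import accumulate
from operator import add

# Toy model: a "frame" is one number, a delta is the change to apply.
first_frame = 10
deltas = [1, -2, 3, 0]

# frame[i] = frame[i-1] + delta[i]: every step needs the previous
# result, so naive decoding is an inherently sequential chain.
frames = list(accumulate([first_frame] + deltas, add))
```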
Re: (Score:3, Informative)
Note that the above example is about the video component only of a single MPEG audio/video stream.
There is no reason that an encoder/decoder can't process audio in one thread and video in another, thereby using more than one core (which has already been discussed in other posts relating to this article).
keyframes (Score:5, Informative)
Actually, the MPEG stream resets itself every n frames or so (n is often a number like 8, but can vary depending on the video content). These are called keyframes (K) and the delta frames (called P and I frames) are generated against them. Because of this, it is really easy to apply parallel processing to video encoding.
Re:keyframes (Score:5, Informative)
Actually, the MPEG stream resets itself every n frames or so (n is often a number like 8, but can vary depending on the video content).
That is not true for MPEG-4 unless you have specifically constrained the I/IDR interval to an extremely short interval, and doing so severely impacts the efficiency of the encoder because I-frames are extremely expensive compared to other types.
Keyframes are usually inserted when temporal prediction fails for some percentage of blocks, or using some RD evaluation based on the cost of encoding the frame. Therefore unless the encoder has reached the maximum key interval the I frame position requires that motion estimation is performed, and thus you can't know in advance where to start a new GOP.
In H.264 due to multiple references you would certainly have issues to contend with since long references might cross I-frame boundaries, which is why there is the distinction of "IDR" frames, and this would certainly not be possible threading at keyframe level.
Granted, for MPEG1&2 encoders threading at keyframes is a possibility, although still not one I'd personally favor.
Re: (Score:2)
Yes, but you would only need one keyframe per cpu/core.
E.g. on a dualcore let one core encode the first half and the other core the second half.
Re: (Score:2)
I agree that MPEG4 can be easily multithreaded, but it is not threaded by entire GOPs, as the GP suggests, in any encoder that I know of for the reasons I gave. Frame-level and slice-level threading are the two common techniques. I do actually work on MPEG-4 codecs.
Re: (Score:2)
How did you get a +5 Informative when you're wrong?
First off, which MPEG spec has a K-frame? An I-frame is not a delta frame, it's more like your "keyframe." P and B are the delta frames.
Secondly, there's very little to parallelize if you're working with open Groups of Pictures (GOP), that is to say every GOP references into the next GOP. If you have closed GOPs, then you can do this a little better by putting the next GOP on another core/CPU.
But will you gain a significant speedup? The problem is not just
Re: (Score:3, Informative)
Slight correction: in MPEG, the keyframes are called I-Frames. The delta frames are B and P frames. Most MPEG2 encoders that I have used default to a 15 frame GOP.
Re:F(next) = F(current) + Delta(F(current:next)) (Score:5, Insightful)
i'd think that n-1 cores/threads/whatever to process the chunked data, and the last core/thread/whatever to handle overhead and i/o scheduling would run pretty nicely on a multi-core machine.
Re: (Score:2, Informative)
http://en.wikipedia.org/wiki/Group_of_pictures [wikipedia.org]
You can encode GOPs independently. I think the only dependency between GOP encoding processes is bit allocation, which probably works well enough if you simply assign each process an equal share of the total bit budget.
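A sketch of the splitting step (frame types as letters, closed GOPs assumed; the "encoding" here is just joining the strings):

```python
from concurrent.futures import ThreadPoolExecutor

def split_gops(frames):
    # start a new group of pictures at every keyframe ("I")
    gops = []
    for f in frames:
        if f == "I" or not gops:
            gops.append([])
        gops[-1].append(f)
    return gops

frames = ["I", "P", "B", "I", "P", "I", "B", "P"]
gops = split_gops(frames)

# with closed GOPs, each group can go to an independent worker
with ThreadPoolExecutor() as pool:
    encoded = list(pool.map("".join, gops))
```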
Re: (Score:3, Insightful)
I think the only dependency between GOP encoding processes is bit allocation, which probably works well enough if you simply assign each process an equal share of the total bit budget.
Is this even needed if you use multi-pass encoding? At least for XviD, IIRC the first pass is used to accumulate statistics used to allocate the proper bit budget to each frame. Then the individual processes should be able to use the statistics file from the first pass to get the bit allocation for their current GOP in the second pass.
Re: (Score:3, Insightful)
You can encode GOPs independently. I think the only dependency between GOP encoding processes is bit allocation, which probably works well enough if you simply assign each process an equal share of the total bit budget.
That's a pretty painful constraint for anything other than very flat constant-bitrate encoding. You really want to be able to move bits between GOPs to optimize for consistent quality.
Re:F(next) = F(current) + Delta(F(current:next)) (Score:5, Insightful)
You could of course split each frame in slices, and process these in parallel. Or skip the video N frames between each core, with N being the number of frames between MPEG keyframes. Or have core 1 do the luma and core 2 and 3 the chroma channels. Or pipeline the whole thing and have core 1 do the DCT, core 2 the dequant etc. and have core 3 reconstruct the output reference frame while core 1 already starts the next frame.
Plenty of ways to parallelize decoding, and even more for encoding...
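The slice approach in miniature (a toy "filter" that increments pixels; a real slice encoder also has to handle slice headers and boundary effects):

```python
from concurrent.futures import ThreadPoolExecutor

def process_slice(rows):
    # stand-in for the per-slice transform work (DCT, quant, ...)
    return [[px + 1 for px in row] for row in rows]

frame = [[0, 1], [2, 3], [4, 5], [6, 7]]  # 4 rows of "pixels"
n = 2                                      # number of slices/cores
step = len(frame) // n
slices = [frame[i:i + step] for i in range(0, len(frame), step)]

with ThreadPoolExecutor(max_workers=n) as pool:
    processed = pool.map(process_slice, slices)

# stitch the processed slices back together in order
result = [row for s in processed for row in s]
```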
Re: (Score:2)
The problem with MPEG encoding and decoding is that the data itself is not well suited to multi-threaded analysis.
Not quite true. Someone above already explained some of this re VisualHub.
The video data/frame at 0:00 is very likely completely unrelated to the data/frame at 5:00, thus you can simply chop up the raw file into a number of segments and process them in parallel.
Some clever stitching is probably required to put the whole thing back together in the end.
Multi-threading is most efficient when it is applied to discrete data sets that have little or no dependency on each other.
Exactly, so you chop up the raw input into segments and they become discrete data sets.
Re: (Score:2)
But MPEG has keyframes - you need them for scene changes and error recovery. There's one at least every few seconds. For offline video, the threads can work on different keyframes & their respective deltas.
For online video, it's harder... but it can still be done. Similar to how two-videocard setups work, you can split the image into pieces and have each CPU work on a particular piece, since there's little relation between them. Of course it becomes very hard to scale beyond a certain point... but 2-4 cores/CPU
Re: (Score:2)
You can also break a frame down into discrete rectilinear regions and have each region processed by a separate thread. This block-based approach is a simple (although probably not 100% optimal) way to get parallelism in any operation involving only two buffers (current and output) in a 2D filter/transform, from MPEG frame decoding to JPEG decompression to a Photoshop filter.
Fo
Re: (Score:2)
> Most video compression techniques including MPEG set a maximum number
> of frames between base frames. A base frame can be decoded without any
> information about previous or future frames.
>
> All the motion vectors or deltas are calculated against the closest
> previous base frame.
Yeah, I forgot to include the whole keyframes thing... My bad. I should have said "not always."
However, the problem with keyframes is that their placement is often artificial; i.e., every 30 frames or so.
This limi
Max CPU? (Score:2)
Huh? I am using AGK and my CPU never does anything. It is always waiting for I/O. I must be doing something wrong...
Re: (Score:3, Interesting)
With video conversion, faster storage (not lower latency) is the big winner, with a huge cache a close second.
If you want the fastest video encodes with no care for cost, get an 8-way PCIe RAID card and 8 laptop SATA HDDs: small and very, very fast in a striped RAID.
What about playing? (Score:2)
Re: (Score:2)
With Media Player Classic on Windows, you can indeed have your video hardware speed things up: it creates a two-triangle Direct3D window and renders the video stream as a texture. With today's low-end video cards this takes a load off the CPU, which no longer has to do overlays in 2D windows.
Also (and I know it's not OSS) CoreCodec does a great job. ffmpeg under Windows is very bad at threading H.264 content, to the point where a fast AMD dual-core will struggle with 1080p, but CoreCodec plays it back smoothly.
MPEG Algorithm (Score:2)
The core MPEG algorithm is the DCT (discrete cosine transform). If that is parallelizable, then MPEG encoding/decoding should be too, although there is no way a general-purpose processor can beat an ASIC in silicon.
Windows? VirtualDub 1.8.x + ffdshow-tryouts (Score:4, Informative)
You don't say if you're running on Windows or Linux or something else. If you are running on Windows, the latest versions of VirtualDub have made big improvements to SMT/SMP encoding.
VirtualDub home [virtualdub.org]
VirtualDub 1.8.1 announcement [virtualdub.org]
VirtualDub downloads [sourceforge.net]
Make sure you grab 1.8.3 - 1.8.1 was pretty good, but had a few teething problems. 1.8.2 has a major regression which is fixed in 1.8.3. The comments in the 1.8.1 announcement contain a few important tips for using the new features (some of which I posted BTW).
The two major new features that would be of interest to you are:
1. You can run all VirtualDub processing in one thread, and the codec in another. This works very well in conjunction with a multi-threaded codec - this one change improved my CPU utilisation from approx 75% to 95% on my dual-core machines, with an equivalent increase in encoding performance.
2. VD now has simple support for distributed encoding. You can use a shared queue across either multiple instances of VD on a single machine, or across multiple machines (must use UNC paths for multiple machines). Each instance of VD will pick the next job in the queue when it finishes its current job. Instances can be started in slave mode (in which case they will automatically start processing the queue).
I use 3 machines for encoding (all dual-core). With VD 1.8.x I start VD on two of the machines in slave mode, and one in master mode. I add jobs to the queue on the master instance, and the other two instances immediately pick up the new jobs and start encoding. When I've added all the jobs, I then start the master instance working on the job queue.
To achieve a similar effect on your quad-core, start two instances of VD on the same machine - one slave, the other master.
It's not perfect (if you've only got one job, you won't use your maximum capacity) but it has greatly simplified my transcoding tasks, and reduced the time to transcode large numbers of files.
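The shared-queue pattern described above, reduced to a sketch (filenames hypothetical, and only the pull-the-next-job behaviour is modelled - not how VD actually stores its queue):

```python
import queue
import threading

jobs = queue.Queue()
for name in ["a.avi", "b.avi", "c.avi", "d.avi"]:
    jobs.put(name)

done = []
done_lock = threading.Lock()

def instance():
    # each "VD instance" pulls the next job when its current one finishes
    while True:
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            return
        with done_lock:
            done.append(job)

workers = [threading.Thread(target=instance) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```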
Re: (Score:2)
Holy shit! Somehow I missed all these VDub releases. Thanks for the notice.
Out of interest, what sort of stuff are you encoding from/to? Are you aware of any mpeg4/h264 codecs that will work happily in Virtualdub?
avidemux (Score:5, Informative)
I've noticed a lot of talk about commandline options, but not the nice guis that use them. Avidemux is open source, cross-platform, gives you a decent interface, and uses multithreaded libraries like ffmpeg and x264 on the backend to do the encoding, so it generally makes optimal use of your multicore system.
Not as simple as you would think (Score:5, Insightful)
As other commenters have said, decoding video is not, per se, a trivially parallelizable algorithm - especially for modern codecs with lots of temporal encoding. MJPEG would be easily parallelized, but you'd have to be dealing with fairly ancient sources... MediaComposer 1, for instance.
However, there are different classes of "video app" that are good targets for parallelization. Real world video editing for instance: consider multiple streams of video with overlays, rotations, effects etc. Video and audio decoding can happen in parallel, you can pipeline the effects stages so that each effect is handed off to another core. Modern video editing systems do this with aplomb.
I'm from the commercial end of this so, I can't comment much on open source alternatives. But I will say that a lot of the algorithms in certain products are highly tuned to the particular CPU type.
And they're smart enough to distribute work across only as many cores as actually exist.
Finally, don't forget that optimization is hard. You have to consider the speed of the hard drive, the cost of sharing data between threads and CPU caches, and a bunch of other real constraints. Any half-decent CPU of the last five years or so can easily decode most video faster than it can be read and written to disk. So long as this is true, you won't get any benefit from parallelization.
heroinewarrior.com (Score:3, Informative)
The version of Cinelerra from heroinewarrior.com uses SMP. It's highly dependent on the supporting libraries & who implemented the feature. In the worst case, use renderfarm mode & a node for each processor. Sometimes the libraries work in SMP mode & sometimes they don't. Sometimes the feature was intended for everyone to use on any number of processors & sometimes it was written for one person's cheap single processor.
Hmm (Score:2)
Now I'm a bit curious.
Given that all of the "usual suspects" of encoding apps support SMP on almost every platform, and have done so for quite some time, what was this guy using that didn't support it?
ffmpeg and x264 are just about the only players in town these days.
Re: (Score:2)
Multi core no....
But unix apps have been running on multi processor systems for years, and geeks have had access to such systems for years too. I did video encoding in 2000 on a quad cpu alphaserver and a dual cpu sparc, but i just did as someone else suggested and ran multiple encodes simultaneously.
Re: (Score:2, Informative)
How the hell is this modded interesting (as opposed to informative)?
Do people really not know this stuff (thus making it interesting to them)?
For the gp and the others who still don't get it.
Multi-threaded programming (getting your shit to run in separate threads) is easy, now.
Multi-threaded / distributed algorithms (getting your shit to do some coherent, useful shit while scaling well) are not easy at all.
Re: (Score:2, Insightful)
If you truly understand the problem domain you are operating in, parallelism becomes readily apparent. Implementing it isn't difficult even on old code, again, if you truly understand where the parallelism exists.
Re: (Score:3, Insightful)
Exactly. Too many people assume that any given programmer can write any given program. What isn't generally realized (at least by the masses) is that programming really is about acquiring expertise in a particular domain and then solving problems in that domain through the use of computer programs. Generally some of the most effective programs I've seen have been written, on their first pass, by a person with intimate domain knowledge, and mediocre programming/computer knowledge. The program then become
Re: (Score:2)
It's refreshing to see that, rather than having us all answer questions and think about it, only to THEN find out he doesn't want to do any work.
Re: (Score:2)
And FWIW I have contributed patches in the past to both the avidemux AND nzbget projects, and they have been accepted, but these addressed more trivial aspects of the software.
Re: (Score:3, Informative)
But Mac users have been living with SMP since 2001
Just for reference:
UNIX System V R4-MP 1993
Windows NT 1993
OS/2 2.11 1993
Linux 2.0 1996