I tried to use SR-IOV to virtualize Mellanox NICs with VLANs on Red Hat Linux. Long story short, it did not work. Per Nvidia, the OS also has to run Open vSwitch. This work was on an already complex setup in finance ... so adding Open vSwitch was considered too much additional complexity. This requirement is not something I ran across in the docs.
The situation in networking is a lot different than graphics. I don't know much other than that it depends on what specific protocol, card, firmware, and network topology you're using and there's not really generic advice. If the question is setting up Ethernet switching inside the card so VFs can talk to the network, then I think the Linux switchdev tools can configure that on their own without Open vSwitch but you probably need to find someone who understands your specific type of deployment for better advice.
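For what it's worth, the switchdev route mentioned above looks roughly like this on mlx5-class NICs. This is a sketch, not a recipe: the PCI address, interface, and representor names are hypothetical placeholders for your hardware.

```shell
# Hypothetical PCI address; find yours with lspci | grep -i mellanox
PF=pci/0000:03:00.0

# Switch the NIC's embedded switch from the default "legacy" SR-IOV
# mode to switchdev mode.
devlink dev eswitch set $PF mode switchdev

# Verify the mode change took effect.
devlink dev eswitch show $PF

# In switchdev mode each VF gets a representor netdev (name varies by
# driver/firmware; eth0_0 is a placeholder), which standard iproute2
# tooling can bridge and VLAN-tag without Open vSwitch:
ip link add br0 type bridge vlan_filtering 1
ip link set eth0_0 master br0
bridge vlan add dev eth0_0 vid 100 pvid untagged
```

Whether this is sufficient for a given deployment still depends on the card, firmware, and topology, as noted above.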
Depending what you're doing AMD's support for VirtIO Native Context might be a useful alternative (I think it gives less isolation which could be good or bad depending on use).
838 seems to be the real INT8 TOPS number for the 5090; going from ~800 to ~3400 takes a 2x speedup for sparsity (i.e. skipping ops) and another 2x speedup for FP4 over INT8.
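Spelling out that marketing math (figures from the comment above, not a spec sheet):

```python
# Dense INT8 throughput and the two headline multipliers.
dense_int8_tops = 838   # real dense INT8 rate
sparsity_speedup = 2    # 2:4 structured sparsity skips half the ops
fp4_speedup = 2         # FP4 ops counted at twice the INT8 rate

headline_tops = dense_int8_tops * sparsity_speedup * fp4_speedup
print(headline_tops)    # 3352, i.e. the ~3400 headline figure
```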
So it's closer to half the speed than a tenth. Intel also seems to be positioning this card against the RTX PRO 4000 Blackwell, not the 5090, and that one gets more like 300 INT8 TOPS. It also has less memory but at a slightly higher bandwidth. The 5090 is much faster and IIRC priced similarly to the PRO 4000, but is also decidedly a consumer product which, especially for Nvidia, comes with limitations (e.g. no server-friendly form factor cards available, and there are or used to be driver license restrictions that prevented using a consumer card in a data center setup).
Thank you for the correction. That seemed way too lopsided to be believable. This assessment puts the memory-to-TOPS ratio on a much more even footing, which is what you'd expect! I was low-key hoping someone would help me make sense of the wildly disparate figures, because I wasn't seeing it.
To throw one more card into the mix: the AMD R9700 is 378/766 TOPS INT8 dense/sparse, with 32GB of memory at 644GB/s, for ~$1400. Intel is undercutting that nicely here.
You're right that for companies, the pro grade matters. For us mere mortals, much less so. Features like SR-IOV, however, are just fantastic to see! Good job Intel. AMD has been trickling out such capabilities for a decade (cards fused for "MxGPU" capability), and it makes it a much easier buy to just offer it straight up across the lineup.
Especially for exploratory work, 1/10th the perf is fine. Intel isn't able to compete head to head with Nvidia (yet), but VRAM is capability while speed is capacity. There will be plenty of use cases where the value prop here makes sense.
If you stick with your OS/package manager-distributed version, installation isn't painful anymore (provided that version approximately overlaps with your generation of GPU). It's okay for inference, and okay for training if you don't stray too far beyond plain torch. If you want to run code from a paper or other more esoteric stuff you're still going to have a bad time.
The product would have been excellent in 2024, but now it's landfill filler. You can run some small models at pedestrian speed until the novelty wears off, and that's it.
Intel is not looking to the future. If they released an Arc Pro B70 with 512GB of base RAM, now that could be interesting.
I wasn't even aware they ever _really_ released the B60. When I got bored of paying attention, months after the "release", they just didn't exist to buy. I do technically see them on eBay, so yeah, apparently they're out there.
The B60 was released but strictly on a B2B basis until a couple of months ago. The B60 dual, a much rarer bird, was scalped heavily enough to be unobtainable.
Running dual Pro B60 on Debian stable mostly for AI coding.
I was initially confused about which packages were needed (backports kernel + the Ubuntu kobuk-team PPA works for me). After getting that right I'm now running vLLM mostly without issues (though I don't run it 24/7).
At first I had major issues with model quality, but the vLLM XPU guys fixed it fast.
Software capability is not as good as Nvidia's yet (e.g. no fp8 KV cache support last I checked), but with this price difference I don't care. I can basically run a small fp8 local model with almost 100k token context, and that's what I wanted.
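For context, a vLLM launch for that kind of setup might look something like this; the model name is a placeholder and the flag values are illustrative assumptions, not the commenter's actual configuration.

```shell
# --max-model-len gives the ~100k token context window described above;
# --tensor-parallel-size 2 splits the model across the two B60 GPUs.
# "some-org/small-fp8-model" is a hypothetical model id.
vllm serve some-org/small-fp8-model \
    --max-model-len 100000 \
    --tensor-parallel-size 2
```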
This is an fp16 model, which puts it at 54GB in weights (27B params x 2 bytes). I can load it only with fp8 quantization enabled (>= 128k context). I run into this error during generation, though: https://github.com/vllm-project/vllm/issues/36350. It looks like an issue with the flash attention backend. But yeah, if you're OK with fp8 quantization on this model, it fits. I expect that with 64GB of VRAM it would fit without quantization.
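The back-of-envelope weight arithmetic behind those numbers (KV cache not included; decimal GB, matching how the sizes are quoted above):

```python
# Weight memory for a 27B-parameter model at different precisions.
params = 27e9
fp16_gb = params * 2 / 1e9   # 2 bytes per param
fp8_gb  = params * 1 / 1e9   # 1 byte per param
print(fp16_gb, fp8_gb)       # 54.0 27.0
```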
There was a video a little while back where LTT built a computer for Linus Torvalds and put an Intel Arc card inside [1], so I'd imagine Linux support is, at the very least, acceptable.
I've run Arc on Fedora for years, and for general desktop use it's been perfect. For LLMs/coding it's getting better, but it's rough around the edges. Had a bug where trying to get VRAM usage through PyTorch would crash the system, etc.
Quick Sync doesn't do its work on the CPU; it does the work on the integrated GPU. Intel processors that did not have on-board graphics did not have Quick Sync support. See their P-series parts and many of their Xeons, which do not carry Quick Sync support, while the versions with integrated graphics do have it.
AMD chips that have integrated GPUs (their APU series) often do have support for hardware video encoding. Because, once again, it's a function of the GPU and not the CPU.
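A quick way to check this on a given machine is to ask ffmpeg which hardware encoders it exposes; the filenames below are placeholders.

```shell
# qsv = Intel Quick Sync; vaapi covers Intel and AMD iGPUs on Linux.
# On a chip without an iGPU these encoders will be missing or fail at
# runtime -- which is exactly the point made above.
ffmpeg -hide_banner -encoders | grep -Ei 'qsv|vaapi'

# Example transcode using Quick Sync's H.264 encoder:
ffmpeg -i input.mp4 -c:v h264_qsv -global_quality 25 output.mp4
```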
I think this shows a shift in model architecture. MoE and similar architectures need more memory for the compute available than one big dense model with a lot of layers and weights. I think this trend is likely to accelerate: building in that trade-off encourages even more experts, which means more of a trade-off, so more experts...
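A toy illustration of that memory-for-compute trade-off; all numbers below are made up for illustration, not taken from any particular model.

```python
# A dense model and an MoE model with the same total weight count.
dense_params = 70e9   # dense: every weight used for every token
moe_total    = 70e9   # MoE: same total weights to hold in memory...
moe_active   = 10e9   # ...but only a fraction active per token

def fp16_gb(p):
    return p * 2 / 1e9   # fp16 weight memory in decimal GB

print(fp16_gb(dense_params), fp16_gb(moe_total))  # 140.0 140.0 -- same memory
print(moe_active / moe_total)  # ~0.14 -- a fraction of the compute per token
```

So a card like this, with lots of (relatively) cheap VRAM per unit of compute, lines up well with where MoE pushes the ratio.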
Since they fired the entire Arc team (a lot of the senior engineers have already updated their LinkedIns to reflect their new positions at AMD, Nvidia, and others), and laid off most of their Linux driver team (GPU and non-GPU), uh...
Not sure why you'd want this over an Apple setup. The M4 Max has 545GB/s of memory bandwidth: $2k for an entire Mac Studio with 48GB of RAM vs 32GB for the B70.
~$1000 for the Pro B70, if Microcenter is to be believed:
https://www.microcenter.com/product/709007/intel-arc-pro-b70...
https://www.microcenter.com/product/708790/asrock-intel-arc-...
https://www.bhphotovideo.com/c/product/1959142-REG/intel_33p...
When 32GB NVIDIA cards seem to start at around $4000 that's a big enough gap to be motivating for a bunch of applications.
Anybody know better?
I don't have an Intel dGPU, but I suspect the situation there is even worse. I mean you go to the torch homepage: https://pytorch.org/get-started/locally/ and Intel isn't even mentioned. (It's here though: https://docs.pytorch.org/docs/stable/notes/get_start_xpu.htm...)
32GB? Meh.
Announce all you want, if you don't ever ship anything I could buy, who gives a shit.
They let people have the B50 but only released the B60 late in the cycle.
> small fp8 local model with almost 100k token context
It wouldn't fit Qwen3.5 27B, would it? That's the SOTA.
[1] https://www.youtube.com/watch?v=mfv0V1SxbNA
> they put an Intel Arc card inside
Just to add a little bit: Linus requested the card be Intel as well.
Now can we have a 64GB B70 that's available worldwide and not priced for unicorns like the Maxsun B60 Dual model has been?
WTF?
> Intel will provide certified drivers for Windows 11, Windows 10, and Linux.
Windows 11, OK. Linux, OK. But why Windows 10 for a new product?!