
Intel Core i7-5930K and Core i7-5820K Review

Comments 1 to 23 of 23

Spreadie 3rd September 2014, 13:33 Quote
Has there been a snafu with the game benchmarks? They're surprisingly uniform ;)
jrs77 3rd September 2014, 13:34 Quote
The 5820k doesn't look too bad actually, especially considering its price tag.

The only things that make a 4790k a more reasonable choice are the price of the motherboards and RAM that go along with the system, and of course the 60W difference in TDP.

EDIT: Quick check... 5820k + µATX-board + 16GB DDR4-2400 ~ €850 vs. 4790k + µATX-board + 16GB DDR3-1600 ~ €550

And then you'd need a dedicated GPU for the 2011-3 system ofc.

Anyways... really tempted by that 5820k instead of upgrading to a 4790k. Guess I'll still wait for Skylake and have a look at the reviews before I buy a new system. Another year of waiting and holding out :/
enbydee 3rd September 2014, 13:47 Quote
Quote:
Originally Posted by Spreadie
Has there been a snafu with the game benchmarks? They're surprisingly uniform ;)

It was the same story with the 5960X review; GPU limited even on lower settings with a 780, apparently.
loftie 3rd September 2014, 13:48 Quote
What's going on with the idle power consumption graph? You've got two entries for both CPUs being tested.
SpAceman 3rd September 2014, 13:51 Quote
You might need to go for some games that are a bit more CPU reliant.
SchizoFrog 3rd September 2014, 13:58 Quote
Second to last sentence should be i7-4790K not 4970K.
Combatus 3rd September 2014, 14:40 Quote
Quote:
Originally Posted by loftie
What's going on with the idle power consumption graph? You've got two entries for both CPUs being tested.

Please read the entire article, specifically the overclocking section ;) If you run the RAM at 2,133MHz, as we did with the extra tests, you can run the motherboard completely at default settings. If you use 2,666MHz and above, then this automatically changes the CPU strap to 125MHz - the short story being that those stock power readings will be higher, so we've included both numbers so you can see what happens when using 2,133MHz RAM versus 2,666MHz and above.
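For anyone trying to follow the strap behaviour, a rough sketch of the arithmetic may help: every derived clock is the base-clock strap multiplied by a ratio, so moving the strap from 100MHz to 125MHz shifts everything unless the ratios change with it. The ratios below are illustrative assumptions, not values taken from the review's test setup.

Code:
# Rough illustration only: derived clock = base-clock strap x ratio.
# The ratios below are assumptions for a stock-ish 5820k, not the
# review board's actual settings.

def derived_clock(strap_mhz, ratio):
    """Effective clock in MHz for a given strap and multiplier."""
    return strap_mhz * ratio

# 100MHz strap: 33 x 100 = 3,300MHz core; DDR4-2133 is roughly 21.33 x 100
print(derived_clock(100, 33))     # 3300
print(derived_clock(100, 21.33))  # ~2133

# 125MHz strap (auto-selected for 2,666MHz+ RAM): the CPU ratio drops
# (e.g. 26 x 125 = 3,250MHz) to stay near stock, and other board defaults
# shift with it, which is why the 'stock' idle readings differ.
print(derived_clock(125, 26))     # 3250
print(derived_clock(125, 21.33))  # ~2666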
Combatus 3rd September 2014, 14:41 Quote
Quote:
Originally Posted by SpAceman
You might need to go for some games that are a bit more CPU reliant.

Yep, we've had a few comments about this. We were pressed for time getting these out, but we'll be looking to include a game that's more CPU dependent and/or dropping in another card for SLI/CrossFire when we get around to looking at the motherboards.
Combatus 3rd September 2014, 14:43 Quote
Quote:
Originally Posted by SchizoFrog
Second to last sentence should be i7-4790K not 4970K.

Good spot! Thanks!
Maki role 3rd September 2014, 15:59 Quote
Gah, I'm still frustrated that the 5960X is the only 8-core option. I don't see much of a point in upgrading from one 6-core to another (coming from a 3930k). The extra cores would definitely be handy for my usage, but whether it's worth not only the cost of the CPU but also the mobo and RAM is another point entirely. Something just feels odd about upgrading the RAM but not increasing the capacity. At current prices, anything more than 32GB would simply be monstrously expensive.

Also, it seems Mod of the Month is lagging behind again :( Please don't let the same thing happen as last year, when we ended up missing months.
DbD 3rd September 2014, 15:59 Quote
Quote:
Originally Posted by jrs77
The 5820k doesn't look too bad actually, especially considering its price tag.

It's the first Intel 6-core system that's not stupidly expensive. Shame it doesn't o/c particularly well, so per-core performance isn't as good as the cheap 4-cores we are all using.
loftie 3rd September 2014, 17:46 Quote
I should probably read the entire article, but I generally don't read the overclocking section, and I wouldn't expect the pretty graphs to be explained there. Maybe add an NB by the graphs? I've got more questions, too.
GuilleAcoustic 3rd September 2014, 21:40 Quote
Good performance for a single CPU, but it's too expensive and requires a dedicated GPU. For the price I still prefer to build an ITX render farm. I keep thinking that the future of computing is in offloaded computation.
jrs77 3rd September 2014, 21:58 Quote
Quote:
Originally Posted by DbD
It's the first Intel 6-core system that's not stupidly expensive. Shame it doesn't o/c particularly well, so per-core performance isn't as good as the cheap 4-cores we are all using.

I'm looking for silicon to use primarily for CPU-based 3D rendering, and that's where it delivers with its 50% more threads.
Quote:
Originally Posted by GuilleAcoustic
Good performance for a single CPU, but it's too expensive and requires a dedicated GPU. For the price I still prefer to build an ITX render farm. I keep thinking that the future of computing is in offloaded computation.

I'm torn myself there, Guille, as I'm not too keen on using a dedicated GPU or a bigger-than-mITX system. However, this one's actually quite competitively priced, and a render node doesn't perform as well.

One of these 5820k could actually improve my current rendering speed by 200%, and I have to decide between a €500 4790k-based system or an €800 5820k one. €300 isn't that much actually, given that the 5820k has 50% more threads to render on.

I could use my current i5-3450 as a render node for sure, but the 5820k would still beat the combination of an i7-4790k as primary and an i5-3450 as a render node... and I'd actually use less power with the 5820k as a solo system.

EDIT: I don't know how well the 4-core Celeron J1900 performs as a rendering node, but if it's somewhat acceptable, then maybe four of them could be used as a rendering farm, drawing only 15 watts each under load and costing only €70 per board+CPU.

EDIT 2: Just looked it up, and the 16 threads of four J1900s would only just equal a single 4770k with its 8 threads in Cinebench. Same goes for the Athlon 5350, which doesn't perform any better than the J1900 while using double the power.
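As a rough back-of-the-envelope on the J1900 farm idea above (all figures are the post's own estimates, treated here as assumptions):

Code:
# Back-of-the-envelope for the four-node J1900 farm floated above.
# Inputs are the post's own rough figures, not measured numbers.
nodes = 4
watts_per_node = 15   # claimed load draw per board+CPU
eur_per_node = 70     # claimed price per board+CPU

print("farm cost:  ~EUR", nodes * eur_per_node)       # ~EUR 280
print("farm power: ~", nodes * watts_per_node, "W")   # ~60W under load
# Per EDIT 2, those 16 Bay Trail threads only roughly match one 4770k's
# 8 threads in Cinebench, so the appeal is price/power, not outright speed.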
Combatus 3rd September 2014, 22:35 Quote
Quote:
Originally Posted by loftie
I should probably read the entire article, but I generally don't read the overclocking section, and I wouldn't expect the pretty graphs to be explained there. Maybe add an NB by the graphs? I've got more questions, too.

No problem, I've just added this to the idle graph description!
GuilleAcoustic 3rd September 2014, 22:58 Quote
Quote:
Originally Posted by jrs77
I'm torn myself there, Guille, as I'm not too keen on using a dedicated GPU or a bigger-than-mITX system. However, this one's actually quite competitively priced, and a render node doesn't perform as well.

One of these 5820k could actually improve my current rendering speed by 200%, and I have to decide between a €500 4790k-based system or an €800 5820k one. €300 isn't that much actually, given that the 5820k has 50% more threads to render on.

I could use my current i5-3450 as a render node for sure, but the 5820k would still beat the combination of an i7-4790k as primary and an i5-3450 as a render node... and I'd actually use less power with the 5820k as a solo system.

EDIT: I don't know how well the 4-core Celeron J1900 performs as a rendering node, but if it's somewhat acceptable, then maybe four of them could be used as a rendering farm, drawing only 15 watts each under load and costing only €70 per board+CPU.

EDIT 2: Just looked it up, and the 16 threads of four J1900s would only just equal a single 4770k with its 8 threads in Cinebench. Same goes for the Athlon 5350, which doesn't perform any better than the J1900 while using double the power.

Same issue here. I've just moved from a Q6600 with a GeForce 9300 IGP to an i5-4570 with a GTX 770. While it is a lovely machine, especially in a small case like the Lian Li PC-V353... I still think it's too big.

I remember learning 3D at school, 10 years ago, on a single-core Athlon 64 with 1GB of RAM and a GeForce 4 Ti 64MB. I learned to model and animate using low poly and then using subdiv for preview/rendering. My only issue was that rendering my three-minute animation for my diploma took a full month of 24/7 rendering at 100% CPU. I was a student back then and I had to live without the computer for a month.

The idea of having rendering nodes is primarily to offload the main rig, so you can still work on something while the farm renders a scene (even if it renders more slowly than the main rig). Plus, it is expandable.

I've been looking at the FPGA-based "Caustic" cards to help with previewing on the main rig. You could then have a fast CPU (modellers hardly use more than one core when modelling) with the IGP and use the FPGA for ray-traced previews. The only problem is the price: $1,500 for the dual FPGA / $800 for the single FPGA.

http://santyhammer.blogspot.fr/2012/12/imaginationcaustic-graphics-r2500r2100.html

Maki role 3rd September 2014, 23:20 Quote
I've simply decided that GPU-based rendering is the way forward for my work. Usually I can compress it all down to within 6GB, which means I can use a pair of Titans. They absolutely spank my CPU in terms of speed; a 30-minute render may only take a couple of minutes using them, which in my book is phenomenal. The handy thing is that GPU memory is continuing to rise at a rapid rate. I wouldn't be surprised if the Maxwell Titan equivalent features 12GB, much like the K6000 does now. When GPU VRAM stacks are approaching system memory quantities, you know things are changing. I for one hope this trend continues.
theshadow2001 3rd September 2014, 23:30 Quote
I have to question the usefulness of the gaming benchmarks. At least the older reviews used CPU-bound games like Skyrim and Total War. Why bother testing something in a manner that doesn't show the differences between the CPUs? Look at Tech Report's coverage of the 5960X for gaming benchmarks that actually highlight the differences between CPUs.
jrs77 3rd September 2014, 23:48 Quote
There aren't many render engines that use the GPU though, especially not affordable or free ones. Additionally, a single Titan GPU costs more than an i7-5820k + MoBo + 16GB DDR4.

I'm not rendering animations or 100% photorealistic images, but only stills to illustrate websites, brochures, flyers etc. without having to rely on stock images. I'm mainly using Blender and DAZ, where I can quickly throw scenes together and then render them via LuxRender in a somewhat decent quality that strikes as realistic at first glance.
It's quite a lot of images though, and since I started doing this my customers have been asking more and more for this kind of image/illustration, as it's way cheaper than licensing stock images or arranging photo shoots.
These images usually take some 30-60 minutes to render on my current system, and cutting those rendering times down to 10-15 minutes per image would already do the trick for me.

However, I'm looking to build a new system capable of rendering my stuff in an adequate time, but at the same time I want it to be as small, silent and economical as possible.
loftie 4th September 2014, 00:32 Quote
Quote:
Originally Posted by Combatus
No problem, I've just added this to the idle graph description!

Awesome. So the way I understand it, running the RAM at 2,666MHz auto-overclocks the CPU by pushing the BCLK up to 125MHz, so you guys had to lower the BCLK back down to 100MHz? This pushes the power consumption up at idle, but not under load, and there's no change in voltage?
Vallachia 4th September 2014, 03:33 Quote
16/8/4 is not the only option for multi-GPU setups with the 5820K. 8/8/8/4 is also achievable, but so far, looking at the fine print of motherboard manuals, it seems Gigabyte is the only one offering 8/8/8/4 lane splitting. Neither ASRock nor MSI allows it; not sure about Asus yet.

To be clear, the motherboard must support 8/8/8 with the 5820K by including additional clock gens. The only mobos I can say for sure support this are the GA-X99-UD4 and GA-X99-UD3.
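For context, the 5820K exposes 28 PCIe 3.0 lanes (the 5930K has 40), so both splits mentioned above fit inside the lane budget. A quick sketch of the arithmetic, with the two configurations simply taken from the post:

Code:
# The 5820K provides 28 PCIe 3.0 lanes; which splits a given X99 board
# actually exposes depends on its own switching hardware and firmware.
TOTAL_LANES = 28

configs = {
    "16/8/4":  [16, 8, 4],
    "8/8/8/4": [8, 8, 8, 4],
}

for name, slots in configs.items():
    used = sum(slots)
    print(name, "->", used, "of", TOTAL_LANES, "lanes,",
          TOTAL_LANES - used, "spare")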
Maki role 4th September 2014, 13:50 Quote
Quote:
Originally Posted by jrs77
There aren't many render engines that use the GPU though, especially not affordable or free ones. Additionally, a single Titan GPU costs more than an i7-5820k + MoBo + 16GB DDR4.

I'm not rendering animations or 100% photorealistic images, but only stills to illustrate websites, brochures, flyers etc. without having to rely on stock images. I'm mainly using Blender and DAZ, where I can quickly throw scenes together and then render them via LuxRender in a somewhat decent quality that strikes as realistic at first glance.

I thought LuxRender now supports GPU acceleration? Since you're using Blender, I'm surprised that you haven't looked into using Cycles. It's now incredibly proficient, featuring GPU-accelerated particles etc., although SSS is still CPU-only I believe (not for long, I imagine). To me, the boost isn't just in the actual render time, but in my workflow. Being able to alter materials in real time in the viewer is insanely efficient; it must have cut a lot of my work in half, time-wise. In the same way, it works brilliantly for posing, setting up scenes and lighting, as you can instantly see a decent-quality representation of the final image. The best part about Cycles, though, is how many GPUs it supports. I use Titans because they have stacks of VRAM, but it works with plenty of other cards too, and OpenCL support is even making a comeback. Obviously it depends on individual usage, but Cycles is fast becoming a serious rendering option.
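For anyone wanting to try this, a minimal sketch of switching a scene over to GPU Cycles from Blender's Python console follows; the exact preference property names vary between Blender versions (this follows the 2.7x-era API), so treat them as assumptions and check your own build's API docs.

Code:
# Minimal sketch: render the current scene with Cycles on a CUDA GPU.
# Property paths follow the Blender 2.7x Python API and may differ in
# other versions - verify against your own build.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'  # make Cycles the active render engine

# Point the user preferences at CUDA devices (2.7x location)...
bpy.context.user_preferences.system.compute_device_type = 'CUDA'
# ...and tell this scene's Cycles settings to use the GPU instead of the CPU.
scene.cycles.device = 'GPU'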
jrs77 4th September 2014, 14:41 Quote
I don't know. LuxRender supports the GPU, yes, but it doesn't seem to use it for the final render. Maybe I'm missing something there.