bit-tech.net

AMD Kaveri APU details and release date announced

AMD has just kicked off its annual developer conference, APU13, and to get the ball rolling it has revealed a host of details about its upcoming APU, codenamed Kaveri, with the new chip set to arrive on 14 January 2014.

Kaveri will be the company's first chip to fully unify the CPU and GPU on a single die, an approach AMD is calling Heterogeneous System Architecture (HSA).

The key to HSA is that the CPU and GPU, for the first time, share the same memory space and have equal flexibility to create and dispatch workloads. This is in contrast to previous APUs, which required data for GPU workloads to be copied back and forth to a separate GPU memory space.
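
For developers, that shared memory space is the sort of thing exposed by APIs such as OpenCL 2.0's shared virtual memory (SVM): the CPU and GPU work on one allocation rather than copying data into a separate device buffer. The minimal host-side sketch below, in C, illustrates the idea; it is a hypothetical example assuming an OpenCL 2.0-capable runtime and a trivial 'scale' kernel, not AMD sample code.

/* Minimal OpenCL 2.0 coarse-grained SVM sketch (hypothetical example).
 * With SVM the host and GPU touch the same allocation, instead of copying
 * data into a separate device buffer with clEnqueueWriteBuffer. */
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void scale(__global float *data) {"
    "    size_t i = get_global_id(0);"
    "    data[i] *= 2.0f;"
    "}";

int main(void) {
    cl_platform_id plat; cl_device_id dev; cl_int err;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", &err);

    size_t n = 1024;
    /* One allocation, visible to both CPU and GPU -- no explicit copies. */
    float *data = (float *)clSVMAlloc(ctx, CL_MEM_READ_WRITE, n * sizeof(float), 0);

    /* Coarse-grained SVM: map for host writes, unmap before the kernel runs. */
    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_WRITE, data, n * sizeof(float), 0, NULL, NULL);
    for (size_t i = 0; i < n; i++) data[i] = (float)i;
    clEnqueueSVMUnmap(q, data, 0, NULL, NULL);

    clSetKernelArgSVMPointer(k, 0, data);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);

    /* Blocking map waits for the kernel in an in-order queue, then the host
     * reads the results from the very same pointer. */
    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_READ, data, n * sizeof(float), 0, NULL, NULL);
    printf("data[10] = %f\n", data[10]); /* expect 20.0 */
    clEnqueueSVMUnmap(q, data, 0, NULL, NULL);
    clFinish(q);

    clSVMFree(ctx, data);
    return 0;
}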

Also revealed are details of what the highest-spec chip will be. Kaveri will have up to 4 CPU cores (2 modules) based on the company's latest CPU architecture, Steamroller. It will also feature a GPU composed of 8 GCN 1.1 Compute Units (CUs), making for a Stream Processor (SP) count of 512, or the equivalent of a Radeon HD 7750 desktop card.

AMD claims these numbers equate to a floating point throughput of 856GFLOPS which, as AnandTech (via PCWorld) has worked out, points to the top part being called the A10-7850K, clocked at 3.7GHz and with a GPU speed of 720MHz.
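
That figure lines up with the quoted clocks if you assume two single-precision FLOPs per stream processor per clock on the GPU and eight per core per clock from the CPU's shared FPUs; the snippet below is our own back-of-the-envelope check of the arithmetic, not AMD's published breakdown.

/* Back-of-the-envelope check of the 856GFLOPS claim (assumed breakdown). */
#include <stdio.h>

int main(void) {
    double gpu = 512 * 720e6 * 2.0; /* 512 SPs x 720MHz x 2 FLOPs (FMA) = 737.3 GFLOPS */
    double cpu = 4 * 3.7e9 * 8.0;   /* 4 cores x 3.7GHz x 8 SP FLOPs/cycle = 118.4 GFLOPS */
    printf("GPU %.1f + CPU %.1f = %.1f GFLOPS\n",
           gpu / 1e9, cpu / 1e9, (gpu + cpu) / 1e9); /* prints ~855.7 */
    return 0;
}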

Also confirmed is that Mantle, AMD's proprietary low-level API for its GCN hardware, will be supported by Kaveri, and that its new TrueAudio technology will also be incorporated into the chip.

The 14 January launch will, in contrast to previous APU launches, be for desktop FM2+ chips first, with mobile parts to follow soon after. The launch will be the week after the CES trade show, where AMD is expected to provide further details about the new chips, including pricing and specific SKUs.

33 Comments

jrs77 12th November 2013, 11:54 Quote
All nice and dandy, but the CPU-part of this chip is what bothers me the most.

The 512 SPs of the IGP might be the same as on a HD7750, but it only has access to a shared DDR3 memory instead of GDDR5, so it won't come even close to a HD7750 in reality.

Also, we're quite possibly speaking of a 135W part for the A10-7850K, which isn't going to be cooled silently in a small box like a mITX HTPC.

And as I said upfront, I'll be interested when the CPU-part of this chip can come close to the performance of an i5-xxxx, which the current A10-6800k simply doesn't.
Corky42 12th November 2013, 11:56 Quote
Does it seem odd to anyone else that the GPU speed is 720MHz, when the 7750 is 825MHz and the next-gen consoles are around 800MHz? I'm assuming Kaveri shares some similarity with the way the custom-made Jaguar chips share GPU and CPU memory, or am I way off with my assumption?
Hustler 12th November 2013, 12:16 Quote
..And news of desktop FX CPUs is where?

At the end of the day, these FM2+ chips are for budget builds; I'm only interested if AMD are still going to release a proper, unlocked CPU for those who want a genuine alternative to Intel.

Unlocked, 4GHz, 8-core Steamroller if you please, AMD.....
jrs77 12th November 2013, 13:27 Quote
Quote:
Originally Posted by Hustler
..And news of desktop FX CPUs is where?

At the end of the day, these FM2+ chips are for budget builds; I'm only interested if AMD are still going to release a proper, unlocked CPU for those who want a genuine alternative to Intel.

Unlocked, 4GHz, 8-core Steamroller if you please, AMD.....

It will never happen again that AMD develops and manufactures CPUs for high-end desktops. AMD's whole focus is on APUs and their new low-power ARM-based server chips.

They've given up competing with Intel, basically.
Snips 12th November 2013, 13:42 Quote
I completely get that AMD want to go for the lower end of the build spectrum. I just wish there was a single voice from within the bowels of AMD that could stand up and say "Remember when we made stuff that was cool? Can we do that again?"
rollo 12th November 2013, 13:42 Quote
Was not sure if AMD was continuing with its FX range. Thought they were moving on to other things with their module-designed CPUs.

This is still 28nm, so there aren't huge gains to be had in reality. Expect this to be 5-10% better in graphics than the last version, with similar CPU capabilities.
GuilleAcoustic 12th November 2013, 14:25 Quote
Do not forget the GCN, Mantle and TrueAudio additions ... this could make a nice little box. I'm wondering how Mantle-enabled games will perform on this.
bawjaws 12th November 2013, 15:01 Quote
How Mantle performs in general is the $64,000 question, isn't it? Personally, I think it looks promising, but anyone buying AMD for Mantle alone is taking a leap of faith at this point.
GeorgeStorm 12th November 2013, 15:03 Quote
I would very much like them to release a higher-power chip on this chipset (or any other in ITX form!)

Just built myself an FM2-based build since I didn't want to wait and I got a good deal, but I'll be keeping an eye on these.
SAimNE 12th November 2013, 16:22 Quote
Quote:
Originally Posted by GuilleAcoustic
Do not forget the GCN, Mantle and TrueAudio additions ... this could make a nice little box. I'm wondering how Mantle-enabled games will perform on this.
Plus, with HSA, if you get a card it can CrossFire with and 2GB+ of GDDR5, you can easily get a huge level of performance (you could probably find non-reference cards that double the memory or something).
azazel1024 12th November 2013, 19:26 Quote
Mmmm, CPU might be holding you back then though. 2 module/4 core is not a whole lot of processing power. Though it might be enough to get good frame rates in games if you got a 7750 and stuck it in crossfire with this thing.

Just...well, I guess it makes a fine budget machine.

The CPU is still just very, very sad (I think that works out to around 60-75% of a Core i3 Ivy chip in single thread, depending on Steamroller's exact gains, and around 90-110% in multithreaded integer stuff).
SchizoFrog 12th November 2013, 19:42 Quote
For a mATX or mITX build I would still rather save up and then pay for an ASUS 760Ti Mini with pretty much any £80-£180 Intel CPU. Yes it costs more, but you get much more performance, and that build should last for years while still being a small build.

For me, these APUs are only interesting when it comes to laptops and the potential for an NUC-sized build.
GuilleAcoustic 12th November 2013, 20:41 Quote
Quote:
Originally Posted by azazel1024
Mmmm, CPU might be holding you back then though. 2 module/4 core is not a whole lot of processing power. Though it might be enough to get good frame rates in games if you got a 7750 and stuck it in crossfire with this thing.

Just...well, I guess it makes a fine budget machine.

The CPU is still just very, very sad (I think that works out to around 60-75% of a Core i3 Ivy chip in single thread, depending on Steamroller's exact gains, and around 90-110% in multithreaded integer stuff).
Quote:
Originally Posted by SchizoFrog
For a mATX or mITX build I would still rather save up and then pay for an ASUS 760Ti Mini with pretty much any £80-£180 Intel CPU. Yes it costs more, but you get much more performance, and that build should last for years while still being a small build.

For me, these APUs are only interesting when it comes to laptops and the potential for an NUC-sized build.

It all depends on what you need. Why pay the Nvidia + Intel price if all you'll ever need is an APU?
jrs77 12th November 2013, 22:06 Quote
Quote:
Originally Posted by GuilleAcoustic
It all depends on what you need. Why pay the Nvidia + Intel price if all you'll ever need is an APU?

That's right. For a lot of people the AMD APUs are basically all they need for their office and multimedia tasks.

If you're working on your machine - let alone playing games - then there's no way around an Intel CPU currently.
Assassin8or 12th November 2013, 23:33 Quote
Quote:
Originally Posted by jrs77
The 512 SPs of the IGP might be the same as on a HD7750, but it only has access to a shared DDR3 memory instead of GDDR5, so it won't come even close to a HD7750 in reality.

Also, we're quite possibly speaking of a 135W part for the A10-7850K, which isn't going to be cooled silently in a small box like a mITX HTPC.

I shouldn't think it would be that high. The HD7750 is a sub-75W part itself and can be passively cooled. The CPU is not a particularly high-wattage part either.
Quote:
Originally Posted by jrs77
It will never happen again that AMD develops and manufactures CPUs for high-end desktops. AMD's whole focus is on APUs and their new low-power ARM-based server chips.

They've given up competing with Intel, basically.


Which is very sad indeed for the rest of the industry. In time, with profitability, and if there is still money to be made in the x86 business, AMD could return in force, but it would take a ballsy CEO and some outstanding engineers to pull the company out of the mire into which their x86 business has sunk.

I would have loved to have seen AMD continue pushing cores for example. Not the way that they have with the Bulldozer architectures, but real high IPC cores. You may say that we don't need such large numbers of cores, but servers do and it's a nice side benefit to get additional cores on the high end desktop.

Intel have stopped pushing cores so much on the desktop, instead focusing on mediocre 10% IPC improvements and 100MHz speed bumps year to year and integrated GPUs that are never used by most people that build custom machines. Even the new X79 parts are the lowest end of the IB-E parts.
Quote:
Originally Posted by bawjaws
How Mantle performs in general is the $64,000 question, isn't it? Personally, I think it looks promising, but anyone buying AMD for Mantle alone is taking a leap of faith at this point.

I took a punt on Mantle when I saw the HD7970 prices come down. It wasn't the only consideration, but it was certainly part of it, along with boosting my folding output massively.
Quote:
Originally Posted by Snips
I just wish there was a single voice from within the bowels of AMD that could stand up and say "Remember when we made stuff that was cool? Can we do that again?"

This so much!
jrs77 13th November 2013, 07:23 Quote
Quote:
Originally Posted by Assassin8or
I shouldn't think it would be that high. The HD7750 is a sub-75W part itself and can be passively cooled. The CPU is not a particularly high-wattage part either.

If you look at the current A10-6800K and the current FX chips, then 125-135 Watts isn't that high of an estimate tbh.
Quote:
Which is very sad indeed for the rest of the industry. In time, with profitability, and if there is still money to be made in the x86 business, AMD could return in force, but it would take a ballsy CEO and some outstanding engineers to pull the company out of the mire into which their x86 business has sunk.

Yes it is sad, as Intel doesn't need to improve much either, although they actually do quite a lot imho, especially shrinking their nodes further and further, with 14nm to come in 2014.
And although the processing power doesn't increase much, the overall performance gets better this way nevertheless. And they've shown with their Iris Pro chips that they can actually build a good APU to begin with.
Quote:
I would have loved to have seen AMD continue pushing cores for example. Not the way that they have with the Bulldozer architectures, but real high IPC cores. You may say that we don't need such large numbers of cores, but servers do and it's a nice side benefit to get additional cores on the high end desktop.

Intel have stopped pushing cores so much on the desktop, instead focusing on mediocre 10% IPC improvements and 100MHz speed bumps year to year and integrated GPUs that are never used by most people that build custom machines. Even the new X79 parts are the lowest end of the IB-E parts.

More than 4 cores aren't of that much interest for the absolute majority of desktops. Only those who do a lot of rendering are really in need of as many cores as possible, but these people usually work with render nodes to offload the work to a second machine purely meant for the task.

The thing I'm mostly interested in is performance per Watt and single-thread performance. And in this area Intel has beaten AMD since the introduction of the first Core 2 Duo.
AMD could, for example, develop more efficient CPUs to compete with Intel on performance per Watt, but they don't seem to have any interest in that for desktop parts and focus on that area only in the server market with their new multicore ARM-based chips.
Harlequin 13th November 2013, 09:00 Quote
Iris Pro good? Really? It's a larger die than the GTX 660!!

Horrendous cost as well - AMD have the market here, and tbh who actually cares about the latest i7 - games are really GFX limited now.
Gareth Halfacree 13th November 2013, 09:14 Quote
Quote:
Originally Posted by jrs77
More than 4 cores aren't of that much interest for the absolute majority of desktops. Only those who do a lot of rendering are really in need of as many cores as possible, but these people usually work with render nodes to offload the work to a second machine purely meant for the task.
Unless you're a Linux/BSD user, in which case the more cores the better. I use a quad-core chip, and would really like an eight-core when I next upgrade - because it has a direct impact on how quickly I can get things done.

Perfect example: let's say I'm compressing backups. While the traditional bzip2 application is single-threaded, I use lbzip - which is multi-threaded with a pretty nearly linear gain in performance, meaning what would have taken an hour is done in just 15 minutes. What about when I'm reprocessing PDFs to reduce the resolution of the embedded images for posting on the web? Again, normally that'd be a single-threaded operation - but using GNU Parallel to drive Ghostscript means I can run four instances at the same time on my list of PDFs to be processed, again finishing the job in around a quarter the time it would normally take. If I had an eight-core chip, I'd be getting these jobs done in an eighth the time.

Sure, if you're running *Windows* then anything above a quad-core might be a waste except for selected specialist scenarios, but don't tar all computer users with the same brush. My AMD chip might be weak in IPC, but it's a damn sight faster for my workloads than an equivalently-priced dual-core Intel part. S'why I bought it, after all.
Bindibadgi 13th November 2013, 10:02 Quote
As soon as AMD can offload its FPU computations to future GCN, there will be no concern over 'core' count. ALU will do the mundane tasks and you'll have tons of FPU for everything else.

It's a shame AMD hasn't committed to a '6-core' FM2+ though, and just whacked up the TDP for shits-n-giggles. They won't win awards but enthusiasts wouldn't care. FX9000 series still sold, and FM2+ has TrueAudio/PCIe/etc
Harlequin 13th November 2013, 10:40 Quote
Quote:
Originally Posted by Bindibadgi
As soon as AMD can offload its FPU computations to future GCN, there will be no concern over 'core' count. ALU will do the mundane tasks and you'll have tons of FPU for everything else.

It's a shame AMD hasn't committed to a '6-core' FM2+ though, and just whacked up the TDP for shits-n-giggles. They won't win awards but enthusiasts wouldn't care. FX9000 series still sold, and FM2+ has TrueAudio/PCIe/etc


can mantle do it? or can even this do it on an A88x board?


what about onboard GDDR5?
jrs77 13th November 2013, 20:25 Quote
Quote:
Originally Posted by Harlequin
Iris Pro good? Really? It's a larger die than the GTX 660!!

Horrendous cost as well - AMD have the market here, and tbh who actually cares about the latest i7 - games are really GFX limited now.

Iris Pro isn't bigger than a GTX 660. The whole APU package is.

And yes, I do care about single-thread performance a lot actually, and you can as well look at almost every rendering benchmark to get an idea why an i5-4670K is most likely always better than any quad-core from AMD.

I never spoke of the latest i7 there, but about the reasonable $200 parts from Intel.
Quote:
Originally Posted by Gareth Halfacree
Unless you're a Linux/BSD user, in which case the more cores the better. I use a quad-core chip, and would really like an eight-core when I next upgrade - because it has a direct impact on how quickly I can get things done.

Perfect example: let's say I'm compressing backups. While the traditional bzip2 application is single-threaded, I use lbzip - which is multi-threaded with a pretty nearly linear gain in performance, meaning what would have taken an hour is done in just 15 minutes. What about when I'm reprocessing PDFs to reduce the resolution of the embedded images for posting on the web? Again, normally that'd be a single-threaded operation - but using GNU Parallel to drive Ghostscript means I can run four instances at the same time on my list of PDFs to be processed, again finishing the job in around a quarter the time it would normally take. If I had an eight-core chip, I'd be getting these jobs done in an eighth the time.

Sure, if you're running *Windows* then anything above a quad-core might be a waste except for selected specialist scenarios, but don't tar all computer users with the same brush. My AMD chip might be weak in IPC, but it's a damn sight faster for my workloads than an equivalently-priced dual-core Intel part. S'why I bought it, after all.

I speak generally ofc, and I always try to keep in mind what the absolute majority of desktop/notebook users are using their machines for. And in that case we're speaking almost exclusively of Windows-based PCs running nothing more than an office suite, a media player for HD content and maybe a few tools to edit a home video. PCs used for playing some taxing games are already a very small minority of some 5-10%, and another 5-10% are actually running some demanding software like 3D software.

We people in these forums usually forget the fact that we're a negligible minority for the hardware vendors when it comes to volume sales.
Harlequin 13th November 2013, 21:31 Quote
Actually it was the HD 7790 that's smaller than the GT3e

http://hexus.net/tech/reviews/graphics/53185-sapphire-amd-radeon-hd-7790/
160mm2 for the 7790 core

and

http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/4


174mm2 for GT3e (according to anandtech)


Seems Intel have a long way to go to catch up with AMD in this department.

I wonder how much of a loss Intel are taking to actually sell these chips at all....
rollo 13th November 2013, 22:46 Quote
None at all, as they are only sold to Apple. The Iris Pro was made for Apple at Apple's request after all. Not even certain that the chip isn't just for Apple exclusively.

The 4770K is faster than every AMD chip last I checked, even in multi-threaded workloads. If you actually need to make cash from your computer then it's an auto-buy, if you're not after the faster X79 chips.

If you're after an APU then AMD are a good buy.
Bindibadgi 14th November 2013, 01:46 Quote
Quote:
Originally Posted by Harlequin
can mantle do it? or can even this do it on an A88x board?


what about onboard GDDR5?

It's nothing to do with motherboard chipset. Onboard GDDR5 isn't going to happen outside special orders like the PS4. Maybe Ultrabooks - because that's the only scenario you'd have a fixed amount of memory - but GDDR is not exactly low power so that's a fail too.
Harlequin 14th November 2013, 08:53 Quote
Quote:
Originally Posted by rollo
None at all, as they are only sold to Apple. The Iris Pro was made for Apple at Apple's request after all. Not even certain that the chip isn't just for Apple exclusively.

The 4770K is faster than every AMD chip last I checked, even in multi-threaded workloads. If you actually need to make cash from your computer then it's an auto-buy, if you're not after the faster X79 chips.

If you're after an APU then AMD are a good buy.

And a 4770K is more expensive for the CPU alone than an entire AMD APU system.

And define 'faster' - other than the odd exception, when you have a reasonable gfx card, the CPU stops being a limiting factor. Anand proved this. An 8350 is perfectly great for dual cards.
Gareth Halfacree 14th November 2013, 10:06 Quote
Quote:
Originally Posted by rollo
If you actually need to make cash from your computer then it's an auto-buy, if you're not after the faster X79 chips.
Eh? I scrape by quite nicely on my APU. How would an Intel chip - which would cost me several times as much to buy - earn me more money?
GuilleAcoustic 14th November 2013, 10:43 Quote
Quote:
Originally Posted by Gareth Halfacree
Eh? I scrape by quite nicely on my APU. How would an Intel chip - which would cost me several times as much to buy - earn me more money?

I think he meant: "Those working in 3D rendering". But in that case, I'd build a small rendering farm using inexpensive and small nodes rather than fat, expensive X79 or Core Extreme CPUs.

For anything else, an APU is pretty much all anyone needs; throw in RAM according to your needs and an SSD to make it snappier. I saw the personal computer's birth (read: consumer computer) .... and I'll prolly see its death (in its current form). I really think that the era of fat CPUs, GPUs and big boxes will end in the short to mid term. SoC is the future, whether you like it or not; this is where we are heading.

Demanding computing tasks will be offloaded to servers or farms. Home computing / entertainment will move to a single architecture. Console, tablet, smartphone, PC and even connected TV ... they all offer much the same features (internet, gaming, YouTube, social networks and email). This is quite a big redundancy, and sooner or later this whole world will fuse into something unique. Maybe I'm mistaken, but this is how I see the upcoming form of "computing".

The only difference between console, PC, tablet, etc. ... lies in the maximum achievable performance. Once computers all had a sound card, a network card, a video card, etc .... now computers are heading towards motherboard (which houses network and sound) + CPU (with onboard GPU, memory controller, etc.) ... the step to embedding the whole chipset inside the CPU is pretty small, and consoles are getting closer and closer to computer architecture. Time will tell :)
rollo 14th November 2013, 12:57 Quote
For example.

Those needing their computer for maths equations, 3D rendering, photo / video editing, code compiling. I don't know your own use of a computer, Gareth, to say if an Intel chip would make you more money. If you are doing the news for bit-tech on a day-to-day basis then it would not.

What you said has already happened, Guille, at least in the casual sector. The iPad has taken a lot of sales from the PC desktop sector. Even in my own household I'm the only one who still uses their PC on a regular basis; 2 laptops just collect dust. Easier to just use an iPad for general browsing.

If AMD's APUs had launched before the whole tablet revolution in the casual sector, and they were sold to the 2 major brands in Lenovo and Dell, then you never know what would have happened.

Instead the iPad feels quicker than most PCs that do not have an SSD in the tasks it can do. I'll always say the biggest mistake companies made was to not force SSDs into cheaper PCs, as they make the system so much quicker than any CPU would.
Gareth Halfacree 14th November 2013, 14:50 Quote
Quote:
Originally Posted by rollo
Those needing their computer for maths equations, 3D rendering, photo / video editing, code compiling. I don't know your own use of a computer, Gareth, to say if an Intel chip would make you more money. If you are doing the news for bit-tech on a day-to-day basis then it would not.
Correct, it wouldn't. That goes for a goodly chunk of the PCs around today, as well - and it's already been mentioned upthread how unlikely it is that markets where it makes a real and immediate difference to the bottom line are doing the rendering locally anyway. Just look at Nvidia's Grid: virtualised GPUs for offloading your rendering remotely, so you don't *need* a kick-ass workstation at your desk.

Video editing is a good example of the sort of workload where having a good wodge of local compute power is important, and where spending more now will save you money in the long run. Code compiling? Arguable. It's not like you can't be working on something else while your code compiles, and you spend far more time looking at the IDE with your CPU idling than actually burning code. Photo editing? That's RAM-dependent, not CPU - my APU copes quite admirably with me editing print-resolution images, and while certain intensive operations may complete marginally quicker on an Intel chip it would be many years before those savings add up to break-even on the cost difference.

You're dismissing the overwhelming majority of the PC market - those who *don't* do large amounts of video editing, local 3D rendering and the like. You're claiming that the edge-cases who do are the majority, which is so wrong-footed as to be ridiculous. It's not the case that "if you actually need to make cash from your computer then [Intel is] an auto buy," nor that only AMD fanboys buy AMD as you claimed. For the overwhelming majority of the market, an AMD chip will allow them to "make cash from [their] computer" exactly as quickly as an Intel chip - unless you're claiming an Intel chip will help me type faster - but, potentially, at a lower capital expenditure and total cost of ownership. For your edge cases, sure, but next time you fancy arguing the point have a quick look at the comparative sizes of the overall desktop market and the professional workstation market and you'll see just how small a percentage those edge cases make up.

TL;DR: Don't make sweeping generalisations that can be easily disproved.
GuilleAcoustic 14th November 2013, 14:56 Quote
Quote:
Originally Posted by rollo
Those needing their computer for maths equations, 3D rendering, photo / video editing, code compiling.

This is a very, very, very marginal percentage of home computer usage. Even code compiling can be done on a "low end" processor. Most compilation is incremental now; no need for a full rebuild each time you hit the F9 key (or whatever it is :D).

Math equations and 3D rendering can happily be offloaded to a farm, and this is what I'd do if I was living off this. I wouldn't like to wait for the render to end before I could continue using the computer.

Photo and video editing is more about RAM. Filters, for a huge part of them, are multi-threadable and thus offloadable. Video rendering / compression is also multi-threadable, but locally, due to the amount of data to transfer.

I'd love to see an FPGA working alongside a SoC. The FPGA could be reprogrammed on the fly and thus provide a "CPU" matching what you're processing. Need a video-compressing CPU? Flash the FPGA. Need a specialised (de)crypting CPU? Flash the FPGA, etc. etc.

Edit: oh .... and you can remote develop too. I know this is ultra marginal and people will think that I'm mentally ill ... but I've developed and compiled from my phone (Motorola Droid, with full landscape keyboard) using SSH to connect to my desktop home computer. VI was perfectly usable; only the keyboard was preventing fast typing. Then compiling was done as usual with the make command.
PCBuilderSven 14th November 2013, 16:44 Quote
Quote:
Originally Posted by GuilleAcoustic

oh .... and you can remote develop too. I know this is ultra marginal and people will think that I'm mentally ill ... but I've developed and compiled from my phone (Motorola Droid, with full landscape keyboard) using SSH to connect to my desktop home computer. VI was perfectly usable; only the keyboard was preventing fast typing. Then compiling was done as usual with the make command.

Ha, remote compiling, why would you want to do that? My Nokia N900 can run make, gcc, g++, etc. on-device, so you can code and run your code right there, quite useful at points when away from a desktop. While I've never tried it myself, it can reportedly also cross-compile for x86 as well (although at that point I don't see why you would compile on a desktop as well). Alternatively just code in Python and be done with compiling :D
GuilleAcoustic 14th November 2013, 17:25 Quote
Quote:
Originally Posted by PCBuilderSven
Ha, remote compiling, why would you want to do that? My Nokia N900 can run make, gcc, g++, etc. on-device, so you can code and run your code right there, quite useful at points when away from a desktop. While I've never tried it myself, it can reportedly also cross-compile for x86 as well (although at that point I don't see why you would compile on a desktop as well). Alternatively just code in Python and be done with compiling :D

Since most source code files are archived and versioned (SVN, Git, etc.) somewhere on a server nowadays, we could imagine remote compiling them too. Pretty useful when all you have is a thin client, or when you are on the go or lack the processing power.

Executables are pretty lightweight most of the time; compared to the compiling time, sending one back to you is nothing. And while it's being remotely compiled, you can do something else with your computer.

Python is not what I need, sorry. Nice scripting language, but no use for what I do (C/C++ with no screen). Each object compilation can be distributed between several cores / CPUs / machines; you then just have to gather all the .o files and link everything together (but maybe GCC already works this way).
jrs77 15th November 2013, 18:22 Quote
If you're seriously into 3D rendering, then you go buy a small server with as many cores as possible. Something like a G34 board with two octa-cores for just under €1000 (only board + 2 CPUs).

Anyways, for the professional at home who does a lot of video editing or DTP (primarily Adobe CS), a combination of an Intel CPU + HD7750 is still the best option currently. Such a system isn't really expensive, and if you have a business, then you can write it off against taxes anyway.