As reported by SlashGear: When MediaTek announced its deca-core mobile processor, it almost seemed insane in a world that has very much settled on octa-cores. The chip maker, however, has nothing on the silicon produced by researchers at the Department of Electrical and Computer Engineering at the University of California, Davis. Although it definitely won't fit inside a smartphone, tablet, or even a laptop for that matter, the chip boasts of being the world's first kilo-core processor. That's 1,000 processing cores at your service, making even the beefiest gaming rig cry in shame.
Of course, you probably won't be using it for gaming, or for any other consumer purpose. It still exists only under the controlled conditions of a laboratory, but it is nonetheless an achievement worth bragging about. According to electrical and computer engineering professor Bevan Baas, the highest number of cores previously achieved in a multi-core chip was 300. This UC Davis chip easily has more than three times that many.
That's not its only bragging right either. Each processor is an island of its own and can run a tiny program independently of the others. This amounts to a Multiple Instruction, Multiple Data (MIMD) architecture, which is more flexible than the Single Instruction, Multiple Data (SIMD) approach used by most modern commercial processors.
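A rough way to picture that difference is a few lines of Python. This is only an illustrative sketch of the two execution models, not anything resembling KiloCore's actual programming interface, and the two worker functions are invented for the example:

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    # SIMD-style: one instruction stream applied to many data elements in lockstep.
    data = np.arange(8)
    simd_result = data * 2  # the same multiply hits every element at once

    # MIMD-style: each worker runs its own, unrelated program on its own data.
    def checksum(x):
        return x % 251      # one tiny "program"

    def scale(x):
        return x * 3 + 1    # a completely different one

    with ThreadPoolExecutor(max_workers=2) as pool:
        a = pool.submit(checksum, 1000)
        b = pool.submit(scale, 7)
        print(simd_result.tolist(), a.result(), b.result())

In the SIMD half, one operation marches across all the data together; in the MIMD half, each worker is free to execute a completely different instruction stream, which is what lets each of the 1,000 cores run its own tiny program.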
And there's more to it than that. Much like the "True Octa Core" feature MediaTek flaunted a few years ago, each processor can power itself down when not in use, so you won't exactly be drawing 1,000 times the power. In fact, the chip can be powered by a single AA battery.
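To get a feel for why per-core power gating matters, here is a toy back-of-the-envelope model. The per-core power figure is an assumption chosen purely for illustration, not a measured KiloCore number:

    # Toy model: only active cores draw power; gated-off cores draw ~nothing.
    ACTIVE_MW = 0.7   # assumed per-core active power in milliwatts (illustrative)
    GATED_MW = 0.0    # an ideal power gate cuts an idle core's draw to ~zero

    def chip_power_mw(active_cores, total_cores=1000):
        idle_cores = total_cores - active_cores
        return active_cores * ACTIVE_MW + idle_cores * GATED_MW

    print(chip_power_mw(1000))  # all 1,000 cores busy: 700.0 mW
    print(chip_power_mw(100))   # only 100 busy: 70.0 mW, a tenth of the draw

Under a model like this, power scales with how many cores are actually working rather than with how many exist on the die, which is how a 1,000-core chip can plausibly sip from a AA battery.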
In terms of specs, the cores operate at 1.78 GHz, and the chip has been clocked processing 1.78 trillion instructions per second. That figure checks out: 1,000 cores × 1.78 billion cycles per second comes to 1.78 trillion, at roughly one instruction per core per cycle. A special feature of the chip is that the cores send and receive data directly to and from one another instead of going through a common memory pool, such as a shared cache, which at this scale would have been a bottleneck rather than a speed boost. The chip itself was fabricated by IBM using a much older 32nm process. As for what the chip can be used for: if it ever becomes mass-produced and stable, it could be a favorite in media processing, scientific computing, and encryption circles. Basically, anything that requires churning through tons of data in parallel at breakneck speeds.
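That shared-memory-versus-direct-messaging distinction is easy to sketch in miniature. Below is a minimal Python analogy in which two workers hand data straight to each other over a pipe, with no shared memory pool in between; the producer and consumer roles are invented for the example:

    from multiprocessing import Process, Pipe

    def producer(conn):
        # This "core" sends its results directly to its neighbor...
        for value in range(3):
            conn.send(value * value)
        conn.send(None)   # sentinel: no more data
        conn.close()

    def consumer(conn):
        # ...and this one receives them without touching any shared pool.
        while (item := conn.recv()) is not None:
            print("got", item)

    if __name__ == "__main__":
        parent_end, child_end = Pipe()
        p = Process(target=producer, args=(child_end,))
        c = Process(target=consumer, args=(parent_end,))
        p.start(); c.start()
        p.join(); c.join()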
Can you imagine combining this technology with Google's Tensor processors, or with deep neural network systems?