Google AI Can Design Efficient Computer Chips in a few Hours

By: Bill Toulas | August 21st, 2020

Google has tested its machine-learning AI in yet another demanding field: the design of computer chips. According to reports from the tech giant, the artificial intelligence excelled at the task, showing it could produce efficient chip layouts in under six hours. This is an impressive feat considering that the system had never done this before and wasn't specifically programmed for it; it simply went through a training process that involved examining a large body of existing chip designs.

Placing transistors on a chip in the most efficient way possible is no simple task. Logic gates and memory controllers also need to find their place on the canvas, while power optimization and placement density are crucial factors to consider as well.
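To make the trade-off concrete, here is a minimal toy sketch of a placement cost function. This is purely illustrative: the article does not describe Google's actual objective, and all names, weights, and the grid-based density penalty below are assumptions. It combines a standard half-perimeter wirelength estimate with a penalty for overcrowded regions.

```python
from collections import Counter

# Illustrative only: the real objective used by Google's system is not
# described in this article; these functions and weights are assumptions.

def wirelength(nets, positions):
    """Half-perimeter wirelength: for each net (a list of connected
    components), the half-perimeter of the bounding box around them."""
    total = 0.0
    for net in nets:
        xs = [positions[c][0] for c in net]
        ys = [positions[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def density_penalty(positions, grid=4, capacity=2):
    """Penalize grid cells that hold more components than 'capacity',
    a crude stand-in for placement-density constraints."""
    counts = Counter((int(x) // grid, int(y) // grid)
                     for x, y in positions.values())
    return sum(max(0, n - capacity) for n in counts.values())

def placement_cost(nets, positions, w_density=10.0):
    """Lower is better: short wires, no overcrowded regions."""
    return wirelength(nets, positions) + w_density * density_penalty(positions)

# Example: three components connected by two nets.
positions = {"cpu": (0, 0), "mem_ctrl": (8, 0), "logic": (4, 6)}
nets = [["cpu", "mem_ctrl"], ["cpu", "logic"]]
print(placement_cost(nets, positions))  # -> 18.0
```

An optimizer (whether simulated annealing or a learned policy) would then search for the component positions that minimize such a cost.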

One cannot simply cram in as many transistors as possible and expect the processor to work reliably. This is why chip-makers go through years-long optimization cycles, trying to squeeze every possible drop of performance out of a particular architectural generation. For example, Intel released three chip series between 2015 and 2020 based on its 14nm process node.

Google's AI learned from real chip designs, so it didn't have to begin from scratch every time. Moreover, the researchers experimented with imposing specific constraints, such as chip block size limits, timing-critical factors, and power-related restrictions.

Why does that matter? In practice, tech startups could quickly derive chip designs tailored to their exact needs and place an order at a fab for something custom-made. This would also remove the need to pay a high price for someone else's chip, which often includes license costs for technologies that aren't needed at all.

All of this could take layout design and routing experts out of the chip design equation, saving both cost and time, simplifying the testing process, and yielding a processor design with a much higher chance of working as expected without many revisions.

Bill Toulas

More articles from Industry Tap...