How Does the New Artificial Intelligence (AI) Chip Sense and Process Data?
This article is based on the research paper 'Reconfigurable heterogeneous integration using stackable chips with embedded artificial intelligence'. All credit for this research goes to the MIT researchers on this project. Check out the paper and the MIT post.
Every year, a significant amount of waste is generated from electronic gadgets such as cellphones, smartwatches, and other wearable devices that are replaced with newer models. It would be revolutionary if older models could instead be upgraded with new sensors and processors that snap onto the device’s internal chip, reducing both monetary and material waste. MIT engineers have developed a stackable, reconfigurable artificial intelligence chip to make this far-fetched dream a reality. This LEGO-like brick can be added to an existing structure, thereby reducing electronic waste. The team of researchers was supported by the Ministry of Trade, Industry, and Energy (MOTIE) of South Korea, the Korea Institute of Science and Technology (KIST), and the Samsung Global Research Outreach Program. Their work has been published in Nature Electronics.
The device comprises alternating layers of sensing and processing elements, along with LEDs that allow the layers of the chip to communicate optically. Older chip architectures use conventional wiring to relay signals between layers; because these intricate connections cannot easily be broken and rewired, such designs are nearly impossible to reconfigure. Rather than physical wires, the researchers use light to send data through the device, so layers can be swapped out or stacked on for various functions such as sensing light or pressure. Because the wired connection has been replaced with an optical communication system, designers now have the freedom to stack and add chips as needed. The chip can already perform basic image recognition tasks, since its architecture stacks image sensors, LEDs, and processors built from artificial synapses. In their most recent demonstration, the researchers trained a combination of image sensors and artificial synapse arrays to recognize certain letters.
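As a rough software analogy of the architecture described above, the sketch below models the stack as interchangeable layer objects: a sensor layer, an "optical link" standing in for the LED/detector pair, and a processing layer whose artificial-synapse crossbar is approximated by a trainable weight matrix that learns a few toy letter patterns. All class names, the 5x5 patterns, and the training rule are illustrative assumptions, not the authors' hardware or code.

```python
# Conceptual sketch only: a software analogy of the stacked-chip idea
# (sensor layer -> optical link -> artificial-synapse processor).
import numpy as np

class ImageSensorLayer:
    """Stands in for the image-sensor layer: emits a flattened frame."""
    def sense(self, frame):
        return np.asarray(frame, dtype=float).ravel()

class OpticalLink:
    """Stands in for the LED/photodetector pair that replaces physical wiring."""
    def transmit(self, signal):
        # In hardware this would be light; here it is just a pass-through copy.
        return np.array(signal, copy=True)

class SynapseProcessorLayer:
    """Stands in for an artificial-synapse crossbar: a trainable weight matrix."""
    def __init__(self, n_inputs, n_classes, lr=0.05):
        self.weights = np.zeros((n_classes, n_inputs))
        self.lr = lr
    def process(self, signal):
        return self.weights @ signal          # one crossbar multiply-accumulate
    def train(self, signal, target_index):
        target = np.zeros(self.weights.shape[0]); target[target_index] = 1.0
        error = target - self.process(signal)  # simple delta-rule update
        self.weights += self.lr * np.outer(error, signal)

class StackedChip:
    """Layers snap together like bricks; any layer can be swapped out."""
    def __init__(self, sensor, link, processor):
        self.sensor, self.link, self.processor = sensor, link, processor
    def classify(self, frame):
        signal = self.link.transmit(self.sensor.sense(frame))
        return int(np.argmax(self.processor.process(signal)))

# Toy 5x5 binary patterns for the letters "M", "I", "T" (illustrative only).
LETTERS = {
    "M": ["10001", "11011", "10101", "10001", "10001"],
    "I": ["01110", "00100", "00100", "00100", "01110"],
    "T": ["11111", "00100", "00100", "00100", "00100"],
}
def to_frame(rows):
    return [[int(c) for c in row] for row in rows]

chip = StackedChip(ImageSensorLayer(), OpticalLink(), SynapseProcessorLayer(25, 3))
names = list(LETTERS)
for _ in range(50):                           # a few delta-rule passes suffice here
    for i, name in enumerate(names):
        chip.processor.train(chip.sensor.sense(to_frame(LETTERS[name])), i)

for name in names:
    print(name, "->", names[chip.classify(to_frame(LETTERS[name]))])
```

In this analogy, swapping the processing layer for one trained on a different task would not require touching the sensor layer, which mirrors the kind of reconfigurability the researchers describe.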
Regarding future work, the researchers are particularly intrigued by the prospect of applying this architecture to edge-computing devices: self-contained sensors and electronics that operate apart from centralized resources such as supercomputers or cloud-based computing, which would open up a whole new universe of possibilities. The need for multifunctional edge-computing devices will skyrocket as the internet of things takes off, and the team believes its proposed architecture can help because of the flexibility it offers at the edge. The researchers also intend to improve the chip’s sensing and processing capabilities so that it can recognize more complicated images or be embedded in wearable electronic skin and healthcare monitors. According to the researchers, it would be fascinating if consumers could assemble the chip themselves from alternative sensing and processing layers offered separately, selecting from various neural networks based on their image or video recognition requirements.
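The "pick your own layers" idea in the paragraph above can be imagined as a catalog of interchangeable processing modules matched to a sensor's output. The sketch below is purely hypothetical: the module names, input shapes, and task labels are invented for illustration, not products or parts described in the paper.

```python
# Hypothetical catalog of snap-on processing modules a user might choose from.
from dataclasses import dataclass

@dataclass(frozen=True)
class ProcessingModule:
    name: str            # hypothetical module identifier
    input_shape: tuple   # sensor frame size the module expects
    task: str            # what the bundled network was trained for

CATALOG = {
    "letters-tiny":   ProcessingModule("letters-tiny",   (5, 5),   "character recognition"),
    "faces-compact":  ProcessingModule("faces-compact",  (64, 64), "image recognition"),
    "gesture-stream": ProcessingModule("gesture-stream", (32, 32), "video recognition"),
}

def pick_module(task: str, frame_shape: tuple) -> ProcessingModule:
    """Choose a processing layer whose task and input size match the sensor layer."""
    for module in CATALOG.values():
        if module.task == task and module.input_shape == frame_shape:
            return module
    raise LookupError(f"no module in the catalog handles {task} at {frame_shape}")

print(pick_module("video recognition", (32, 32)).name)  # -> gesture-stream
```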