Sony Explains How Its Groundbreaking 2-Layer CMOS Sensor Was Made

Sony has developed the world's first stacked CMOS image sensor technology with two-layer transistor pixels, which roughly doubles the sensor's light-gathering capacity. The company has now provided further details on how it was achieved.

The Revolutionary Sensor

The new sensor from Sony separates the photodiodes and pixel transistors, which are normally placed on the same substrate, onto different layers, as explained in PetaPixel's initial coverage. The result is a saturation signal level that is approximately twice as high, which essentially means twice the light-gathering capacity, dramatically increasing dynamic range and reducing noise.
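To put that claim in context, here is a minimal sketch of how a doubled saturation signal level (full-well capacity) translates into dynamic range. The numbers are purely illustrative assumptions, not Sony's published specifications:

```python
import math

def dynamic_range_db(full_well_e: float, read_noise_e: float) -> float:
    """Dynamic range in dB: ratio of the largest recordable signal
    (full-well capacity) to the noise floor (read noise), in electrons."""
    return 20 * math.log10(full_well_e / read_noise_e)

# Illustrative numbers only -- not Sony's published figures.
read_noise = 2.0         # electrons RMS
conventional_fwc = 6000  # electrons in a small conventional pixel
two_layer_fwc = 2 * conventional_fwc  # saturation signal roughly doubled

print(f"Conventional: {dynamic_range_db(conventional_fwc, read_noise):.1f} dB")
print(f"Two-layer:    {dynamic_range_db(two_layer_fwc, read_noise):.1f} dB")
# Doubling the full-well capacity adds 20*log10(2) ~= 6 dB, i.e. roughly one
# extra stop of highlight headroom at the same noise floor.
```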

Sony specifically says this technology will enable better smartphone photography without increasing the size of the smartphone sensor: the new pixel structure allows pixels to maintain or improve their existing properties even at smaller pixel sizes.
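One way to see why separating the layers helps at small pixel pitches is a rough area budget: in a conventional pixel the photodiode shares the footprint with the readout transistors, while in the two-layer structure the transistors move to a second layer. The sketch below uses assumed, illustrative values for pitch and transistor area, not Sony's figures:

```python
# Rough area-budget sketch: purely illustrative numbers, not Sony's figures.
pixel_pitch_um = 0.8        # assumed pixel pitch in micrometres
transistor_area_um2 = 0.25  # assumed footprint taken by pixel transistors

pixel_area = pixel_pitch_um ** 2

# Conventional pixel: photodiode shares the footprint with the transistors.
conventional_pd_area = pixel_area - transistor_area_um2

# Two-layer pixel: transistors sit on a separate layer, so the photodiode
# can use (nearly) the full footprint.
two_layer_pd_area = pixel_area

print(f"Pixel area:           {pixel_area:.2f} um^2")
print(f"Conventional PD area: {conventional_pd_area:.2f} um^2 "
      f"({100 * conventional_pd_area / pixel_area:.0f}% of the pixel)")
print(f"Two-layer PD area:    {two_layer_pd_area:.2f} um^2 (100% of the pixel)")
# Since saturation signal scales roughly with photodiode area, a smaller
# two-layer pixel can match or exceed a larger conventional one.
```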

Sony Shares Details Behind the Creation of the Sensor

The new information comes from a video captured as part of the all-Sony Semiconductor Solutions Group event "Sense the Wonder Day," which was spotted by Sony Alpha Rumors.

In the video, Keiichi Nakazawa of the second research division at Sony Semiconductor discusses the new sensor and how Sony created it. Nakazawa oversees research and development of new image sensors for mobile applications, the area where Sony's latest technology is most revolutionary given its benefits for smaller sensors.

Nakazawa explains that his team initially set out to determine what the ultimate pixel structure would be, without knowing what that would entail.

“During this discussion, we came to the conclusion that both the photodiodes and the transistors must provide the best performance,” he says. “This resulted in the creation of the two-layer transistor pixel.”

Nakazawa says the resulting sensor was well received and that there are high expectations for it.

“Because the pixel transistors and photodiodes are physically separated in this structure, it is possible to optimize each one. In addition to the expected improvements in pixel performance, such as increased dynamic range and noise reduction, the device offers many additional functions and performance enhancements. The R&D organization is currently conducting various studies to support this idea.”

Nakazawa shares one detail about the process: the heat the new structure must withstand during manufacturing was a significant challenge.

This technology is a stacked device that connects multiple substrates within each pixel unit, which requires nanometer-precise alignment between the photodiodes and the pixel transistors. This was made possible by a process technology known as 3D sequential integration, in which finished wafers are not simply bonded to each other during production.

Instead, after the photodiode layer has been formed, a silicon wafer on which the pixel transistors will be formed is bonded on top. Because the transistors are then patterned on that wafer, Nakazawa explains, alignment accuracy is determined by the lithography rather than by the bonding, which allows extremely precise alignment to be achieved.

The major challenge with this technology is the heat applied during manufacturing after the wafers have been stacked. He continues, “While the heat resistance of bonding technology is typically 400 degrees Celsius in traditional structures, this new structure must be heat resistant to over 1,000 degrees Celsius.”

“To solve this problem, we developed our own bonding technology and designed transistors adapted to this structure.”

Sony has yet to announce when the new sensor will appear in consumer devices or when it plans to manufacture it at scale, but it has stated that it will continue to refine the design moving forward.