Samsung Electronics Co. will launch low latency wide (LLW) DRAMs next year, designed to improve the power efficiency of artificial intelligence applications by 70% compared with regular DRAMs, the company said at its annual investors forum last week.
LLW DRAMs will become its flagship next-generation chips and be embedded in AI devices such as extended reality headsets. Samsung aims to grow artificial intelligence chip foundry sales to about 50% of its total foundry sales within five years.
The new AI chips will enhance data processing speed and capacity by increasing the number of input/output (I/O) terminals in a semiconductor circuit, compared with existing DRAMs.
Stacking processors such as the central processing unit (CPU) and graphics processing unit (GPU) vertically with high-bandwidth memory (HBM) chips increases the processing speed and data capacity of electronic devices.
3 NM PROCESS
Samsung will also further refine its 3-nanometer chip processing technology, currently the industry’s smallest and most advanced process node, to make it suitable for AI applications.
“We can increase (memory chips’) performance by 2.2 times every two years,” Jeong Ki-bong, vice president of Samsung Foundry, the contract chipmaking business of Samsung Electronics, told investors at the IR event.
GDP STRATEGY
At the investors forum, Samsung presented GDP, an acronym for GAA, DRAM and packaging, as the keyword for its new strategy.
GAA, short for gate-all-around, reduces the leakage current of processors with a circuit width of 3 nm or below. It is Samsung’s key architecture to develop next-generation DRAMs and packaging technology.
The 3 nm technology is the first process node to which Samsung applied its first-generation GAA transistors.
Samsung recently hired YoonJung Ahn, a design head at Waymo, Alphabet’s autonomous driving technology unit, as executive vice president to lead its design management center.
The chipmaker also plans to develop a 4 nm AI accelerator, a high-performance computing chip used to process AI workloads.
According to market research firm Omdia, the AI semiconductor market, estimated at $55.3 billion this year, is expected to reach $112 billion by 2027.
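For context, Omdia's figures imply an annual growth rate of roughly 19%. A minimal sketch of that back-of-the-envelope check, assuming "this year" refers to 2023 so the growth spans four years:

```python
# Implied compound annual growth rate (CAGR) from Omdia's projection.
# Assumption: "this year" is 2023, so $55.3bn -> $112bn spans 4 years.
start, end = 55.3, 112.0   # market size, billions of USD
years = 2027 - 2023

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 19% per year
```

The projection is therefore consistent with the AI semiconductor market roughly doubling over the four-year window.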