1. SK Hynix Starts Mass Production of 192GB Memory Module for Nvidia's Vera Rubin
SK Hynix said on Monday it has begun mass production of its 192-gigabyte SOCAMM2 memory module, a low-power AI server part built on the Korean firm's sixth-generation 10-nanometre-class DRAM. The module delivers more than double the bandwidth of conventional server memory while using 75 per cent less power, and it is tailored for Nvidia's Vera Rubin accelerator platform, due to ship in the second half of the year. Nvidia will also source the part from Samsung and Micron, spreading supply across three vendors as demand for AI server memory continues to exceed capacity.
Why it matters: Korea now supplies the scarcest single ingredient in the Nvidia stack, and SK Hynix just beat rival Samsung to the starting line on a chip destined for every large cloud and sovereign data centre being built across Asia this year. Enterprise buyers in Singapore, Japan and India planning 2027 AI infrastructure should assume high-end server memory prices stay firm through next year, and that capacity for Vera Rubin systems will be allocated first to hyperscalers who already have SOCAMM2 supply agreements in place.
2. Alibaba Releases Qwen 3.6 Max Preview to Challenge Frontier Models
Alibaba on Monday rolled out Qwen3.6-Max-Preview, the strongest model yet in its Qwen series, claiming top scores across six coding benchmarks including SWE-bench Pro and Terminal-Bench 2.0. The preview is available through Alibaba Cloud's Bailian platform and Qwen Studio, with an API that is compatible with both OpenAI and Anthropic specifications, allowing developers to swap it into existing pipelines without rewriting code. The company also released Fun-ASR 1.5, a voice model covering 30 languages, and trailed a further unnamed launch for Tuesday.
Why it matters: Alibaba is positioning Qwen as the default open alternative for developers across Southeast Asia who want frontier capability without US export friction, and the OpenAI and Anthropic API compatibility removes the main lock-in that usually protects incumbents. For regional enterprises in Indonesia, Vietnam and Thailand that run coding agents on local cloud, Qwen 3.6 Max now offers a credible option hosted closer to users and priced in renminbi, which matters as AI inference costs become a line item on quarterly budgets.
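The compatibility claim is worth making concrete. Under the OpenAI wire format, switching providers is a configuration change rather than a code change: the request shape stays identical and only the endpoint and model identifier move. A minimal sketch of that swap, where the base URL and model id are placeholder assumptions rather than documented Alibaba values:

```python
# Sketch of the "no rewrite" swap the item describes: an OpenAI-style chat
# request needs only a different base URL and model name to target any
# compatible endpoint. Both values below are hypothetical placeholders;
# the real ones come from the Alibaba Cloud Bailian console.
BASE_URL = "https://your-bailian-endpoint/v1"  # hypothetical endpoint
MODEL = "qwen3.6-max-preview"                  # hypothetical model id


def build_chat_request(prompt: str) -> tuple[str, dict]:
    """Return (url, payload) for an OpenAI-compatible /chat/completions call."""
    url = f"{BASE_URL}/chat/completions"
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload


url, payload = build_chat_request("Write a unit test for parse_config().")
```

With an SDK such as the official OpenAI Python client, the same swap amounts to passing a different `base_url` at client construction and a different `model` per request; the rest of an existing pipeline is untouched.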
Read more: https://decrypt.co/364948/alibaba-qwen-3-6-max-preview-most-powerful-model
3. TSMC Raises 2026 Outlook as AI Chip Demand Lifts Profit 58 Per Cent
TSMC reported first-quarter net income of NT$572 billion, up 58 per cent year on year and ahead of analyst forecasts, and raised its full-year revenue guidance to growth of more than 30 per cent in US dollar terms. Chief executive CC Wei described AI chip demand as "extremely robust" and said supply still cannot keep up with orders, with second-quarter revenue now expected to land between US$39 billion and US$40.2 billion. Capital expenditure is set to trend toward the upper end of the existing US$56 billion ceiling, giving the foundry more capacity for 2 nanometre and advanced packaging later this year.
Why it matters: TSMC is the single clearest barometer of the AI buildout, and a raised 2026 outlook signals the hyperscaler capex cycle in China, Japan and India is still accelerating rather than plateauing. For Asia-based chip designers, equipment suppliers and data centre developers, this confirms another 12 to 18 months of tight foundry capacity, pushing pricing leverage toward the supply side and leaving latecomers scrambling for 2027 allocations.
Read more: https://www.cnbc.com/2026/04/16/tsmc-q1-profit-58-percent-ai-chip-demand-record.html