
Quoc Hung Vuong

Module 7-8
1. What is the dominant product technology used in the industry in which your company is
based?
Nvidia is currently the leader in the AI semiconductor and high-performance computing (HPC)
industry. The key product technologies Nvidia holds in this industry are outlined below.
First is the GPU, Nvidia's core technology, originally intended for gaming graphics but later
optimized very effectively for AI and HPC. The company recently launched the RTX 5070, based on
the Blackwell architecture, which provides performance comparable to the RTX 4090 at a lower
price of 549 USD, making it accessible to many more customers. Alongside it, the RTX 5090,
launched in 2025, uses 92 billion transistors, has AI-enhanced ray tracing, and is up to 3 times
faster than the RTX 4090. Also in 2025, Nvidia is expected to launch data-center GPUs on the
Blackwell architecture for AI training and inference, with up to 5 times the performance of the
H100, an important product discussed below.
The next technology is AI accelerators and Tensor Cores, which have enabled Nvidia's
breakthroughs in AI. The H100 Tensor Core GPU (released in 2023) has been widely adopted for AI
workloads. In addition, the Blackwell B100, launched in 2025, accelerates AI inference by up to
30 times while reducing power consumption by 40%, and the DGX SuperPOD, Nvidia's AI
supercomputer system, helps train models like GPT-4 up to 5 times faster. Nvidia also supports
sub-8-bit data formats such as MXFP6 and MXFP4 through the Blackwell architecture's
5th-generation Tensor Cores, improving both performance and accuracy in AI computations.
Among Nvidia's technologies, CUDA (Compute Unified Device Architecture) is the indispensable
software platform that has helped Nvidia dominate the AI market. CUDA lets programmers use GPUs
to accelerate AI, deep learning, and HPC algorithms. More than 4 million AI developers currently
use it, and it forms a significant barrier to entry for Nvidia's competitors.
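To make the CUDA programming model concrete, here is a minimal plain-Python sketch of its grid/block/thread indexing pattern. This is an illustration of the idea only, not Nvidia's API: real CUDA kernels are written in CUDA C/C++, and the threads actually run in parallel on the GPU rather than in a loop.

```python
# Illustrative sketch of CUDA's data-parallel model in plain Python.
# In a real CUDA C kernel, each thread computes one index as
#   i = blockIdx.x * blockDim.x + threadIdx.x
# and the GPU runs all threads in parallel; here we simulate that
# indexing sequentially to show how the work is partitioned.

def vector_add(a, b, threads_per_block=4):
    n = len(a)
    out = [0.0] * n
    num_blocks = (n + threads_per_block - 1) // threads_per_block  # ceiling division
    for block_idx in range(num_blocks):              # the "grid" of blocks
        for thread_idx in range(threads_per_block):  # threads within one block
            i = block_idx * threads_per_block + thread_idx
            if i < n:                                # guard against running past the array
                out[i] = a[i] + b[i]
    return out

print(vector_add([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]))
# prints [11, 22, 33, 44, 55]; on a GPU each element would be one independent thread
```

The same partitioning idea is what CUDA libraries such as cuDNN apply, at far larger scale, to matrix and tensor operations.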
And finally, HBM3 (High Bandwidth Memory 3) is a high-speed memory technology that
Nvidia is currently utilizing, offering significantly higher bandwidth compared to GDDR6. A
single HBM3 stack can achieve up to 819 GB/s, whereas GDDR6 typically ranges from 448
GB/s to 768 GB/s, providing up to twice the bandwidth depending on configuration. Moreover,
in 2025, Nvidia's Blackwell GPUs are expected to support HBM3E, an enhanced version of
HBM3, further accelerating AI processing performance. HBM4, however, is projected to be
introduced with the next-generation Rubin architecture, anticipated to arrive in 2026.
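A quick sanity check of the bandwidth comparison above, using only the per-stack numbers quoted in this paragraph (real GPUs combine several HBM stacks, so total memory bandwidth is considerably higher):

```python
# Bandwidth figures quoted above (GB/s), checking the
# "up to twice the bandwidth" claim for one HBM3 stack vs GDDR6.
HBM3_STACK = 819
GDDR6_LOW, GDDR6_HIGH = 448, 768

ratio_vs_low = HBM3_STACK / GDDR6_LOW    # best case for HBM3
ratio_vs_high = HBM3_STACK / GDDR6_HIGH  # worst case

print(f"{ratio_vs_low:.2f}x vs low-end GDDR6")   # prints 1.83x
print(f"{ratio_vs_high:.2f}x vs high-end GDDR6") # prints 1.07x
```

So the advantage ranges from marginal to nearly 2x per stack, depending on which GDDR6 configuration it is compared against.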
In summary, Nvidia's core product technologies are GPUs, Tensor Cores, and CUDA, along with HBM
and AI accelerators. GPUs and CUDA are Nvidia's main competitive advantages at present, making it
the top choice for AI training and one of the first choices for AI inference. There is still
fierce competition from Google TPUs, AI ASICs, and RISC-V, but in the short term Nvidia maintains
its position.
2. Are technical standards important in your industry? If so, what are they?
The AI semiconductor industry is a high-tech field, and technical standards certainly play a key
role in ensuring compatibility between hardware, software, and the AI ecosystem. For Nvidia,
compliance and maintenance of technical standards is a key factor in success in the GPU, AI
accelerator, and data center markets.
First are software standards and AI programming platforms. Nvidia's CUDA is a proprietary
standard: a GPU programming platform developed by Nvidia that provides an application programming
interface (API), a software development kit (SDK), and libraries such as cuDNN, TensorRT, and
RAPIDS, all optimized for Nvidia GPUs. Thanks to its popularity, CUDA is a competitive advantage
for Nvidia and can be considered the de facto industry standard in AI. Next is ONNX (Open Neural
Network Exchange), an open AI model format that enables models to run on a variety of hardware
platforms (such as Nvidia, AMD, and Intel), reducing dependence on proprietary stacks like CUDA
and making the AI ecosystem more flexible. ONNX is developed by Microsoft together with a group
of large technology companies, with Nvidia among the supporters. Similarly, model-serving APIs
such as the OpenAI API (for models like GPT and DALL·E) abstract applications away from the
underlying hardware, further reducing lock-in. Even so, Nvidia GPUs remain among the
best-optimized hardware for AI models using ONNX.
Next is the MLPerf benchmark, which measures AI performance. MLPerf is an objective benchmark
suite used to evaluate AI training and inference performance. It helps customers and businesses
compare AI GPUs across manufacturers and determine which products suit which data centers,
thereby helping shape the AI market. For example, the Nvidia H100 achieved the highest inference
performance in MLPerf 2024, far surpassing the AMD Instinct MI300X.
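In practice, buyers often normalize raw MLPerf throughput numbers against a reference system to compare accelerators. The sketch below shows that arithmetic with hypothetical placeholder numbers, not real MLPerf scores:

```python
# MLPerf reports raw throughput (e.g. samples/second); a common way to
# read the results is to normalize them to one reference accelerator.
# All numbers below are hypothetical placeholders for illustration.
results = {
    "Accelerator A": 5400.0,   # samples/sec on some inference workload
    "Accelerator B": 3100.0,
    "Accelerator C": 2700.0,
}
reference = "Accelerator C"

for name, throughput in results.items():
    relative = throughput / results[reference]
    print(f"{name}: {relative:.2f}x the reference")
```

This kind of normalization is what headline claims like "far surpassing the MI300X" summarize: a ratio of two measured throughputs on the same workload.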
PCI Express (PCIe) is a GPU connection standard: a high-speed protocol linking GPUs, CPUs, and
other peripherals in a computer system, managed by PCI-SIG. PCIe 5.0 and 6.0 both provide high
bandwidth, meeting the fast data-transfer needs of AI and HPC, and Nvidia GPUs such as the H100
series, the RTX 40 series, and the upcoming Blackwell B100 all use PCIe to connect to the system.
Beyond that, Nvidia has NVLink, a proprietary GPU interconnect that moves data between GPUs much
faster than PCIe. Specifically, NVLink 5.0 is 5 times faster than PCIe 5.0, allowing multiple
Nvidia GPUs to be linked into a powerful AI supercomputer. This technology is used in AI
supercomputers, data centers, and HPC systems such as the Nvidia DGX GH200. NVLink is thus both a
technology and a benchmark for GPU interconnect speed.
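A rough back-of-the-envelope calculation shows why interconnect speed matters for multi-GPU AI systems. The ~64 GB/s figure for PCIe 5.0 x16 is an approximation, and the NVLink number simply applies the "5 times faster" claim from the text:

```python
# Time to move a large model's weights between GPUs over PCIe 5.0
# vs NVLink, using the 5x ratio cited above.
PCIE5_GBPS = 64.0                 # approx. PCIe 5.0 x16, one direction
NVLINK_GBPS = PCIE5_GBPS * 5      # per the "5x faster" claim in the text

model_size_gb = 80.0              # e.g. weights of a large language model

t_pcie = model_size_gb / PCIE5_GBPS
t_nvlink = model_size_gb / NVLINK_GBPS
print(f"PCIe 5.0: {t_pcie:.2f} s, NVLink: {t_nvlink:.2f} s")
# prints PCIe 5.0: 1.25 s, NVLink: 0.25 s
```

When GPUs exchange data thousands of times per training run, this per-transfer gap compounds into the large end-to-end speedups Nvidia advertises for NVLink-connected systems.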
Next are the JEDEC memory standards, managed by the JEDEC Solid State Technology Association.
JEDEC sets graphics memory standards such as GDDR6, GDDR6X, and GDDR7, ensuring compatibility and
performance across manufacturers. Adhering to these standards ensures that Nvidia's GDDR memory
follows a global specification, keeping its GPUs compatible with other hardware.
The fifth set of standards comes from IEEE, managed by the IEEE Standards Association, which sets
important standards in telecommunications, HPC, and GPU-CPU interconnection for optimal AI
computing. Nvidia GPUs adhere to IEEE's high-performance data interconnect standards to ensure
reliability; these standards also underpin the high-speed Ethernet protocols used in AI
supercomputers, which makes them essential.
Finally, there are the ISO/IEC security and compliance standards, managed by the International
Organization for Standardization. These help ensure the security and safety of AI systems and the
data processed on GPUs. Nvidia must strictly adhere to ISO/IEC 27001 for AI data management and
ISO 26262 for AI GPUs used in autonomous vehicles; failure to comply could lead to serious legal
issues in the future.
As such, technical standards are crucial in the semiconductor industry, and Nvidia needs to
comply with the above standards to ensure that its GPUs perform optimally and are compatible
with the global AI ecosystem.

3. Where is the dominant technology in your industry on its S-Curve? Are alternative
competing technologies being developed that might give rise to a paradigm shift in your
industry?
In traditional S-curve terms, Nvidia's AI GPU technology (currently the dominant technology in
the AI hardware market) is in its saturation phase. Initially, AI GPUs developed rapidly with
breakthroughs in hardware and software, specifically CUDA, Tensor Cores, and high-bandwidth HBM
memory. After the inflection point, however, the pace of innovation began to slow, and AI GPUs
are now showing diminishing returns: each further improvement requires huge R&D spending, and
there are signs the technology is approaching its natural limits. Specifically, with up to 85% of
the global AI GPU market, Nvidia has almost no room left to expand its share, and most potential
customers already use its products. Although Nvidia's revenue rose by a whopping 112% in Q3 2024,
the main driver was the AI boom and strong demand for AI GPUs, not a technological breakthrough.
New products such as the B100 and B200 bring improvements but are not breakthroughs over the
H100, in contrast to the significant leap the H100 made over the A100.
Although AI GPUs may have peaked, Nvidia still maintains market dominance thanks to several
factors. First is the CUDA software ecosystem: major AI frameworks such as TensorFlow and PyTorch
are tightly integrated with CUDA, making it the default choice in the industry. Second, Nvidia
keeps raising hardware performance, with HBM3 providing substantially more bandwidth than GDDR6
and NVLink enabling GPU-to-GPU links 5 times faster than PCIe 5.0, so Nvidia retains an advantage
and a technology gap over competitors. Beyond that, Nvidia has expanded into cloud AI services,
with more than 100 businesses deploying Nvidia DGX Cloud to train AI models, reducing its
dependence on hardware sales. For all these reasons, Nvidia still dominates this market.
However, some technologies could threaten the AI GPU's dominance and lead to a paradigm shift in
the industry. The first is AI accelerators such as the Google TPU. Designed specifically for AI
workloads, TPUs can outperform GPUs on certain tasks: the TPU v5, although behind in AI training,
is already faster than Nvidia GPUs in inference. If its training capability improves, or if the
market tilts heavily toward inference, the likelihood of a paradigm shift could be around 60%.
Next are AI ASICs: Tesla's Dojo chip, optimized for self-driving-car AI, is already faster than
the H100 on some tasks, and Amazon's Trainium 2 offers lower-cost AI training that could threaten
Nvidia's data-center share. AI ASICs can replace AI GPUs in specialized areas, though Nvidia
retains the advantage of the CUDA ecosystem. Then there is the RISC-V architecture for AI chips,
an open instruction set architecture that, unlike x86 and ARM, is not owned by any single
company. It allows anyone to develop AI processors without paying royalties, reducing chip
manufacturing costs and increasing flexibility. SiFive and Alibaba currently use RISC-V to
develop AI chips and reduce their dependence on x86 and ARM, and it lets companies design custom
AI accelerators at lower cost without necessarily depending on Nvidia GPUs. A frequently cited
example is DeepSeek AI, which reportedly trained a powerful AI model for only about 6 million
USD, far below the cost of typical Nvidia GPU-based solutions. If AI systems can operate
effectively without Nvidia GPUs, Nvidia's long-term position will certainly be severely affected.
If RISC-V designs overcome their limitations in AI training and gradually erode dependence on the
CUDA ecosystem, the chance that RISC-V AI chips create a paradigm shift could be as high as 90%.
In the short term, a complete replacement for Nvidia GPUs is almost impossible, but in the long
term Nvidia must stay vigilant and keep innovating and adapting to the market to ensure its
position is not shaken.

4. Is your company creating value or lowering the costs of value creation by realizing
location economies, transferring distinctive competencies abroad, or realizing cost
economies from economies of scale? If not, does it have the potential to?
Nvidia is known for its huge revenue and profit margins. Currently, the company applies three
very important strategies: location economies, transferring distinctive competencies abroad, and
economies of scale. For location economies, Nvidia has expanded its R&D centers and tapped talent
around the world, including in the US, Germany, Israel, the UK, India, and China. For example,
Nvidia has partnered with Indian companies at its Bengaluru AI research center to develop
lower-cost AI software. The company does not manufacture its own chips but focuses on design and
cooperates with firms in lower-cost regions. Specifically, TSMC (Taiwan) and Samsung (South
Korea) produce most of its GPUs, reducing costs and optimizing the supply chain, while
partnerships in Vietnam and Malaysia for AI chip manufacturing take advantage of lower labor
costs. Nvidia has committed to investing 4-4.5 billion USD over 4 years to move part of its
production chain to Vietnam, and Vietnamese companies have joined in: Viettel has invested 100
million USD and FPT 200 million USD to jointly develop AI data centers and an AI cloud ecosystem
with Nvidia. Nvidia is also targeting European Chips Act tax incentives by expanding production
in Germany and the Netherlands, helping it cut production costs by about 15% through optimized
production locations.
Next, Nvidia has transferred its distinctive competencies abroad by expanding its AI and GPU
technology internationally and cooperating with global technology corporations. Examples include
cooperation in the Middle East, supplying GPUs and AI software through the Abu Dhabi AI Lab; a
partnership with Mercedes-Benz to provide AI technology for self-driving cars; and a partnership
with Tata Group (India) to support AI applications in Indian industry. Even in the Chinese
market, despite export-ban policies, Nvidia has maintained revenue by exporting lower-end chip
versions. As a result, Nvidia's international revenue has grown by 20%, helping the company
diversify its markets rather than depending on the US alone.
Finally, through economies of scale, Nvidia mass-produces AI GPUs to reduce unit costs.
Statistics show that more than 4.5 million H100 GPUs have been shipped since 2023, giving Nvidia
the largest scale advantage in the industry. On top of that, the CUDA ecosystem, with more than 4
million AI developers, including those at OpenAI, Microsoft, and Meta, has helped Nvidia maintain
its competitive position.
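The economies-of-scale logic reduces to simple arithmetic: fixed costs such as R&D and chip tape-out are spread over more units as volume grows. The cost figures below are hypothetical, for illustration only:

```python
# Economies of scale in one formula: unit cost = fixed cost / volume
# plus variable cost per unit. Figures are hypothetical placeholders,
# not Nvidia's actual cost structure.
FIXED_COST = 2_000_000_000    # e.g. R&D + tape-out for one GPU generation
VARIABLE_COST = 3_000         # manufacturing cost per GPU

def unit_cost(volume):
    return FIXED_COST / volume + VARIABLE_COST

for volume in (100_000, 1_000_000, 4_500_000):
    print(f"{volume:>9,} units -> ${unit_cost(volume):,.0f} per unit")
```

At small volumes the fixed cost dominates; at millions of units it nearly vanishes per unit, which is why shipping GPUs at H100-scale volumes confers such a cost advantage.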
From these factors, we can see how effectively Nvidia uses its strategies to reduce costs and
increase revenue.

5. What strategy is your company pursuing to compete globally? In your opinion, is this the
correct strategy, given cost pressures and pressures for local responsiveness?
Currently, Nvidia pursues a flexible global approach that is effectively a transnational
strategy, combining global standardization with local responsiveness. Through this strategy,
Nvidia can expand its markets, optimize its supply chain, and adapt to local requirements,
helping maintain its position.
First, Nvidia has expanded its AI and data-center markets by focusing on AI and cloud computing.
It has partnered with the UAE government to supply AI GPUs for the country's national AI
development strategy, and has allied with Tata Group to expand its AI market share in Asia.
Nvidia is thus growing its AI market share globally and diversifying its revenue rather than
relying only on the US, Europe, and China.
Nvidia also continually optimizes its supply chain to reduce costs. Specifically, it has shifted
parts of its production chain to cut costs and avoid geopolitical risk, investing in and
cooperating with factories in Vietnam, Malaysia, and Germany, where labor costs are lower and
US-China trade conflicts can be avoided. The company has likewise invested in manufacturing in
the US to reduce its dependence on Taiwan, expanding facilities in Arizona (with TSMC) and Texas,
and it has received 15 billion USD under the CHIPS Act to develop domestic AI chip production.
These moves minimize supply-chain risk, lower production costs, and guard against supply
disruption.
Finally, Nvidia adapts to local requirements, especially in the Chinese market. When the US
restricted exports of high-end chips, Nvidia produced the A800 GPU version for China, and it has
adjusted its AI software to comply with EU data protection regulations. Nvidia also cooperates
with domestic enterprises to expand markets, such as working with Viettel and FPT in Vietnam to
deploy AI, and signing agreements with European carriers to provide AI for telecommunications,
helping it penetrate the European market.
With these strategies, Nvidia focuses heavily on data centers and AI software, maintaining its
leading position in the AI market while reducing production costs, minimizing geopolitical risk,
optimizing the global supply chain, and meeting local needs, which helps it penetrate demanding
markets such as China and Europe. In my opinion, these strategies are correct and well suited to
Nvidia, given the cost pressures and pressures for local responsiveness it faces.

6. What major foreign market does your company serve, and what mode has it used to
enter this market? Why is your company active in these markets and not others? What are
the advantages and disadvantages of using this mode? Might another mode be preferable?
Among the foreign markets Nvidia serves, it is impossible not to mention China, which accounts
for 12% of Nvidia's total global revenue, equivalent to 13.5 billion USD over the four quarters
up to early 2025. Nvidia held 90% of the AI chip market in China before the US imposed export
restrictions. Nvidia mainly uses direct export to supply Chinese companies. The strengths of this
mode are low cost and quick market access, but it leaves Nvidia dependent on US-China trade
policy. Nvidia could consider a joint venture with Chinese technology companies to maintain its
AI market share in China without violating US law, although the terms of such cooperation would
certainly be a big challenge for both sides.
The second-largest market is Europe, with high AI GPU demand from data centers, scientific
research, and industrial applications. Automotive, financial, and AI companies in Germany, the
UK, and France are investing heavily in AI and HPC systems. Nvidia has exported directly and
entered strategic partnerships with European governments to provide its technology. With this
approach, Nvidia has penetrated a large, less risky market than China, supported by tax
incentives and favorable trade policies, although it still competes with European technology
companies in some segments. Nvidia could consider opening a wholly owned subsidiary in Europe to
establish a stronger, clearer presence in the region.
Next is India, whose AI market is growing fastest, with huge demand for AI GPUs and cloud
computing infrastructure. Here, Nvidia has entered a strategic partnership with Tata Group to
provide its technology. The advantages are high growth potential and a large AI talent pool,
though India lacks the mature AI ecosystem of the two markets above.
Finally, there is the UAE, the AI hub of the Middle East. Nvidia exports AI GPUs directly to the
UAE government and large corporations, alongside strategic cooperation to supply AI GPUs for the
national AI computing system. The market there is booming, less competitive, and strongly
supported by the government, but Nvidia needs long-term investment, such as building an AI center
there, to fully exploit the market and consolidate its position.
Nvidia uses direct exports and strategic cooperation to distribute AI GPUs in major markets such
as China, Europe, India and the UAE, and is expected to expand investment in Europe and India,
while seeking to maintain market share in China through strategic cooperation.

