DC's China Chip Divide

Reuters

(Artificial Intelligencer is published every Wednesday. Think your friend or colleague should know about us? Forward this newsletter to them. They can also subscribe here.)

Both of this year's mega M&A deals have landed squarely in one corner of tech: cybersecurity, with AI on the minds of both buyers.

This morning, Palo Alto Networks made headlines with a $25 billion cash-and-stock deal to acquire identity security leader CyberArk - the company's largest deal to date - as it races to deliver a comprehensive suite of cybersecurity products amid surging AI-driven demand.

Palo Alto CEO Nikesh Arora attributed the mega deal to the rise of AI and the explosion of machine identities, which he said made it clear that the future of security must be built on the vision that "every identity requires the right level of privilege controls."

It's the second-largest deal ever for an Israeli tech company, coming just months after Google's $32 billion purchase of cloud security firm Wiz.

AI threats have become the defining theme in security, Wiz CTO Ami Luttwak told me. From deepfake impersonations to automated phishing to rapid-fire websites created by vibe coding, companies must contend with an ever-growing volume of AI-derived software launched at unprecedented speed - and that means security tools need to evolve just as fast. This is fueling a shift from manual, service-based approaches to real-time detection and automated protection.

As enterprises look to streamline vendors after numerous breaches have exposed the limits of patchwork security, don't be surprised if more consolidation is on the way.

In this week's newsletter, we'll dive into the divided perspectives coming out of Washington on how America exports its AI technology, take a closer look at new data on where ChatGPT gets its knowledge, and explore the latest AI model architecture that's creating a buzz among researchers.

OUR LATEST REPORTING ON TECH AND AI

Exclusive - Nvidia orders 300,000 H20 chips from TSMC due to robust China demand

Google to sign EU's AI code of practice despite concerns

Trump administration to supercharge AI sales to allies, loosen environmental rules

Chinese AI firms form alliances to build domestic ecosystem amid US curbs

Voice actors push back as AI threatens dubbing industry

'It's the most empathetic voice in my life': How AI is transforming the lives of neurodivergent people

DC'S CHINA CHIP DIVIDE

American lawmakers can't seem to agree on how best to use the nation's powerful AI technology to shape the world order. The debate in Washington - marked by two sharply different visions for U.S. AI regulation and export controls - puts the valuable chips of Nvidia (NASDAQ: NVDA) at its center, as America navigates its most important bilateral relationship of the century: with China.

For now, the camp that favors a more open approach appears to have the upper hand. In a dramatic policy reversal, the Trump administration lifted a previous ban and allowed Nvidia to resume sales of its H20 GPUs to China. The logic, as White House national economic adviser Kevin Hassett put it, is to maintain America's technological edge: if China's not buying chips from the U.S., then they're innovating and making their own.

That's the view Nvidia CEO Jensen Huang has been advocating as well. His company - the most valuable in the world - still earns a mid-single-digit percentage of its revenue from China (though that number used to be much higher). His vision, which positions U.S.-made chips, software, and cloud infrastructure as the backbone of global AI development, has won key allies, including investor-turned-White House AI and crypto czar David Sacks. This coalition ultimately helped push for the policy reversal on chip sale bans.

This moment reminds me of covering tech in China during the 2010s, when American companies still had a foothold in the country's massive market and tech leaders regularly lobbied Washington to keep those doors open. Today, with the exception of Apple, most U.S. tech CEOs have all but disappeared from China's local market and rarely advocate for access to it.

The impact on Nvidia is already visible. Our exclusive reporting reveals that Nvidia placed an order for 300,000 H20 chipsets with Taiwan's TSMC just last week - a move driven by unexpectedly strong demand from China, strong enough for Nvidia to conclude that its existing inventory alone wouldn't suffice.

Still, some lawmakers from both sides of the aisle and former national security officials are pushing back hard against the administration's move to ease chip restrictions. In a letter this week, they argued the decision would likely weaken the effectiveness of export controls and encourage Beijing to seek more concessions from Washington. There are also concerns that the move could give Beijing a critical advantage, especially in military AI and surveillance.

Either way, China isn't waiting around. Chinese AI companies have formed new industry alliances to foster a self-sustaining tech ecosystem. Huawei has just rolled out its new AI computing system in Shanghai, which some say could rival Nvidia's most advanced chips - a clear sign that China is investing heavily in self-reliance and innovation to bridge any gaps left by U.S. policy.

CHART OF THE WEEK

Many people learn things from ChatGPT now, but where does ChatGPT learn its knowledge? AI startup Profound analyzed 10 million citations on ChatGPT from August 2024 to June 2025, and the results are pretty revealing. ChatGPT shows a clear preference for Wikipedia, which accounts for nearly half (47.9%) of the citations among its top 10 most-cited sources (not of total citations across the entire dataset). It also relies on media outlets like Forbes, Reuters, and Business Insider to provide more up-to-date information. OpenAI, the company behind ChatGPT, has been actively striking deals with media companies to crawl and cite their content. In comparison, Google's AI Overview leans more heavily into the Google ecosystem, with almost 19% of its top sources coming from YouTube.

WHAT AI RESEARCHERS ARE READING

By Kenrick Cai, Tech Correspondent

What comes after the Transformer model, the "T" in ChatGPT? A research paper this month introduced "mixture-of-recursions" (MoR for short), billed as a potential alternative to the popular Transformer models, which were developed by Google researchers in 2017 and went on to form the technical basis for the current AI race.

Google is a contributor on the MoR paper, in collaboration with Canadian AI research institute Mila, the Université de Montréal, and the Korea Advanced Institute of Science and Technology.

MoR builds upon transformer technology in two key ways. Put simply, transformers process text through a series of steps, each step building upon the last with its own set of instructions to further refine the AI's understanding of the text's meaning; MoR uses a smaller number of steps but runs them repeatedly, reusing the same instructions. And while standard transformers process every word with the same depth, MoR routes simple words through fewer of those repeated computations and complex words through more.
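
To make the two ideas concrete, here is a minimal, illustrative sketch - not the paper's actual implementation - of a shared block of computation reused across recursion steps, plus a lightweight router that assigns each token its own depth. The class name, hyperparameters, and routing rule below are assumptions for illustration only.

```python
# Illustrative sketch of the mixture-of-recursions idea (assumed names/values,
# not the paper's code): one shared transformer block is applied recursively,
# and a small router decides how many passes each token receives, so "easy"
# tokens stop refining early while "hard" tokens get more passes.
import torch
import torch.nn as nn

class MixtureOfRecursionsSketch(nn.Module):
    def __init__(self, d_model=256, n_heads=4, max_recursions=4):
        super().__init__()
        # A single shared block reused at every step, instead of a distinct
        # block per layer as in a standard transformer stack.
        self.shared_block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        # Lightweight router that scores how many passes each token needs.
        self.router = nn.Linear(d_model, max_recursions)
        self.max_recursions = max_recursions

    def forward(self, x):  # x: (batch, seq_len, d_model)
        # Assign each token a recursion depth from 1..max_recursions.
        depths = self.router(x).argmax(dim=-1) + 1  # (batch, seq_len)
        h = x
        for step in range(1, self.max_recursions + 1):
            # Only tokens whose assigned depth reaches this step are updated;
            # the rest keep their previous representation (their early exit).
            # Note: for clarity this toy version still runs every token through
            # the block and merely masks the update; the efficiency appeal of
            # MoR comes from actually skipping that computation.
            updated = self.shared_block(h)
            active = (depths >= step).unsqueeze(-1).type_as(h)
            h = active * updated + (1 - active) * h
        return h

# Usage: refine a batch of 8 sequences of 16 token embeddings.
tokens = torch.randn(8, 16, 256)
out = MixtureOfRecursionsSketch()(tokens)
print(out.shape)  # torch.Size([8, 16, 256])
```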

In research demonstrations, the paper authors showed that MoR can drastically improve the efficiency of AI models. That means faster results with less computing power, a potentially huge deal for the industry as AI costs continue to balloon. Google parent Alphabet raised its 2025 capex projections from $75 billion to $85 billion last week, citing the need to build more data centers and servers.

But as with many research experiments, the authors cautioned that more testing is needed to validate whether their results will track at a large scale. The paper tested AI models up to 1.7 billion parameters; OpenAI's GPT-4, in comparison, is reported to have well over 1 trillion parameters.