Is this Chinese model the next big thing?

GM! Welcome to Get Into AI.
I’m your AI news sherpa. Guiding you through the mountains of news to get to the good bit.
Here’s what I have for today:
Claude Gets Integrations and Remote MCPs
Gemini 2.5 Pro Completes Pokémon Blue
Qwen Models Show Impressive Performance
First, a word from our sponsors:
Learn AI in 5 minutes a day
This is the easiest way for a busy person to learn AI in as little time as possible:
Sign up for The Rundown AI newsletter
They send you 5-minute email updates on the latest AI news and how to use it
You learn how to become 2x more productive by leveraging AI
Alright, let’s dive in!
Three major headlines
1/ Claude Gets Integrations and Remote MCPs
Claude can now connect to your world with integrations and advanced Research features, currently in beta for Max, Team, and Enterprise plans, with Pro support coming soon.
The big news is that Claude now supports remote MCPs (Model Context Protocol servers) streamed over HTTP with server-sent events (SSE), which surprised many community members even though Anthropic only recently added remote transport support to the protocol.
What's more interesting is that Atlassian has launched its own hosted remote MCP server, establishing a pattern in which MCP clients connect to first-party remote MCP servers and manage OAuth for permissions.
The community is now eagerly waiting for the first developer program to offer revenue sharing to app creators, which could be a game-changer for the ecosystem.
Either way, the MCP ecosystem just got a major power-up!
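Under the hood, remote MCP traffic rides on plain server-sent events: the server streams JSON-RPC messages as SSE frames. Here's a minimal illustrative sketch of parsing an SSE payload into discrete events (real clients would use the official MCP SDK; the frame contents below are invented for demonstration):

```python
def parse_sse(stream: str):
    """Parse a raw server-sent-events payload into (event, data) pairs.

    SSE frames are separated by blank lines; each frame carries
    'event:' and 'data:' fields. Remote MCP servers stream their
    JSON-RPC messages to clients this way.
    """
    events = []
    for frame in stream.split("\n\n"):
        event, data_lines = "message", []
        for line in frame.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        if data_lines:
            events.append((event, "\n".join(data_lines)))
    return events

# Hypothetical two-frame stream, as a remote MCP server might emit:
raw = (
    "event: endpoint\n"
    "data: /messages\n"
    "\n"
    "event: message\n"
    'data: {"jsonrpc": "2.0"}\n'
    "\n"
)
events = parse_sse(raw)
```

The blank-line framing is what lets a client process each JSON-RPC message as soon as it arrives, rather than waiting for the full response.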

2/ Gemini 2.5 Pro Completes Pokémon Blue
In gaming AI news, Sundar Pichai announced that Google's Gemini 2.5 Pro model has successfully completed Pokémon Blue.
This feat was accomplished using an autonomous agent system that processes the game state through an emulator, converting screenshots and RAM data into gridded representations that Gemini can understand and respond to.
What makes this particularly impressive is the modular pipeline that allows the AI to invoke task-specific agents like pathfinding algorithms when needed.
Even with a massively over-leveled Blastoise (level 80+), the AI still had to navigate resource constraints like running out of PP (power points) for its water-type moves during crucial battles.
The community is now discussing future benchmarks like completing the entire Pokédex or attempting a zero-human-assistance run.
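Those task-specific pathfinding agents boil down to classic graph search over the game's tile grid. A minimal sketch, assuming the emulator's RAM has already been decoded into a walkable/blocked grid (the toy map and coordinates here are invented for illustration, not taken from the actual harness):

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Breadth-first search over a tile grid (0 = walkable, 1 = blocked).

    Returns the shortest list of (row, col) steps from start to goal,
    or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cur = queue.popleft()
        if cur == goal:
            # Walk the parent pointers back to the start.
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in came_from:
                came_from[nxt] = cur
                queue.append(nxt)
    return None

# Toy map: route around a wall to reach the top-right tile.
tile_map = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
route = bfs_path(tile_map, (0, 0), (0, 2))
```

Delegating navigation to a deterministic search like this, instead of asking the language model to reason out every step, is what makes the modular pipeline practical.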

3/ Qwen Models Show Impressive Performance
The Qwen model family is turning heads with performances that seem almost too good for their parameter counts.
Community members report that models ranging from 0.6B to 32B parameters are punching well above their weight, with the tiny 0.6B model performing similarly to Llama 3.2 3B on benchmarks, and the 8B model matching GPT-4o in some tests.
On the deployment front, quantized versions of Qwen3 models (including 14B and 32B) have been released in AWQ and GGUF formats, enabling usage on limited GPU memory.
There's also excitement about Qwen3 235B being available on the Together AI API.
For those looking to run these models locally, Unsloth now supports fine-tuning of Qwen3 models with up to 8x longer context length than previous setups, with Qwen3-30B-A3B fitting in just 17.5GB VRAM.
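The arithmetic behind those memory figures is simple: weight memory is roughly parameter count × bits per weight, and quantization shrinks the bits. A back-of-envelope helper (weights only — activations and the KV cache add real-world overhead on top):

```python
def approx_weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate memory (decimal GB) for model weights alone.

    params_billion: parameter count in billions (e.g. 32 for a 32B model)
    bits_per_weight: 16 for fp16, ~4 for AWQ or 4-bit GGUF quants
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 32B model at fp16 vs. quantized to 4-bit:
fp16_gb = approx_weight_gb(32, 16)
q4_gb = approx_weight_gb(32, 4)
```

Dropping from fp16 to 4-bit cuts weight memory by 4x, which is why a 32B model that would never fit on a consumer GPU becomes runnable once quantized.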

Catch you tomorrow! ✌️
That’s it for today, folks! If you want more, be sure to follow our Twitter (@BarunBuilds).
🤝 Share Get Into AI with your friends!