Saturday, January 31, 2026

FDA Ruling Paves Way for Pharmacy Substitution of Yuflyma for Humira

Celltrion's Yuflyma receives FDA approval for interchangeability with Humira, boosting market confidence and distribution in the U.S.

Tesla’s Market Cap Dips Below $1.1 Trillion After Setback from Chinese Rival BYD

Tesla's stock fell more than 6% after BYD's robotaxi announcement, cutting its market cap to $1.057 trillion and dropping it to eighth place in market-cap rankings.

Can Your AI Draw a Pelican on a Bike? This Test Says a Lot

Amazon Nova’s depiction of Pelican on a Bicycle [Photo courtesy of Simon Willison’s Weblog]

A new benchmark for evaluating artificial intelligence (AI) models has emerged: the Pelican on a Bicycle drawing test, proposed by engineer Simon Willison. On Monday, GigaGen reported Willison’s latest analysis, presented at the AI Engineer World’s Fair in San Francisco.

The first notable performance came from Amazon’s AI model, Nova, which was launched last November.

Willison tasked Amazon’s three text generation models – Nova Micro, Nova Lite, and Nova Pro – with drawing a pelican on a bicycle. The results were disappointing, with the images being nearly indecipherable.
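The test itself is simple to reproduce: prompt a text-generation model for SVG markup of the scene, then render the result and judge it by eye. The sketch below outlines that workflow; `query_model` is a hypothetical stand-in for whatever LLM API you use, not part of Willison's tooling.

```python
# Minimal sketch of the "pelican on a bicycle" benchmark workflow.
# Assumption: your model client exposes a function that takes a prompt
# string and returns the model's text reply (here stubbed out).

PROMPT = "Generate an SVG of a pelican riding a bicycle"

def query_model(prompt: str) -> str:
    # Placeholder stand-in for a real LLM API call.
    return '<svg xmlns="http://www.w3.org/2000/svg"><!-- model output --></svg>'

def run_benchmark(model_fn, out_path: str = "pelican.svg") -> str:
    """Ask the model for SVG, sanity-check it, and save it for viewing."""
    svg = model_fn(PROMPT)
    # Basic sanity check: the reply should at least contain SVG markup.
    if "<svg" not in svg:
        raise ValueError("model did not return SVG markup")
    with open(out_path, "w") as f:
        f.write(svg)
    return svg

if __name__ == "__main__":
    run_benchmark(query_model)
```

Scoring is deliberately informal: the saved SVG is opened in a browser and judged on whether the pelican and bicycle are recognizable, which is why results vary so sharply between models.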

Meta’s AI models also fell short of expectations.

While Meta’s earlier Llama 3.1 405B model could somewhat depict a bicycle and a pelican, the newer Llama 3.3 70B failed to represent either of them accurately. Despite Llama 3.3’s ability to operate more cost-effectively with 70 billion parameters, its performance in this test lagged significantly behind its predecessor.

OpenAI’s GPT-4.1 series, including GPT-4.1 Mini and GPT-4.1 Nano, produced unstable bicycle images, yielding unsatisfactory results.

Anthropic’s Claude 3.7 Sonnet’s depiction of Pelican on a Bicycle [Photo courtesy of Simon Willison’s Weblog]

DeepSeek, however, showed remarkable improvement. Willison praised DeepSeek-R1 for its enhanced pelican depiction and easily recognizable bicycle imagery.

The standout performer was Anthropic’s Claude 3.7 Sonnet, which illustrated the pelican on a bicycle almost perfectly. Its rendition surpassed all others in accuracy for both the pelican and the bicycle.

Lastly, Gemini 2.5 Pro Preview-05-06 impressed with its flawless pelican depiction. This model had previously scored 1499.95 in visual completeness and functionality at WebDev Arena, ranking first overall. This score outperformed Claude 3.7 Sonnet by about 17% and showed significant improvement over earlier Gemini versions.
