To be completely fair, thread safety and atomics are advanced topics.

Several humans I have interviewed for engineering positions would also have a lot of trouble answering these questions. I couldn't write this code on a whiteboard without looking at the Rust library docs.

The main problem here is that the model is making up weak excuses to justify Arc<AtomicUsize>, which shows poor reasoning.
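
For context, here is a minimal sketch of the pattern under discussion: sharing a counter across threads with Arc<AtomicUsize>. The exact interview question isn't in the thread, so treat this as an assumed reconstruction, not the actual prompt:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    // Arc gives shared ownership across threads; AtomicUsize makes
    // the increments thread-safe without needing a Mutex.
    let counter = Arc::new(AtomicUsize::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    counter.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    // 4 threads x 1000 increments each
    assert_eq!(counter.load(Ordering::Relaxed), 4000);
    println!("final count: {}", counter.load(Ordering::Relaxed));
}
```

Worth noting: with scoped threads (std::thread::scope, Rust 1.63+), a plain &AtomicUsize on the stack suffices and the Arc is unnecessary. That is exactly the kind of trade-off a model justifying Arc<AtomicUsize> should be able to reason about.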

Larger models like #GPT4 should do better with my #Rust #coding questions (haven't tried yet).

Google's Gemini Pro performs even worse than the open-source models running on my modest Linux desktop:

g.co/gemini/share/cdec7f5a6c5c

Missing from the chat log is the last response, shown in the image below 🤦‍♂️

I don't have Gemini Advanced / Ultra. Is it a bit smarter than this?

#google #gemini #llm

Today I tried running Codestral, a 22B-parameter LLM tuned for coding by Mistral AI.

With my Rust mock interview questions, it performed better than all the other offline models I've tried so far.

paste.benpro.fr/?4eb8f2e158416

#coding #rust #llm

My AMD GPU running Codestral, a 22B-parameter LLM.

The gaps in resource usage occur when the model is waiting for the next prompt from me.

With this setup, a response of 543 tokens took about 14.3 seconds (38 tokens/s).
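
Sanity check on that throughput figure (trivial, but it makes the arithmetic explicit):

```rust
fn main() {
    let tokens = 543.0_f64;
    let seconds = 14.3_f64;
    // 543 / 14.3 ≈ 37.97, which rounds to the quoted 38 tokens/s
    println!("{:.1} tokens/s", tokens / seconds);
}
```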