Vision · 5 min read

Send a Bug Report Video to Claude Without Hitting the File Limit

Claude and ChatGPT don't accept video — but they accept animated GIFs. Here's the workflow to turn a 30-second screen recording into a sub-megabyte GIF an AI can actually analyze.

You've recorded a screen capture of a UI bug — a button that doesn't fire, a layout that breaks at a specific viewport, an animation that stutters. You want Claude or ChatGPT to look at it and suggest a fix. The chat upload rejects the .mov file. The .mp4 too. Even the renamed .webm.

That's by design: most chat interfaces only accept still images. The workaround is the animated GIF: technically a sequence of frames, but classified as an image and accepted everywhere. Done right, you can send 5 seconds of bug repro in under a megabyte.

The conversion target

For AI vision, aim for:

  • Width: 480 px (smaller is fine for code; UI screenshots benefit from a bit more)
  • Frame rate: 12 fps (enough for motion, half the file size of 24 fps)
  • Duration: 3–8 seconds (trim aggressively to the moment of the bug)
  • File size: under 1 MB (most chat APIs accept up to ~5 MB but downscale anyway)
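These targets imply a frame budget you can sanity-check before converting. A rough sketch, assuming ~20 KB per 480 px-wide frame after palette compression (a ballpark figure, not a guarantee from any tool):

```python
# Rough GIF size estimate from duration and frame rate.
# The per-frame figure is an assumption for 480 px UI capture;
# busy content compresses worse, static content much better.

def estimated_gif_kb(duration_s: float, fps: int = 12,
                     kb_per_frame: float = 20.0) -> float:
    """Estimate GIF size in KB: frames x cost per frame."""
    return duration_s * fps * kb_per_frame

# A 5-second clip at 12 fps is 60 frames:
print(estimated_gif_kb(5))  # 1200.0 KB, right at the 1 MB target
```

This is why duration is the lever to pull first: halving the clip halves the file, while dropping fps below 12 starts to lose the motion the model needs to see.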

The workflow

  1. Open Video to GIF for AI Vision.
  2. Drop in your MP4, MOV, or WebM. The first conversion downloads ffmpeg (~30 MB); later runs use the cached copy.
  3. Set start and end seconds — keep only the moment of the bug.
  4. Set width to 480, fps to 12, target file size to 1 MB.
  5. Download and drag straight into your AI chat.
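If you'd rather script the conversion than use the web tool, the same settings map onto a plain ffmpeg invocation. A sketch that builds the command (filenames and timestamps here are placeholders; the two-pass palette filter is the standard high-quality GIF recipe, and unlike the tool above it doesn't enforce a target file size):

```python
# Build an ffmpeg command matching the settings above:
# trim to the bug, 12 fps, 480 px wide, palette-based GIF.
# Run it with subprocess.run(cmd) if ffmpeg is on your PATH.

def gif_command(src: str, dst: str, start: float, end: float,
                width: int = 480, fps: int = 12) -> list[str]:
    # Drop the frame rate and scale first, then generate and
    # apply a color palette in a single filter graph.
    vf = (f"fps={fps},scale={width}:-1:flags=lanczos,"
          "split[a][b];[a]palettegen[p];[b][p]paletteuse")
    return ["ffmpeg", "-ss", str(start), "-to", str(end),
            "-i", src, "-vf", vf, "-loop", "0", dst]

cmd = gif_command("repro.mov", "bug.gif", start=12.0, end=17.0)
```

If the result lands over 1 MB, shorten the trim window before touching width or fps.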

The bug-report prompt template

I'm reporting a UI bug. The attached GIF shows the repro.

Expected behavior:
[what should happen]

Observed behavior:
[what actually happens — describe even though the GIF shows it]

Stack:
[framework, version, browser, OS]

Relevant code:
[paste the suspected component]

Please:
1. Describe what you see in the GIF.
2. Hypothesize the cause.
3. Suggest a fix or a debugging step to narrow it down.
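When you're filing several GIFs from the same session, it can help to fill the template programmatically. A minimal sketch; the field names and sample values are this example's own, not a required schema:

```python
# Fill the bug-report template above from a few fields.

TEMPLATE = """I'm reporting a UI bug. The attached GIF shows the repro.

Expected behavior:
{expected}

Observed behavior:
{observed}

Stack:
{stack}

Relevant code:
{code}

Please:
1. Describe what you see in the GIF.
2. Hypothesize the cause.
3. Suggest a fix or a debugging step to narrow it down."""

report = TEMPLATE.format(
    expected="Clicking Save closes the modal.",
    observed="The modal stays open; no network request fires.",
    stack="React 18, Chrome 126, macOS 14",
    code="<SaveModal onSubmit={handleSave} />",
)
```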

Why describe the bug if the GIF shows it?

Vision models are good but not perfect. Telling Claude what's wrong gives it a target to verify against the visual — much more reliable than expecting it to spot the bug unprompted. You'll get faster, better answers.

When a GIF still isn't enough

For long recordings (5+ minutes of repro steps), don't make a giant GIF. Make several short ones — one per phase of the repro — and send them in sequence. Claude can hold them all in context and reason about the flow.
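To plan the cuts, you can start from uniform chunks inside the 3–8 second sweet spot and then nudge the boundaries to match the actual phases of the repro. A sketch, assuming a simple fixed chunk length:

```python
# Split a long recording into GIF-sized (start, end) windows,
# one candidate clip per chunk. Adjust the cuts afterwards so
# each clip covers one phase of the repro.

def segments(total_s: float, chunk_s: float = 6.0) -> list[tuple[float, float]]:
    """Return (start, end) pairs covering the full recording."""
    cuts = []
    t = 0.0
    while t < total_s:
        cuts.append((t, min(t + chunk_s, total_s)))
        t += chunk_s
    return cuts

# A 20-second repro becomes four clips to convert and send in order:
print(segments(20))  # [(0.0, 6.0), (6.0, 12.0), (12.0, 18.0), (18.0, 20.0)]
```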

For audio bugs (clicks, dropouts, narration), GIF won't help. Transcribe the audio with a tool like Whisper, clean it with Transcript Cleaner, and paste alongside a screenshot of the visual state. The model can then correlate the timestamps.

One-shot videos for product feedback

The same workflow works for design feedback ("does this onboarding feel laggy?"), accessibility audits ("walk through this with screen reader X"), and copy reviews. Anything you'd normally explain in a Loom can become a GIF that an AI summarizes in seconds.

Frequently asked

Why doesn't Claude support video directly?

Most chat APIs only accept still images. Animated GIFs are technically a sequence of images and slip through. Native video support exists in Gemini and the Claude API, but not the chat UIs as of mid-2026.

What's the right frame rate for AI?

12 fps is enough for the model to understand motion. Higher rates burn file size without improving comprehension.

Can I include audio?

Not in a GIF. For audio bugs, transcribe the audio separately and paste it alongside the visual GIF.
